# Notebook from Majoburo/spaxlet
Path: docs/tutorials/deconvolution.ipynb
# Deconvolution Tutorial
## Introduction
There are several problems with the standard initialization performed in the [Quickstart Guide](../0-quickstart.ipynb):
1. The models exist in a frame with a narrow model PSF while the observed scene will have a much wider PSF. So the initial models will be spread out over a larger region, which causes more blending and an increased number of iterations for convergence.
1. The initial morphologies for `ExtendedSource`s and `MultibandSource`s are determined using a combined "detection coadd," which weights each observed image with the SED at the center of each source. Due to different seeing in each band, this results in artificial color gradients in the detection coadd that produce a less accurate initial model.
One way to solve these problems is to deconvolve the observations into the model frame where the PSF is the same in each band, resulting in more accurate initial morphologies and colors. This is not a trivial task, as deconvolution of a noisy image is an ill-defined operation and numerical divergences dominate the matching kernel when matching a wider PSF to a narrower PSF in Fourier space.
To avoid the numerical instability of deconvolution kernels created in k-space, we instead use scarlet itself to model the kernel and deconvolve the image. There is a computational cost to this procedure, and creating the deconvolution kernel for use with a single blend is not advisable, as the cost to generate it is greater than the time saved. However, there are some situations where the following procedure is quite useful, including deblending a large number of blends from survey data where the PSF is well-behaved. For example, we have experimented with HSC data and found that if we calculate the deconvolution kernel at the center of a 4k$\times$4k patch, we can use the result to deconvolve _all_ of the blends from the same coadd. This is possible because the deconvolution doesn't have to be exact; we just require it to be better for _initialization_ than the observed images._____no_output_____
<code>
# Import Packages and setup
from functools import partial
import numpy as np
import scarlet
import scarlet.display as display
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# use a good colormap and don't interpolate the pixels
matplotlib.rc('image', cmap='inferno', interpolation='none', origin='lower')_____no_output_____
</code>
## Load and Display Data
We load the same example data set used in the quickstart guide._____no_output_____
<code>
# Load the sample images
data = np.load("../../data/hsc_cosmos_35.npz")
images = data["images"]
filters = data["filters"]
catalog = data["catalog"]
weights = 1/data["variance"]
# Note that unlike in the quickstart guide,
# we set psfs to the data["psfs"] image,
# not a scarlet.PSF object.
psfs = data["psfs"]_____no_output_____
</code>
## Generate the PSF models
Unlike the [Quickstart Guide](../0-quickstart.ipynb), we cannot use the pixel-integrated model PSF because the [error function](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.erf.html) in scipy used to integrate the Gaussian goes to zero too quickly to match an observed PSF. So instead we use a Gaussian with a similar $\sigma=1/\sqrt{2}$ for our model. We then make this the _observed_ PSF, since this is the seeing that we want to deconvolve our observed images into._____no_output_____
<code>
py, px = np.array(psfs.shape[1:])//2
model_psf = scarlet.psf.gaussian(py, px, 1/np.sqrt(2), bbox=scarlet.Box(psfs.shape), integrate=False)[0]
model_psf = model_psf/model_psf.sum()
model_psf = np.array([model_psf]*psfs.shape[0])
model_frame = scarlet.Frame(psfs.shape,channels=filters)
psf_observation = scarlet.PsfObservation(model_psf, channels=filters).match(psfs)_____no_output_____
</code>
## Matching the PSFs
### Algorithm
To understand how the matching algorithm works it is useful to understand how convolutions are performed in scarlet. We can define the observed PSF $P$ by convolving the model PSF $M$ with the difference kernel $D$, giving us
$P = M * D$,
where `*` is the convolution operator. The difference kernel is calculated in k-space using the ratio $\tilde{P}/\tilde{M}$, which is well defined as long as $P$ is wider than $M$ in real space. Then the `Observation.render` method is used to convolve the model with $D$ to match it with the observed seeing.
For deconvolution we require the opposite, namely
$M = P * D$
As mentioned in the [Introduction](#Introduction), this is numerically unstable because in k-space the ratio $\tilde{M}/\tilde{P}$ diverges in the wings, since $\tilde{P}$ falls off faster than $\tilde{M}$. Modeling the deconvolution kernel with scarlet is possible because of the commutativity of the convolution operation, where
$M = D * P$.
In this case we can define $M$ as the observation we seek to match, make $D$ the model we want to fit, and then convolve the model ($D$) with $P$ in each iteration to match the "data." In this way we can fit the deconvolution kernel needed to deconvolve from the observation seeing to the model frame.
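To make the instability concrete, here is a minimal standalone sketch (added for illustration; the grid size and PSF widths are arbitrary assumptions, and none of this uses the scarlet API) that builds a wide "observed" Gaussian PSF and a narrow model PSF and inspects the naive k-space kernel $\tilde{M}/\tilde{P}$:
```python
# Minimal sketch: why naive k-space deconvolution diverges (illustrative)
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2D Gaussian PSF on a size x size grid."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-0.5 * (x ** 2 + y ** 2) / sigma ** 2)
    return psf / psf.sum()

P = gaussian_psf(41, 3.0)  # wide "observed" PSF
M = gaussian_psf(41, 0.7)  # narrow model PSF

# Naive deconvolution kernel in k-space: D_k = M_k / P_k
D_k = np.fft.fft2(M) / np.fft.fft2(P)

# P_k falls off much faster than M_k at high spatial frequencies,
# so the ratio explodes there; any noise at those frequencies
# would be amplified by the same enormous factors.
print("max |D_k| =", np.abs(D_k).max())
```
This is exactly the divergence that fitting the kernel iteratively in real space avoids.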
## An implementation
Choosing the correct parameters for PSF matching is a bit of a black art in itself, which is another reason why deconvolution should only be done when deblending large datasets, where the payoff is greater than the cost. For 41$\times$41 pixel HSC PSFs we've found the following initialization script to work well; however, the configuration for your observations may differ substantially.
We introduce the `PSFDiffKernel` class, which acts like a scarlet `Component` used to model the scene. In this case, however, there is a "source" for each band, since we want our deconvolution kernels to be monochromatic._____no_output_____
<code>
# Parameters used to initialize and configure the fit.
max_iter = 300
e_rel = 1e-5
morph_step = 1e-2
# We should be able to improve our initial guess if we model the
# width of the observed PSF and calculate an analytic solution
# for the deconvolution kernel, however for now just using the
# observed PSF works well.
init_guess = psfs.copy()
psf_kernels = [
scarlet.PSFDiffKernel(model_frame, init_guess, band, morph_step)
for band in range(len(filters))
]
psf_blend = scarlet.Blend(psf_kernels, psf_observation)
%time psf_blend.fit(max_iter, e_rel=e_rel)
plt.plot(psf_blend.loss, ".-")
plt.title("$\Delta$loss: {:.3e}, e_rel:{:.3e}".format(psf_blend.loss[-2]-psf_blend.loss[-1], (psf_blend.loss[-2]-psf_blend.loss[-1])/np.abs(psf_blend.loss[-1])))
plt.show()
for band, src in enumerate(psf_blend.sources):
residual = psfs[band]-psf_observation.render(psf_blend.get_model())[band]
print("{}: chi^2={:.3f}, max(abs)={:.3f}".format(filters[band], np.sum(residual**2), np.max(np.abs(residual))))
fig, ax = plt.subplots(1, 2, figsize=(7, 3))
ax[0].imshow(src.get_model()[band], cmap="Greys_r")
ax[0].set_title("{} band kernel".format(filters[band]))
vmax = np.max(np.abs(residual))
im = ax[1].imshow(residual, vmin=-vmax, vmax=vmax, cmap="seismic")
ax[1].set_title("residual")
plt.colorbar(im, ax=ax[1])
plt.show()_____no_output_____
</code>
The residual is created by convolving the observed PSF with the deconvolution kernel and comparing it to the model PSF. We see that the kernel isn't perfect and that it tends to overshoot the center of the model PSF, but the result is good enough to improve our initialization. One thing that we've noticed is that if the relative error criterion stops the fit too early, the ringing in the wings of bright objects is too large, while running for too long makes the images crisper at the cost of amplifying the noise to the point where the result isn't useful for faint (and even moderately faint) sources.
We now create the frame for our model, using an analytic PSF, and an observation for the deconvolved image. The latter is a `DeconvolvedObservation`, which sets the deconvolution kernel._____no_output_____
<code>
# This is the frame for our model
model_psf = scarlet.PSF(partial(scarlet.psf.gaussian, sigma=1/np.sqrt(2)), shape=(None, 11, 11))
model_frame = scarlet.Frame(
images.shape,
psfs=model_psf,
channels=filters)
# This object will perform the deconvolution
deconvolved = scarlet.DeconvolvedObservation(
images,
psfs=model_psf,
weights=weights,
channels=filters).match(model_frame, psf_blend.get_model())
# These are the observations that we want to model
observation = scarlet.Observation(
images,
psfs=scarlet.PSF(psfs),
weights=weights,
channels=filters).match(model_frame)_____no_output_____
</code>
Let's take a look at the result:_____no_output_____
<code>
model = deconvolved.images
fig, ax = plt.subplots(1, 2, figsize=(15,7))
norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)
rgb = display.img_to_rgb(images, norm=norm)
ax[0].imshow(rgb)
ax[0].set_title("Observed")
for center in catalog:
ax[0].plot(center[1], center[0], "wx")
norm = display.AsinhMapping(minimum=np.min(model), stretch=np.max(model)*0.055, Q=10)
rgb = display.img_to_rgb(model, norm=norm)
ax[1].imshow(rgb)
ax[1].set_title("Deconvolved")
for center in catalog:
ax[1].plot(center[1], center[0], "wx")
plt.show()_____no_output_____
</code>
In this case the result isn't great due to the bright star at the center. We could try to fit the model a bit better to suppress the ringing, but it turns out this is usually unnecessary and not worth the extra computation time.
To see how this is useful, let's take a look at the detection coadds for the first few sources with and without deconvolution. These detection coadds are built internally for all extended and multiband sources, but it's a useful exercise to build them separately just to take a look at them. The red x's in the plots below mark the location of the source whose SED was used to make that particular detection coadd:_____no_output_____
<code>
# We just define a rough estimate of the background RMS needed
# for `build_detection_coadd`.
bg_rms=np.zeros((len(images),))
bg_rms[:] = 1e-3
for center in catalog[:4]:
center = (center[1], center[0])
figure, ax = plt.subplots(1, 2, figsize=(10, 5))
# Build the deconvolved coadd
sed = scarlet.source.get_psf_sed(center, deconvolved, model_frame)
detect, bg_cutoff = scarlet.source.build_detection_coadd(sed, bg_rms, deconvolved)
# display
ax[1].imshow(np.log10(detect), cmap="Greys_r")
ax[1].plot(center[1], center[0], "rx")
ax[1].set_title("deconvolved detection coadd")
# Build the coadd without deconvolution
sed = scarlet.source.get_psf_sed(center, observation, model_frame)
detect, bg_cutoff = scarlet.source.build_detection_coadd(sed, bg_rms, observation)
#display
ax[0].imshow(np.log10(detect), cmap="Greys_r")
ax[0].plot(center[1], center[0], "rx")
ax[0].set_title("detection coadd")
plt.show()_____no_output_____
</code>
We see that the ringing in the PSF doesn't really matter, as it's at the same amplitude as the noise and our initial requirement of monotonicity will trim the model to the inner region that doesn't ring, achieving our goal of making the initial models compact and allowing them to grow if necessary. So next we'll initialize our sources using both the deconvolved and original observations and compare them:_____no_output_____
<code>
# Build the sources without deconvolution
sources = []
for k,src in enumerate(catalog):
if k == 1:
new_source = scarlet.MultiComponentSource(model_frame, (src['y'], src['x']), observation)
else:
new_source = scarlet.ExtendedSource(model_frame, (src['y'], src['x']), observation)
sources.append(new_source)
# Build the convolved sources
deconvolved_sources = []
for k,src in enumerate(catalog):
if k == 1:
new_source = scarlet.MultiComponentSource(model_frame, (src['y'], src['x']), deconvolved)
else:
new_source = scarlet.ExtendedSource(model_frame, (src['y'], src['x']), deconvolved)
deconvolved_sources.append(new_source)_____no_output_____
norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)
display.show_sources(sources[:4],
norm=norm,
observation=observation,
show_rendered=True,
show_observed=True)
plt.show()
display.show_sources(deconvolved_sources[:3],
norm=norm,
observation=observation,
show_rendered=True,
show_observed=True)
plt.show()_____no_output_____
</code>
Notice that the deconvolved initial models use much smaller boxes while still capturing all of the features in the true observations. The better initial guess and smaller boxes will make it much faster to deblend:_____no_output_____
<code>
# Fit the non-deconvolved blend
blend = scarlet.Blend(sources, observation)
%time blend.fit(200)
print("scarlet ran for {0} iterations to logL = {1}".format(len(blend.loss), -blend.loss[-1]))
plt.plot(-np.array(blend.loss))
plt.title("Regular initialization")
plt.xlabel('Iteration')
plt.ylabel('log-Likelihood')
plt.show()_____no_output_____
# Fit the deconvolved blend
deconvolved_blend = scarlet.Blend(deconvolved_sources, observation)
%time deconvolved_blend.fit(200)
print("scarlet ran for {0} iterations to logL = {1}".format(len(deconvolved_blend.loss), -deconvolved_blend.loss[-1]))
plt.plot(-np.array(deconvolved_blend.loss))
plt.title("Deconvolved initialization")
plt.xlabel('Iteration')
plt.ylabel('log-Likelihood')
plt.show()_____no_output_____
</code>
So we see that using the deconvolved images for initialization cut our runtime in half for this particular blend (this difference might not be as pronounced in the notebook environment because the default initialization is executed first, heating up the processors before the second blend is run). Looking at the residuals we see that the final models are comparable, so when the same kernel can be used on multiple blends this method proves to be quite useful._____no_output_____
<code>
norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)
# Display the convolved model
scarlet.display.show_scene(blend.sources,
norm=norm,
observation=observation,
show_rendered=True,
show_observed=True,
show_residual=True)
plt.show()
# Display the deconvolved model
scarlet.display.show_scene(deconvolved_blend.sources,
norm=norm,
observation=observation,
show_rendered=True,
show_observed=True,
show_residual=True)
plt.show()_____no_output_____
</code>
| {
"repository": "Majoburo/spaxlet",
"path": "docs/tutorials/deconvolution.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 19298,
"hexsha": "d0f3ba5b9ca8d6647c85722362a0a43c05fbb4ce",
"max_line_length": 896,
"avg_line_length": 44.9836829837,
"alphanum_fraction": 0.6206860815
} |
# Notebook from Messicka/LSSTC-DSFP-Sessions
Path: Sessions/Session13/Day2/02-Fast-GPs.ipynb
# Fast GP implementations_____no_output_____
<code>
%matplotlib inline_____no_output_____
%config InlineBackend.figure_format = 'retina'_____no_output_____
from matplotlib import rcParams
rcParams["figure.dpi"] = 100
rcParams["figure.figsize"] = 12, 4_____no_output_____
</code>
## Benchmarking GP codes
Implemented the right way, GPs can be super fast! Let's compare the time it takes to evaluate our GP likelihood and the time it takes to evaluate the likelihood computed with the snazzy ``george`` and ``celerite`` packages. We'll learn how to use both along the way. Let's create a large, fake dataset for these tests:_____no_output_____
<code>
import numpy as np
np.random.seed(0)
t = np.linspace(0, 10, 10000)
y = np.random.randn(10000)
sigma = np.ones(10000)_____no_output_____
</code>
### Our GP_____no_output_____
<code>
def ExpSquaredCovariance(t, A=1.0, l=1.0, tprime=None):
"""
Return the ``N x M`` exponential squared
covariance matrix.
"""
if tprime is None:
tprime = t
TPrime, T = np.meshgrid(tprime, t)
return A ** 2 * np.exp(-0.5 * (T - TPrime) ** 2 / l ** 2)
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
"""
Return the log of the GP likelihood for a dataset y(t)
with uncertainties sigma, modeled with a Squared Exponential
Kernel with amplitude A and lengthscale l.
"""
# The covariance and its determinant
npts = len(t)
K = ExpSquaredCovariance(t, A=A, l=l) + sigma ** 2 * np.eye(npts)
# The log marginal likelihood
log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
log_like -= 0.5 * np.linalg.slogdet(K)[1]
log_like -= 0.5 * npts * np.log(2 * np.pi)
return log_like_____no_output_____
</code>
Time to evaluate the GP likelihood:_____no_output_____
<code>
%%time
ln_gp_likelihood(t, y, sigma)
CPU times: user 1min 41s, sys: 3.82 s, total: 1min 45s
Wall time: 18 s
</code>
### george_____no_output_____
Let's time how long it takes to do the same operation using the ``george`` package (``pip install george``).
The kernel we'll use is
```python
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
```
where ``amp = 1`` and ``tau = 1`` in this case.
To instantiate a GP using ``george``, simply run
```python
gp = george.GP(kernel)
```
The ``george`` package pre-computes a lot of matrices that are re-used in different operations, so before anything else, we'll ask it to compute the GP model for our timeseries:
```python
gp.compute(t, sigma)
```
Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!
Finally, the log likelihood is given by ``gp.log_likelihood(y)``.
How do the speeds compare? Did you get the same value of the likelihood?_____no_output_____
<code>
import george_____no_output_____
%%time
kernel = george.kernels.ExpSquaredKernel(1.0)
gp = george.GP(kernel)
gp.compute(t, sigma)
CPU times: user 24.2 s, sys: 690 ms, total: 24.9 s
Wall time: 4.32 s
%%time
print(gp.log_likelihood(y))
-14095.32136897017
CPU times: user 294 ms, sys: 50.5 ms, total: 345 ms
Wall time: 128 ms
</code>
``george`` also offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Let's instantiate the GP object again by passing the keyword ``solver=george.HODLRSolver`` and re-compute the log likelihood. How long did that take? Did we get the same value for the log likelihood?_____no_output_____
<code>
%%time
gp = george.GP(kernel, solver=george.HODLRSolver)
gp.compute(t, sigma)
CPU times: user 42.8 ms, sys: 41.8 ms, total: 84.7 ms
Wall time: 84.2 ms
%%time
gp.log_likelihood(y)
CPU times: user 6.74 ms, sys: 3.29 ms, total: 10 ms
Wall time: 8.94 ms
</code>
### celerite_____no_output_____
The ``george`` package is super useful for GP modeling, and I recommend you read over the [docs and examples](https://george.readthedocs.io/en/latest/). It implements several different [kernels](https://george.readthedocs.io/en/latest/user/kernels/) that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then ``celerite`` is what it's all about:
```bash
pip install celerite
```
Check out the [docs](https://celerite.readthedocs.io/en/stable/) here, as well as several tutorials. There is also a [paper](https://arxiv.org/abs/1703.09710) that discusses the math behind ``celerite``. The basic idea is that for certain families of kernels, there exist **extremely efficient** methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, ``celerite`` is able to do everything in order $N$ (!!!) This is a **huge** advantage, especially for datasets with tens or hundreds of thousands of data points. Using ``george`` or any homebuilt GP model for datasets larger than about ``10,000`` points is simply intractable, but with ``celerite`` you can do it in a breeze.
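As a rough, hand-rolled check of that scaling claim (the grid sizes and timing loop below are illustrative assumptions, and absolute numbers depend on your machine), you can time a single likelihood evaluation for increasing $N$ and watch the cost grow roughly linearly:
```python
# Rough O(N) scaling check for celerite (illustrative, machine-dependent)
import time
import numpy as np
import celerite
from celerite import terms

for N in [1_000, 10_000, 100_000]:
    t_grid = np.linspace(0, 10, N)
    y_fake = np.random.randn(N)
    gp = celerite.GP(terms.Matern32Term(np.log(1), np.log(1)))
    gp.compute(t_grid, np.ones(N))   # O(N) factorization
    tic = time.perf_counter()
    gp.log_likelihood(y_fake)        # O(N) solve
    print(N, time.perf_counter() - tic)
```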
Next we repeat the timing tests, but this time using ``celerite``. Note that the Exponential Squared Kernel is not available in ``celerite``, because it doesn't have the special form needed to make its factorization fast. Instead, we'll use the ``Matern 3/2`` kernel, which is qualitatively similar and can be approximated quite well in terms of the ``celerite`` basis functions:
```python
kernel = celerite.terms.Matern32Term(np.log(1), np.log(1))
```
Note that ``celerite`` accepts the **log** of the amplitude and the **log** of the timescale. Other than this, we can compute the likelihood using the same syntax as ``george``.
How much faster did it run? Is the value of the likelihood different from what you found above? Why?_____no_output_____
<code>
import celerite
from celerite import terms_____no_output_____
%%time
kernel = terms.Matern32Term(np.log(1), np.log(1))
gp = celerite.GP(kernel)
gp.compute(t, sigma)_____no_output_____
%%time
gp.log_likelihood(y)_____no_output_____
</code>
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise (the one and only)</h1>
</div>
Let's use what we've learned about GPs in a real application: fitting an exoplanet transit model in the presence of correlated noise.
Here is a (fictitious) light curve for a star with a transiting planet: _____no_output_____
<code>
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");_____no_output_____
</code>
There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. **Your task is to verify this claim.**
Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.
Fit the transit with a simple inverted Gaussian with three free parameters:
```python
def transit_shape(depth, t0, dur):
return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
```
*HINT: I borrowed heavily from [this tutorial](https://celerite.readthedocs.io/en/stable/tutorials/modeling/) in the celerite documentation, so you might want to take a look at it...*_____no_output_____
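One possible starting point (a sketch under assumed parameter names, initial guesses, and bounds; not the official solution) is to fold the transit shape into the GP mean and maximize the joint likelihood over the transit parameters and the Matern-3/2 hyperparameters:
```python
# Sketch: joint maximum-likelihood fit of transit + correlated noise.
# The parameter vector, p0, and bounds are illustrative assumptions.
import numpy as np
import celerite
from celerite import terms
from scipy.optimize import minimize

def neg_log_like(params, t, y, yerr):
    log_sigma, log_rho, depth, t0, dur = params
    gp = celerite.GP(terms.Matern32Term(log_sigma, log_rho))
    gp.compute(t, yerr)
    transit = -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
    # GP likelihood of the residuals after removing the transit shape
    return -gp.log_likelihood(y - transit)

p0 = [0.0, 0.0, 0.5, 0.0, 0.5]  # log_sigma, log_rho, depth, t0, dur
bounds = [(-5, 5), (-5, 5), (0, 1), (t.min(), t.max()), (0.1, 1)]
soln = minimize(neg_log_like, p0, args=(t, y, yerr), bounds=bounds)
print(soln.x)
```
From there, sampling the posterior of `t0` (e.g., with `emcee`) is what would let you test whether a transit at $t=0$ is really excluded at 3$\sigma$.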
| {
"repository": "Messicka/LSSTC-DSFP-Sessions",
"path": "Sessions/Session13/Day2/02-Fast-GPs.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 13266,
"hexsha": "d0f3cc32bd868dfbc40efaa1d6cccf757169e796",
"max_line_length": 758,
"avg_line_length": 30.356979405,
"alphanum_fraction": 0.5654304236
} |
# Notebook from KarrLab/bcforms
Path: examples/1. Introductory tutorial.ipynb
`BcForms` is a toolkit for concretely describing the primary structure of macromolecular complexes, including non-canonical monomeric forms and intra- and inter-subunit crosslinks. `BcForms` includes a textual grammar for describing complexes and a Python library, a command line program, and a REST API for validating and manipulating complexes described in this grammar. `BcForms` represents complexes as sets of subunits, with their stoichiometries, and covalent crosslinks which link the subunits. DNA, RNA, and protein subunits can be represented using `BpForms`. Small molecule subunits can be represented using `openbabel.OBMol`, typically imported from SMILES or InChI._____no_output_____
This notebook illustrates how to use the `BcForms` Python library via some simple examples. Please see the second tutorial for more details and more examples. Please also see the [documentation](https://docs.karrlab.org/bcforms/) for more information about the `BcForms` grammar and more instructions for using the `BcForms` website, JSON REST API, and command line interface._____no_output_____
# Import BpForms and BcForms libraries_____no_output_____
<code>
import bcforms
import bpforms_____no_output_____
</code>
# Create complexes from their string representations_____no_output_____
<code>
form_1 = bcforms.BcForm().from_str('2 * subunit_a + 3 * subunit_b')
form_1.set_subunit_attribute('subunit_a', 'structure',
bpforms.ProteinForm().from_str('CAAAAAAAA'))
form_1.set_subunit_attribute('subunit_b', 'structure',
bpforms.ProteinForm().from_str('AAAAAAAAC'))_____no_output_____
form_2 = bcforms.BcForm().from_str(
'2 * subunit_a'
'| x-link: [type: disulfide | l: subunit_a(1)-1 | r: subunit_a(2)-1]')
form_2.set_subunit_attribute('subunit_a', 'structure',
bpforms.ProteinForm().from_str('CAAAAAAAA'))_____no_output_____
</code>
# Create complexes programmatically_____no_output_____
<code>
form_1_b = bcforms.BcForm()
form_1_b.subunits.append(bcforms.core.Subunit('subunit_a', 2,
bpforms.ProteinForm().from_str('CAAAAAAAA')))
form_1_b.subunits.append(bcforms.core.Subunit('subunit_b', 3,
bpforms.ProteinForm().from_str('AAAAAAAAC')))_____no_output_____
form_2_b = bcforms.BcForm()
subunit = bcforms.core.Subunit('subunit_a', 2,
bpforms.ProteinForm().from_str('CAAAAAAAA'))
form_2_b.subunits.append(subunit)
form_2_b.crosslinks.append(bcforms.core.OntologyCrosslink(
'disulfide', 'subunit_a', 1, 'subunit_a', 1, 1, 2))_____no_output_____
</code>
# Get properties of polymers_____no_output_____
## Subunits_____no_output_____
<code>
form_1.subunits_____no_output_____
</code>
## Crosslinks_____no_output_____
<code>
form_2.crosslinks _____no_output_____
</code>
# Get the string representation of a complex_____no_output_____
<code>
str(form_1_b) _____no_output_____
</code>
# Check equality of complexes_____no_output_____
<code>
form_1_b.is_equal(form_1)_____no_output_____
</code>
# Calculate properties of a complex_____no_output_____
## Molecular structure_____no_output_____
<code>
form_1.get_structure()[0]_____no_output_____
</code>
## SMILES representation_____no_output_____
<code>
form_1.export('smiles') _____no_output_____
</code>
## Formula_____no_output_____
<code>
form_1.get_formula()_____no_output_____
</code>
## Charge_____no_output_____
<code>
form_1.get_charge()_____no_output_____
</code>
## Molecular weight_____no_output_____
<code>
form_1.get_mol_wt()_____no_output_____
</code>
| {
"repository": "KarrLab/bcforms",
"path": "examples/1. Introductory tutorial.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 9792,
"hexsha": "d0f4c13d9c72efec4c57e32d3bb48c173f1bca9f",
"max_line_length": 723,
"avg_line_length": 23.0943396226,
"alphanum_fraction": 0.5295138889
} |
# Notebook from sanchobarriga/course-content
Path: tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Neuromatch Academy: Week 3, Day 2, Tutorial 1
# Neuronal Network Dynamics: Neural Rate Models
_____no_output_____## Background
The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is a very large network of densely interconnected neurons.
The activity of neurons is constantly evolving in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of views include information processing, network science, and statistical models). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study neuronal dynamics if we want to understand the brain.
In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.
## Objectives
In this tutorial we will learn how to build a firing rate model of a single population of excitatory neurons.
Steps:
- Write the equation for the firing rate dynamics of a 1D excitatory population.
- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.
- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system.
- Investigate the stability of the fixed points by linearizing the dynamics around them.
_____no_output_____# Setup_____no_output_____
<code>
# Imports
import matplotlib.pyplot as plt # import matplotlib
import numpy as np # import numpy
import scipy.optimize as opt # import root-finding algorithm
import ipywidgets as widgets # interactive display_____no_output_____
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = 6, 4
my_fontsize = 16
my_params = {'axes.labelsize': my_fontsize,
'axes.titlesize': my_fontsize,
'figure.figsize': [fig_w, fig_h],
'font.size': my_fontsize,
'legend.fontsize': my_fontsize-4,
'lines.markersize': 8.,
'lines.linewidth': 2.,
'xtick.labelsize': my_fontsize-2,
'ytick.labelsize': my_fontsize-2}
plt.rcParams.update(my_params)_____no_output_____
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6,4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()_____no_output_____
#@title Helper functions
def plot_dE_E(E, dEdt):
plt.figure()
plt.plot(E_grid, dEdt, 'k')
plt.plot(E_grid, 0.*E_grid, 'k--')
plt.xlabel('E activity')
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x,dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('dF(x)', fontsize=14.)
plt.show()
_____no_output_____
</code>
# Neuronal network dynamics_____no_output_____
<code>
#@title Video: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ZSsAaeaG9ZM", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
Video available at https://youtube.com/watch?v=ZSsAaeaG9ZM
</code>
## Dynamics of a single excitatory population
Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of different network parameters.
\begin{align}
\tau_E \frac{dE}{dt} &= -E + F(w_{EE}E + I^{\text{ext}}_E) \quad\qquad (1)
\end{align}
$E(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau_E$ controls the timescale of the evolution of the average firing rate, $w_{EE}$ denotes the strength (synaptic weight) of the recurrent excitatory input to the population, $I^{\text{ext}}_E$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.
To start building the model, please execute the cell below to initialize the simulation parameters._____no_output_____
<code>
#@title Default parameters for a single excitatory population model
def default_parsE( **kwargs):
pars = {}
### Excitatory parameters ###
pars['tau_E'] = 1. # Timescale of the E population [ms]
pars['a_E'] = 1.2 # Gain of the E population
pars['theta_E'] = 2.8 # Threshold of the E population
### Connection strength ###
pars['wEE'] = 0. # E to E, we first set it to 0
### External input ###
pars['I_ext_E'] = 0.
### simulation parameters ###
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['E_init'] = 0.2 # Initial value of E
### External parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]
return pars_____no_output_____
</code>
You can use:
- `pars = default_parsE()` to get all the parameters, and then you can execute `print(pars)` to check these parameters.
- `pars = default_parsE(T=T_sim, dt=time_step)` to set new simulation time and time step
- After `pars = default_parsE()`, use `pars['New_para'] = value` to add a new parameter with its value_____no_output_____
## F-I curves
In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.
The transfer function $F(\cdot)$ in Equation (1) represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values.
A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.
$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$
The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.
Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$._____no_output_____### Exercise 1: Implement F-I curve
Let's first investigate the activation functions before simulating the dynamics of the entire population.
In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters._____no_output_____
<code>
# Exercise 1
def F(x,a,theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################################################
## TODO for students: compute f = F(x), remove the NotImplementedError once done#
#################################################################################
# the exponential function: np.exp(.)
# f = ...
raise NotImplementedError("Student excercise: implement the f-I function")
return f
# Uncomment these lines when you've filled the function, then run the cell again
# to plot the f-I curve.
pars = default_parsE() # get default parameters
# print(pars) # print out pars to get familiar with parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment this when you fill the exercise, and call the function
# plot_fI(x, F(x,pars['a_E'],pars['theta_E']))_____no_output_____
# to_remove solution
def F(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
the population activation response F(x) for input x
"""
# add the expression of f = F(x)
f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1
return f
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
with plt.xkcd():
plot_fI(x, F(x,pars['a_E'],pars['theta_E']))
</code>
### Interactive Demo: Parameter exploration of F-I curve
Here's an interactive demo that shows how the F-I curve is changing for different values of the gain and threshold parameters.
**Remember to enable the demo by running the cell.**_____no_output_____
<code>
#@title F-I curve Explorer
def interactive_plot_FI(a, theta):
'''
Population activation function.
Args:
a : the gain of the function
theta : the threshold of the function
Returns:
plot the F-I curve with give parameters
'''
# set the range of input
x = np.arange(0,10,.1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14.)
plt.ylabel('F(x)', fontsize=14.)
plt.show()
_ = widgets.interact(interactive_plot_FI, a = (0.3, 3., 0.3), \
theta = (2., 4., 0.2)) _____no_output_____
</code>
## Simulation scheme of E dynamics
Because $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation (1) can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:
\begin{align}
&\frac{dE}{dt} \approx \frac{E[k+1]-E[k]}{\Delta t}
\end{align}
where $E[k] = E(k\Delta t)$.
Thus,
$$\Delta E[k] = \frac{\Delta t}{\tau_E}[-E[k] + F(w_{EE}E[k] + I^{\text{ext}}_E(k;a_E,\theta_E)]$$
Hence, Equation (1) is updated at each time step by:
$$E[k+1] = E[k] + \Delta E[k]$$
**_Please execute the following cell to enable the WC simulator_**_____no_output_____
<code>
#@title E population simulator: `simulate_E`
def simulate_E(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
E : Activity of excitatory population (array)
"""
# Set parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
E_init = pars['E_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
E = np.zeros(Lt)
E[0] = E_init
I_ext_E = I_ext_E*np.ones(Lt)
# Update the E activity
for k in range(Lt-1):
dE = dt/tau_E * (-E[k] + F(wEE*E[k]+I_ext_E[k], a_E, theta_E))
E[k+1] = E[k] + dE
return E
print(help(simulate_E))
Help on function simulate_E in module __main__:
simulate_E(pars)
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
E : Activity of excitatory population (array)
None
</code>
#### Interactive Demo: Parameter Exploration of single population dynamics
Note that $w_{EE}=0$, as in the default setting, means no recurrent input to the excitatory population in Equation (1). Hence, the dynamics is entirely determined by the external input $I_{E}^{\text{ext}}$. Try to explore how $E_{sim}(t)$ changes with different $I_{E}^{\text{ext}}$ and $\tau_E$ parameter values, and investigate the relationship between $F(I_{E}^{\text{ext}}; a_E, \theta_E)$ and the steady value of E. Note that, $E_{ana}(t)$ denotes the analytical solution._____no_output_____
<code>
#@title Mean-field model Explorer
# get default parameters
pars = default_parsE(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau_E):
# set external input and time constant
pars['I_ext_E'] = I_ext
pars['tau_E'] = tau_E
# simulation
E = simulate_E(pars)
# Analytical Solution
E_ana = pars['E_init'] + (F(I_ext,pars['a_E'],pars['theta_E'])-pars['E_init'])*\
(1.-np.exp(-pars['range_t']/pars['tau_E']))
# plot
plt.figure()
plt.plot(pars['range_t'], E, 'b', label=r'$E_{\mathrm{sim}}$(t)', alpha=0.5, zorder=1)
plt.plot(pars['range_t'], E_ana, 'b--', lw=5, dashes=(2,2),\
label=r'$E_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'], F(I_ext,pars['a_E'],pars['theta_E'])\
*np.ones(pars['range_t'].size), 'k--', label=r'$F(I_E^{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext = (0.0, 10., 1.),\
tau_E = (1., 5., 0.2))
_____no_output_____
</code>
### Think!
Above, we have numerically solved a system that is driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**try changing the value of $w_{EE}$ to a positive number**). Yet, $E(t)$ either decays to zero or reaches a fixed non-zero value.
- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that E(t) stays finite?
- Which parameter would you change in order to increase the maximum value of the response? _____no_output_____## Fixed points of the E system
_____no_output_____
<code>
#@title Video: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="B31fX6V0PZ4", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
Video available at https://youtube.com/watch?v=B31fX6V0PZ4
</code>
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($E$) is zero, i.e. $\frac{dE}{dt}=0$.
We can find the steady state of Equation $1$ by setting $\displaystyle{\frac{dE}{dt}=0}$ and solving for $E$:
$$-E_{\text{steady}} + F(w_{EE}E_{\text{steady}} + I^{\text{ext}}_E;a_E,\theta_E) = 0, \qquad (3)$$
When it exists, the solution of Equation $3$ defines a **fixed point** of the dynamics, which satisfies $\displaystyle{\frac{dE}{dt}=0}$ (and determines the steady state of the system). Notice that the right-hand side of the last equation depends itself on $E_{\text{steady}}$. If $F(x)$ is nonlinear, it is not always possible to find an analytical solution; the fixed points can instead be found via numerical simulations, as we will do later.
From the Interactive Demo one could also notice that the value of $\tau_E$ influences how quickly the activity will converge to the steady state from its initial value.
In the specific case of $w_{EE}=0$, we can also analytically compute the analytical solution of Equation $1$ (i.e., the thick blue dashed line) and deduce the role of $\tau_E$ in determining the convergence to the fixed point:
$$E(t) = \big[F(I^{\text{ext}}_E;a_E,\theta_E) - E(t=0)\big]\,\big(1-\text{e}^{-\frac{t}{\tau_E}}\big) + E(t=0)$$
We can now numerically calculate the fixed point with the `scipy.optimize.root` function.
<font size=3><font color='gray'>_(note that at the very beginning, we `import scipy.optimize as opt` )_</font></font>.
Please execute the cell below to define the functions `my_fpE`, `check_fpE`, and `plot_fpE`_____no_output_____
<code>
#@title Function of calculating the fixed point
def my_fpE(pars, E_init):
# get the parameters
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# define the right hand of E dynamics
def my_WCr(x):
E = x[0]
dEdt=(-E + F(wEE*E+I_ext_E,a_E,theta_E))
y = np.array(dEdt)
return y
x0 = np.array(E_init)
x_fp = opt.root(my_WCr, x0).x
return x_fp
def check_fpE(pars, x_fp):
a_E, theta_E = pars['a_E'], pars['theta_E']
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
# calculate Equation(3)
y = x_fp- F(wEE*x_fp+I_ext_E, a_E, theta_E)
return np.abs(y)<1e-4
def plot_fpE(pars, x_fp, mycolor):
wEE = pars['wEE']
I_ext_E = pars['I_ext_E']
plt.plot(wEE*x_fp+I_ext_E, x_fp, 'o', color=mycolor)
_____no_output_____
</code>
#### Exercise 2: Visualization of the fixed point
When no analytical solution of Equation $3$ can be found, it is often useful to plot $\displaystyle{\frac{dE}{dt}=0}$ as a function of $E$. The values of E for which the plotted function crosses zero on the y axis correspond to fixed points.
Here, let us, for example, set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$. Define $\displaystyle{\frac{dE}{dt}}$ using Equation $1$, plot the result, and check for the presence of fixed points.
We will now try to find the fixed points using the previously defined function `my_fpE(pars, E_init)` with different initial values ($E_{\text{init}}$). Use the previously defined function `check_fpE(pars, x_fp)` to verify that the values of $E$ for which $\displaystyle{\frac{dE}{dt}} = 0$ are the true fixed points._____no_output_____
<code>
# Exercise 2
pars = default_parsE() # get default parameters
# set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
E_grid = np.linspace(0, 1., 1000)# give E_grid
#figure, line (E, dEdt)
###############################
## TODO for students: #
## Calculate dEdt = -E + F(.) #
## Then plot the lines #
###############################
# Calculate dEdt
# dEdt = ...
# Uncomment this to plot the dEdt across E
# plot_dE_E(E_grid, dEdt)
# Add fixed point
#####################################################
## TODO for students: #
# Calculate the fixed point with your initial value #
# verify your fixed point and plot the corret ones #
#####################################################
# Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 1)
#check if x_fp is the intersection of the lines with the given function check_fpE(pars, x_fp)
#vary different initial values to find the correct fixed point (Should be 3)
# Use blue, red and yellow colors, respectively ('b', 'r', 'y' codenames)
# if check_fpE(pars, x_fp_1):
# plt.plot(x_fp_1, 0, 'bo', ms=8)
# Replicate the two lines above for all fixed points.
_____no_output_____# to_remove solution
pars = default_parsE() # get default parameters
#set your external input and wEE
pars['I_ext_E'] = 0.5
pars['wEE'] = 5.0
# give E_grid
E_grid = np.linspace(0, 1., 1000)
# Calculate dEdt
dEdt = -E_grid + F(pars['wEE']*E_grid+pars['I_ext_E'], pars['a_E'], pars['theta_E'])
with plt.xkcd():
plot_dE_E(E_grid, dEdt)
#Calculate the fixed point with your initial value
x_fp_1 = my_fpE(pars, 0.)
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
x_fp_2 = my_fpE(pars, 0.4)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'ro', ms=8)
x_fp_3 = my_fpE(pars, 0.9)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'yo', ms=8)
plt.show()
</code>
#### Interactive Demo: fixed points as a function of recurrent and external inputs.
You can now explore how the previous plot changes when the recurrent coupling $w_{\text{EE}}$ and the external input $I_E^{\text{ext}}$ take different values._____no_output_____
<code>
#@title Fixed point Explorer
def plot_intersection_E(wEE, I_ext_E):
#set your parameters
pars['wEE'] = wEE
pars['I_ext_E'] = I_ext_E
#note that wEE !=0
if wEE>0:
# find fixed point
x_fp_1 = my_fpE(pars, 0.)
x_fp_2 = my_fpE(pars, 0.4)
x_fp_3 = my_fpE(pars, 0.9)
plt.figure()
E_grid = np.linspace(0, 1., 1000)
dEdt = -E_grid + F(wEE*E_grid+I_ext_E, pars['a_E'], pars['theta_E'])
plt.plot(E_grid, dEdt, 'k')
plt.plot(E_grid, 0.*E_grid, 'k--')
if check_fpE(pars, x_fp_1):
plt.plot(x_fp_1, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_2):
plt.plot(x_fp_2, 0, 'bo', ms=8)
if check_fpE(pars, x_fp_3):
plt.plot(x_fp_3, 0, 'bo', ms=8)
plt.xlabel('E activity', fontsize=14.)
plt.ylabel(r'$\frac{dE}{dt}$', fontsize=18.)
plt.show()
_ = widgets.interact(plot_intersection_E, wEE = (1., 7., 0.2), \
I_ext_E = (0., 3., 0.1)) _____no_output_____
</code>
## Summary
In this tutorial, we have investigated the dynamics of a rate-based single excitatory population of neurons.
We learned about:
- The effect of the input parameters and the time constant of the network on the dynamics of the population.
- How to find the fixed point(s) of the system.
Next, we have two Bonus, but important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:
- How to determine the stability of a fixed point by linearizing the system.
- How to add realistic inputs to our model._____no_output_____## Bonus 1: Stability of a fixed point_____no_output_____
<code>
#@title Video: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nvxxf59w2EA", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
Video available at https://youtube.com/watch?v=nvxxf59w2EA
</code>
#### Initial values and trajectories
Here, let us first set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, and investigate the dynamics of $E(t)$ starting with different initial values $E(0) \equiv E_{\text{init}}$. We will plot the trajectories of $E(t)$ with $E_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$._____no_output_____
<code>
#@title Initial values
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
plt.figure(figsize=(10,6))
for ie in range(10):
pars['E_init'] = 0.1*ie # set the initial value
E = simulate_E(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], E, 'b', alpha=0.1 + 0.1*ie, label= r'E$_{\mathrm{init}}$=%.1f' % (0.1*ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel('E(t)')
plt.legend(loc=[0.72, 0.13], fontsize=14)
plt.show()
_____no_output_____
</code>
#### Interactive Demo: dynamics as a function of the initial value.
Let's now set $E_{init}$ to a value of your choice in this demo. How does the solution change? What do you observe?_____no_output_____
<code>
#@title Initial value Explorer
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def plot_E_diffEinit(E_init):
pars['E_init'] = E_init
E = simulate_E(pars)
plt.figure()
plt.plot(pars['range_t'], E, 'b', label='E(t)')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('E activity', fontsize=16.)
plt.show()
_ = widgets.interact(plot_E_diffEinit, E_init = (0., 1., 0.02)) _____no_output_____
</code>
### Stability analysis via linearization of the dynamics
Just like Equation $1$ in the case ($w_{EE}=0$) discussed above, a generic linear system
$$\frac{dx}{dt} = \lambda (x - b),$$
has a fixed point for $x=b$. The analytical solution of such a system can be found to be:
$$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$
Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as:
$$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$
- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".
- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially and the fixed point is, therefore, "**unstable**" ._____no_output_____### Compute the stability of Equation (1)
Similar to what we did in the linear system above, in order to determine the stability of a fixed point $E_{\rm fp}$ of the excitatory population dynamics, we perturb Equation $1$ around $E_{\rm fp}$ by $\epsilon$, i.e. $E = E_{\rm fp} + \epsilon$. We can plug in Equation $1$ and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:
\begin{align}
\tau_E \frac{d\epsilon}{dt} \approx -\epsilon + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E) \epsilon
\end{align}
where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:
\begin{align}
\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau_E }[-1 + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]
\end{align}
That is, as in the linear system above, the value of $\lambda = [-1+ w_{EE}F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]/\tau_E$ determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system._____no_output_____### Exercise 4: Compute $dF$ and Eigenvalue
The derivative of the sigmoid transfer function is:
\begin{align}
\frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\
& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}.
\end{align}
Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it._____no_output_____
<code>
# Exercise 4
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
#####################################################################
## TODO for students: compute dFdx, then remove NotImplementedError #
#####################################################################
# dFdx = ...
raise NotImplementedError("Student excercise: compute the deravitive of F(x)")
return dFdx
pars = default_parsE() # get default parameters
x = np.arange(0,10,.1) # set the range of input
# Uncomment below lines after completing the dF function
# plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
_____no_output_____# to_remove solution
def dF(x,a,theta):
"""
Population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : the population activation response F(x) for input x
"""
dFdx = a*np.exp(-a*(x-theta))*(1+np.exp(-a*(x-theta)))**-2
return dFdx
# get default parameters
pars = default_parsE()
# set the range of input
x = np.arange(0,10,.1)
# plot figure
with plt.xkcd():
plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E']))
_____no_output_____
</code>
### Exercise 5: Compute eigenvalues
As discussed above, for the case with $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, the system displays **3** fixed points. However, when we simulated the dynamics and varied the initial conditions $E_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will check the stability of each of the $3$ fixed points by calculating the corresponding eigenvalues with the function `eig_E` defined in the cell below. Check the sign of each eigenvalue (i.e., the stability of each fixed point). How many of the fixed points are stable?_____no_output_____
<code>
# Exercise 5
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
eig : eigevalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
#######################################################################
## TODO for students: compute eigenvalue, remove NotImplementedError #
#######################################################################
# eig = ...
raise NotImplementedError("Student excercise: compute the eigenvalue")
return eig
# Uncomment below lines after completing the eig_E function.
# x_fp_1 = my_fpE(pars, 0.)
# eig_E1 = eig_E(pars, x_fp_1)
# print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2_____no_output_____# to_remove solution
pars = default_parsE()
pars['wEE'] = 5.0
pars['I_ext_E'] = 0.5
def eig_E(pars, fp):
"""
Args:
pars : Parameter dictionary
fp : fixed point E
Returns:
eig : eigevalue of the linearized system
"""
#get the parameters
tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E']
wEE, I_ext_E = pars['wEE'], pars['I_ext_E']
# fixed point
E = fp
eig = (-1. + wEE*dF(wEE*E + I_ext_E, a_E, theta_E)) / tau_E
return eig
# Uncomment below lines after completing the eigE function
x_fp_1 = my_fpE(pars, 0.)
eig_E1 = eig_E(pars, x_fp_1)
print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1))
# Continue by finding the eigenvalues for all fixed points of Exercise 2
x_fp_2 = my_fpE(pars, 0.4)
eig_E2 = eig_E(pars, x_fp_2)
print('Fixed point2=%.3f, Eigenvalue=%.3f' % (x_fp_2, eig_E2))
x_fp_3 = my_fpE(pars, 0.9)
eig_E3 = eig_E(pars, x_fp_3)
print('Fixed point3=%.3f, Eigenvalue=%.3f' % (x_fp_3, eig_E3))
Fixed point1=0.042, Eigenvalue=-0.583
Fixed point2=0.447, Eigenvalue=0.498
Fixed point3=0.900, Eigenvalue=-0.626
</code>
### Think!
Throughout the tutorial, we have assumed $w_{\rm EE}> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w_{\rm EE}> 0$ is replaced by $w_{\rm II}< 0$? _____no_output_____## Bonus 2: Noisy input drives transition between two stable states
_____no_output_____### Ornstein-Uhlenbeck (OU) process
As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows:
$$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$
where $\xi(t)$ is Gaussian white noise. In the code below this is discretized with the Euler-Maruyama scheme, $\eta[k+1] = \eta[k] - \frac{\Delta t}{\tau_\eta}\eta[k] + \sigma_\eta\sqrt{\frac{2\Delta t}{\tau_\eta}}\,\xi[k]$.
Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process._____no_output_____
<code>
#@title OU process `my_OU(pars, sig, myseed=False)`
def my_OU(pars, sig, myseed=False):
"""
A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I = np.zeros(Lt)
I[0] = noise[0] * sig
#generate OU
for it in range(Lt-1):
I[it+1] = I[it] + dt/tau_ou*(0.-I[it]) + np.sqrt(2.*dt/tau_ou) * sig * noise[it+1]
return I
pars = default_parsE(T=100)
pars['tau_ou'] = 1. #[ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=1998)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'b')
plt.xlabel('Time (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$');_____no_output_____
</code>
### Bonus Example: Up-Down transition
In the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms by applying OU inputs._____no_output_____
<code>
#@title Simulation of an E population with OU inputs
pars = default_parsE(T = 1000)
pars['wEE'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. #[ms]
pars['I_ext_E'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
E = simulate_E(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], E, 'r', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel('E activity')
plt.show()_____no_output_____
</code>
| {
"repository": "sanchobarriga/course-content",
"path": "tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 472032,
"hexsha": "d0f76209c655dbcedaae554b16787e4594ee0f14",
"max_line_length": 45008,
"avg_line_length": 155.6320474777,
"alphanum_fraction": 0.8813576198
} |
# Notebook from shrikumarp/ADSSpring18
Path: ADS_Project1.ipynb
<code>
import numpy as np
import pandas as pd_____no_output_____yelp = pd.read_csv('https://raw.githubusercontent.com/shrikumarp/shrikumarpp1/master/yelp.csv')_____no_output_____yelp.head()_____no_output_____yelp.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 10 columns):
business_id 10000 non-null object
date 10000 non-null object
review_id 10000 non-null object
stars 10000 non-null int64
text 10000 non-null object
type 10000 non-null object
user_id 10000 non-null object
cool 10000 non-null int64
useful 10000 non-null int64
funny 10000 non-null int64
dtypes: int64(4), object(6)
memory usage: 781.3+ KB
yelp.describe()_____no_output_____yelp['text length'] = yelp['text'].apply(len)_____no_output_____import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline_____no_output_____#visualising average text length for star reviews
g = sns.FacetGrid(yelp,col='stars')
g.map(plt.hist,'text length')_____no_output_____sns.boxplot(x='stars',y='text length',data=yelp,palette='rainbow')
_____no_output_____sns.countplot(x='stars',data=yelp,palette='rainbow')_____no_output_____stars = yelp.groupby('stars').mean()
stars_____no_output_____stars.corr()_____no_output_____sns.heatmap(stars.corr(),cmap='rainbow',annot=True)_____no_output_____
</code>
._____no_output_____Machine Learning models for text classification:_____no_output_____
<code>
yelp_class = yelp[(yelp.stars==1) | (yelp.stars==5)]_____no_output_____X = yelp_class['text']
y = yelp_class['stars']_____no_output_____from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
# Count vectorization converts the text into integer count vectors corresponding to the occurrence of each word in the sentence._____no_output_____X = cv.fit_transform(X)_____no_output_____from sklearn.model_selection import train_test_split_____no_output_____X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101)_____no_output_____# First we use the Naive Bayes algorithm.
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()_____no_output_____nb.fit(X_train,y_train)_____no_output_____predictions = nb.predict(X_test)_____no_output_____from sklearn.metrics import confusion_matrix,classification_report_____no_output_____print(confusion_matrix(y_test,predictions))
print('\n')
print(classification_report(y_test,predictions))[[159 69]
[ 22 976]]
precision recall f1-score support
1 0.88 0.70 0.78 228
5 0.93 0.98 0.96 998
avg / total 0.92 0.93 0.92 1226
</code>
._____no_output_____Support Vector Classifier_____no_output_____
<code>
from sklearn.model_selection import GridSearchCV
parameters = [{'C': [1, 10, 100], 'kernel': ['linear']},
{'C': [1, 10, 100], 'kernel': ['rbf']}]_____no_output_____from sklearn.svm import SVC_____no_output_____svmmodel = SVC()_____no_output_____svmmodel.fit(X_train,y_train)_____no_output_____predsvm= svmmodel.predict(X_test)_____no_output_____print(confusion_matrix(y_test,predsvm))
print('\n')
print(classification_report(y_test,predsvm))[[ 0 228]
[ 0 998]]
precision recall f1-score support
1 0.00 0.00 0.00 228
5 0.81 1.00 0.90 998
avg / total 0.66 0.81 0.73 1226
</code>
Now we use the parameter grid defined above to tune the parameter 'C' and see whether we can squeeze more accuracy out of this classifier._____no_output_____
<code>
grid_search_svm = GridSearchCV(estimator = svmmodel,
param_grid = parameters,
scoring = 'accuracy',
cv = 10,
n_jobs = -1)
#GridSearchCV(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score='raise', return_train_score='warn')
# Scoring and comparison between models is based on accuracy. The most accurate model will be chosen after training and cross-validation._____no_output_____grid_search_svm = grid_search_svm.fit(X_train, y_train)_____no_output_____grid_search_svm.best_score______no_output_____grid_search_svm.best_params______no_output_____
</code>
As we can see, the best parameters selected by the grid search turned out to be C=1 and kernel='linear', and this model performed with an accuracy of 91.57%._____no_output_____._____no_output_____Using Random Forest Classifier Algorithm_____no_output_____
<code>
from sklearn.ensemble import RandomForestClassifier
rforest = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)_____no_output_____rforest.fit(X_train, y_train)_____no_output_____rfpred= rforest.predict(X_test)_____no_output_____print(confusion_matrix(y_test,rfpred))
print('\n')
print(classification_report(y_test,rfpred))[[ 81 147]
[ 19 979]]
precision recall f1-score support
1 0.81 0.36 0.49 228
5 0.87 0.98 0.92 998
avg / total 0.86 0.86 0.84 1226
</code>
._____no_output_____Using Logistic Regression_____no_output_____
<code>
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV_____no_output_____
</code>
We set the parameter grid again, this time with a log-spaced range of values for the parameter C, and we try to get the maximum accuracy by tuning the hyperparameters._____no_output_____
<code>
c_space = np.logspace(-5, 8, 15)
param_grid = {'C': c_space}_____no_output_____logreg = LogisticRegression()_____no_output_____logreg_cv = GridSearchCV(logreg, param_grid, cv= 10)
# Here we pass the logistic regression model, the array of parameter values, and the number of cross-validation folds (10) to the GridSearchCV function._____no_output_____logreg_cv.fit(X_train,y_train)_____no_output_____print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_))
print("Best score is {}".format(logreg_cv.best_score_))Tuned Logistic Regression Parameters: {'C': 3.7275937203149381}
Best score is 0.9276223776223776
# we predict on the test set
pred = logreg_cv.predict(X_test)_____no_output_____print(classification_report(y_test,pred)) precision recall f1-score support
1 0.86 0.79 0.83 228
5 0.95 0.97 0.96 998
avg / total 0.94 0.94 0.94 1226
</code>
As we can see, logistic regression with hyperparameter tuning performs best, with an accuracy of 94%._____no_output_____
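As a final, illustrative sketch (not part of the original analysis), the tuned model can score new, made-up reviews; this assumes the fitted `cv` and `logreg_cv` objects from the cells above are still in scope:_____no_output_____
<code>
# Hypothetical reviews, not drawn from the Yelp dataset
new_reviews = ["Absolutely loved the food and the service!",
               "Terrible experience, would not come back."]
# Reuse the vocabulary learned by the fitted CountVectorizer
new_X = cv.transform(new_reviews)
# GridSearchCV refits the best estimator by default, so we can predict directly
print(logreg_cv.predict(new_X))_____no_output_____
</code>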
| {
"repository": "shrikumarp/ADSSpring18",
"path": "ADS_Project1.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 72188,
"hexsha": "d0f76c26b8de0f4515afc0c91208b62d2120f052",
"max_line_length": 13476,
"avg_line_length": 56.0031031808,
"alphanum_fraction": 0.7241092702
} |
# Notebook from stjordanis/CS7641
Path: Assignmet_4/CS7641_Assignment4_MDP_1.ipynb
<code>
%%html
<style>
body {
font-family: "Cambria", cursive, sans-serif;
}
</style> _____no_output_____import math, random, time  # math is used by the distance() helper below
import numpy as np
from collections import defaultdict
import operator
import matplotlib.pyplot as plt_____no_output_____
</code>
## Misc functions and utilities_____no_output_____
<code>
orientations = EAST, NORTH, WEST, SOUTH = [(1, 0), (0, 1), (-1, 0), (0, -1)]
turns = LEFT, RIGHT = (+1, -1)_____no_output_____def vector_add(a, b):
"""Component-wise addition of two vectors."""
return tuple(map(operator.add, a, b))_____no_output_____def turn_heading(heading, inc, headings=orientations):
return headings[(headings.index(heading) + inc) % len(headings)]
def turn_right(heading):
return turn_heading(heading, RIGHT)
def turn_left(heading):
return turn_heading(heading, LEFT)
def distance(a, b):
"""The distance between two (x, y) points."""
xA, yA = a
xB, yB = b
return math.hypot((xA - xB), (yA - yB))_____no_output_____def isnumber(x):
"""Is x a number?"""
return hasattr(x, '__int__')_____no_output_____
</code>
## Class definitions_____no_output_____### Base `MDP` class_____no_output_____
<code>
class MDP:
"""A Markov Decision Process, defined by an initial state, transition model,
and reward function. We also keep track of a gamma value, for use by
algorithms. The transition model is represented somewhat differently from
the text. Instead of P(s' | s, a) being a probability number for each
state/state/action triplet, we instead have T(s, a) return a
list of (p, s') pairs. We also keep track of the possible states,
terminal states, and actions for each state."""
def __init__(self, init, actlist, terminals, transitions = {}, reward = None, states=None, gamma=.9):
if not (0 < gamma <= 1):
raise ValueError("An MDP must have 0 < gamma <= 1")
if states:
self.states = states
else:
## collect states from transitions table
self.states = self.get_states_from_transitions(transitions)
self.init = init
if isinstance(actlist, list):
## if actlist is a list, all states have the same actions
self.actlist = actlist
elif isinstance(actlist, dict):
## if actlist is a dict, different actions for each state
self.actlist = actlist
self.terminals = terminals
self.transitions = transitions
#if self.transitions == {}:
#print("Warning: Transition table is empty.")
self.gamma = gamma
if reward:
self.reward = reward
else:
self.reward = {s : 0 for s in self.states}
#self.check_consistency()
def R(self, state):
"""Return a numeric reward for this state."""
return self.reward[state]
def T(self, state, action):
"""Transition model. From a state and an action, return a list
of (probability, result-state) pairs."""
if(self.transitions == {}):
raise ValueError("Transition model is missing")
else:
return self.transitions[state][action]
def actions(self, state):
"""Set of actions that can be performed in this state. By default, a
fixed list of actions, except for terminal states. Override this
method if you need to specialize by state."""
if state in self.terminals:
return [None]
else:
return self.actlist
def get_states_from_transitions(self, transitions):
if isinstance(transitions, dict):
s1 = set(transitions.keys())
s2 = set([tr[1] for actions in transitions.values()
for effects in actions.values() for tr in effects])
return s1.union(s2)
else:
print('Could not retrieve states from transitions')
return None
def check_consistency(self):
# check that all states in transitions are valid
assert set(self.states) == self.get_states_from_transitions(self.transitions)
# check that init is a valid state
assert self.init in self.states
# check reward for each state
#assert set(self.reward.keys()) == set(self.states)
assert set(self.reward.keys()) == set(self.states)
# check that all terminals are valid states
assert all([t in self.states for t in self.terminals])
# check that probability distributions for all actions sum to 1
for s1, actions in self.transitions.items():
for a in actions.keys():
s = 0
for o in actions[a]:
s += o[0]
assert abs(s - 1) < 0.001_____no_output_____
</code>
### A custom MDP class to extend functionality
We will write a CustomMDP class to extend the MDP class for the problem at hand. <br>This class will implement the `T` method to implement the transition model._____no_output_____
<code>
class CustomMDP(MDP):
def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
# All possible actions.
actlist = []
for state in transition_matrix.keys():
actlist.extend(transition_matrix[state])
actlist = list(set(actlist))
#print(actlist)
MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
self.t = transition_matrix
self.reward = rewards
for state in self.t:
self.states.add(state)
def T(self, state, action):
if action is None:
return [(0.0, state)]
else:
return [(prob, new_state) for new_state, prob in self.t[state][action].items()]_____no_output_____
</code>
## Problem 1: Simple MDP
---
### State dependent reward function
Markov Decision Processes are formally described as processes that follow the Markov property which states that "The future is independent of the past given the present". MDPs formally describe environments for reinforcement learning and we assume that the environment is fully observable.
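Formally, for a sequence of states $S_0, S_1, \dots$, the Markov property reads

$$P(S_{t+1} \mid S_t, S_{t-1}, \dots, S_0) = P(S_{t+1} \mid S_t).$$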
Let us take a toy example MDP and solve it using value iteration and policy iteration. This is a simple example adapted from a similar problem by Dr. David Silver, tweaked to fit the limitations of the current functions.
Let's say you're a student attending lectures in a university. There are three lectures you need to attend on a given day. Attending the first lecture gives you 4 points of reward. After the first lecture, you have a 0.6 probability to continue into the second one, yielding 6 more points of reward. But, with a probability of 0.4, you get distracted and start using Facebook instead and get a reward of -1. From then onwards, you really can't let go of Facebook and there's just a 0.1 probability that you will concentrate back on the lecture.
After the second lecture, you have an equal chance of attending the next lecture or just falling asleep. Falling asleep is the terminal state and yields you no reward, but continuing on to the final lecture gives you a big reward of 10 points. From there on, you have a 40% chance of going to study and reach the terminal state, but a 60% chance of going to the pub with your friends instead. You end up drunk and don't know which lecture to attend, so you go to one of the lectures according to the probabilities given above.
_____no_output_____### Definition of transition matrix
We first have to define our Transition Matrix as a nested dictionary to fit the requirements of the MDP class._____no_output_____
<code>
t = {
'leisure': {
'facebook': {'leisure':0.9, 'class1':0.1},
'quit': {'leisure':0.1, 'class1':0.9},
'study': {},
'sleep': {},
'pub': {}
},
'class1': {
'study': {'class2':0.6, 'leisure':0.4},
'facebook': {'class2':0.4, 'leisure':0.6},
'quit': {},
'sleep': {},
'pub': {}
},
'class2': {
'study': {'class3':0.5, 'end':0.5},
'sleep': {'end':0.5, 'class3':0.5},
'facebook': {},
'quit': {},
'pub': {},
},
'class3': {
'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16},
'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24},
'facebook': {},
'quit': {},
'sleep': {}
},
'end': {}
}_____no_output_____
</code>
### Defining rewards
We now need to define the reward for each state._____no_output_____
<code>
rewards = {
'class1': 4,
'class2': 6,
'class3': 10,
'leisure': -1,
'end': 0
}_____no_output_____
</code>
### Terminal state
This MDP has only one terminal state._____no_output_____
<code>
terminals = ['end']_____no_output_____
</code>
### Setting initial state to `Class 1`_____no_output_____
<code>
init = 'class1'_____no_output_____
</code>
### Read in an instance of the custom class_____no_output_____
<code>
school_mdp = CustomMDP(t, rewards, terminals, init, gamma=.95)_____no_output_____
</code>
### Let's see the actions and rewards of the MDP_____no_output_____
<code>
school_mdp.states_____no_output_____school_mdp.actions('class1')_____no_output_____school_mdp.actions('leisure')_____no_output_____school_mdp.T('class1','sleep')_____no_output_____school_mdp.actions('end')_____no_output_____school_mdp.reward_____no_output_____
</code>
## Value iteration_____no_output_____
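The function below repeatedly applies the Bellman update

$$U_{k+1}(s) = R(s) + \gamma \max_{a \in A(s)} \sum_{s'} P(s' \mid s, a)\, U_k(s'),$$

stopping once the largest per-state change $\delta$ satisfies $\delta < \epsilon (1-\gamma)/\gamma$._____no_output_____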
<code>
def value_iteration(mdp, epsilon=0.001):
"""Solving an MDP by value iteration.
mdp: The MDP object
epsilon: Stopping criteria
"""
U1 = {s: 0 for s in mdp.states}
R, T, gamma = mdp.R, mdp.T, mdp.gamma
while True:
U = U1.copy()
delta = 0
for s in mdp.states:
U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
for a in mdp.actions(s)])
delta = max(delta, abs(U1[s] - U[s]))
if delta < epsilon * (1 - gamma) / gamma:
return U_____no_output_____def value_iteration_over_time(mdp, iterations=20):
U_over_time = []
U1 = {s: 0 for s in mdp.states}
R, T, gamma = mdp.R, mdp.T, mdp.gamma
for _ in range(iterations):
U = U1.copy()
for s in mdp.states:
U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
for a in mdp.actions(s)])
U_over_time.append(U)
return U_over_time_____no_output_____def best_policy(mdp, U):
"""Given an MDP and a utility function U, determine the best policy,
as a mapping from state to action."""
pi = {}
for s in mdp.states:
pi[s] = max(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))
return pi_____no_output_____
</code>
## Value iteration on the school MDP_____no_output_____
<code>
value_iteration(school_mdp)_____no_output_____value_iteration_over_time(school_mdp,iterations=10)_____no_output_____
</code>
### Plotting value updates over time/iterations_____no_output_____
<code>
def plot_value_update(mdp,iterations=10,plot_kw=None):
"""
Plot value updates over iterations for a given MDP.
"""
x = value_iteration_over_time(mdp,iterations=iterations)
value_states = {k:[] for k in mdp.states}
for i in x:
for k,v in i.items():
value_states[k].append(v)
plt.figure(figsize=(8,5))
plt.title("Evolution of state utilities over iteration", fontsize=18)
for v in value_states:
plt.plot(value_states[v])
plt.legend(list(value_states.keys()),fontsize=14)
plt.grid(True)
plt.xlabel("Iterations",fontsize=16)
plt.ylabel("Utilities of states",fontsize=16)
plt.show()_____no_output_____plot_value_update(school_mdp,15)_____no_output_____
</code>
### Value iterations for various discount factors ($\gamma$)_____no_output_____
<code>
for i in range(4):
mdp = CustomMDP(t, rewards, terminals, init, gamma=1-0.2*i)
plot_value_update(mdp,10)_____no_output_____
</code>
### Value iteration for two different reward structures_____no_output_____
<code>
rewards1 = {
'class1': 4,
'class2': 6,
'class3': 10,
'leisure': -1,
'end': 0
}
mdp1 = CustomMDP(t, rewards1, terminals, init, gamma=.95)
plot_value_update(mdp1,20)
rewards2 = {
'class1': 1,
'class2': 1.5,
'class3': 2.5,
'leisure': -4,
'end': 0
}
mdp2 = CustomMDP(t, rewards2, terminals, init, gamma=.95)
plot_value_update(mdp2,20)_____no_output_____value_iteration(mdp2)_____no_output_____
</code>
## Policy iteration_____no_output_____
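The functions below alternate approximate policy evaluation, i.e. repeated sweeps of

$$U(s) \leftarrow R(s) + \gamma \sum_{s'} P(s' \mid s, \pi(s))\, U(s'),$$

with greedy policy improvement, $\pi(s) \leftarrow \arg\max_a \sum_{s'} P(s' \mid s, a)\, U(s')$, until the policy no longer changes._____no_output_____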
<code>
def expected_utility(a, s, U, mdp):
"""The expected utility of doing a in state s, according to the MDP and U."""
return sum([p * U[s1] for (p, s1) in mdp.T(s, a)])_____no_output_____def policy_evaluation(pi, U, mdp, k=20):
"""Returns an updated utility mapping U from each state in the MDP to its
utility, using an approximation (modified policy iteration)."""
R, T, gamma = mdp.R, mdp.T, mdp.gamma
for i in range(k):
for s in mdp.states:
U[s] = R(s) + gamma * sum([p * U[s1] for (p, s1) in T(s, pi[s])])
return U_____no_output_____def policy_iteration(mdp,verbose=0):
"""Solves an MDP by policy iteration"""
U = {s: 0 for s in mdp.states}
pi = {s: random.choice(mdp.actions(s)) for s in mdp.states}
if verbose:
print("Initial random choice:",pi)
iter_count=0
while True:
iter_count+=1
U = policy_evaluation(pi, U, mdp)
unchanged = True
for s in mdp.states:
a = max(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))
if a != pi[s]:
pi[s] = a
unchanged = False
if unchanged:
return (pi,iter_count)
if verbose:
print("Policy after iteration {}: {}".format(iter_count,pi))_____no_output_____
</code>
## Policy iteration over the school MDP_____no_output_____
<code>
policy_iteration(school_mdp)_____no_output_____policy_iteration(school_mdp,verbose=1)Initial random choice: {'leisure': 'study', 'class2': 'sleep', 'class3': 'facebook', 'class1': 'pub', 'end': None}
Policy after iteration 1: {'leisure': 'quit', 'class2': 'study', 'class3': 'pub', 'class1': 'study', 'end': None}
Policy after iteration 2: {'leisure': 'quit', 'class2': 'study', 'class3': 'pub', 'class1': 'facebook', 'end': None}
</code>
### Does the result match using value iteration? We use the `best_policy` function to find out_____no_output_____
<code>
best_policy(school_mdp,value_iteration(school_mdp,0.01))_____no_output_____
</code>
## Comparing computation efficiency (time) of value and policy iterations
Clearly the value iteration method takes more iterations to reach the same steady state than the policy iteration technique. But how does their computation time compare? Let's find out._____no_output_____### Running value and policy iteration on the school MDP many times and averaging_____no_output_____
<code>
def compute_time(mdp,iteration_technique='value',n_run=1000,epsilon=0.01):
"""
Computes the average time for value or policy iteration for a given MDP
n_run: Number of runs to average over, default 1000
epsilon: Error margin for the value iteration
"""
if iteration_technique=='value':
t1 = time.time()
for _ in range(n_run):
value_iteration(mdp,epsilon=epsilon)
t2 = time.time()
print("Average value iteration took {} milliseconds".format((t2-t1)*1000/n_run))
else:
t1 = time.time()
for _ in range(n_run):
policy_iteration(mdp)
t2 = time.time()
print("Average policy iteration took {} milliseconds".format((t2-t1)*1000/n_run))_____no_output_____compute_time(school_mdp,'value')Average value iteration took 1.551398515701294 milliseconds
compute_time(school_mdp,'policy')Average policy iteration took 0.7556800842285156 milliseconds
</code>
## Q-learning_____no_output_____### Q-learning class_____no_output_____
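The agent below implements the tabular Q-learning update

$$Q(s, a) \leftarrow Q(s, a) + \alpha(N_{sa})\left[r + \gamma \max_{a'} Q(s', a') - Q(s, a)\right],$$

together with an optimistic exploration function that returns $R^{+}$ until a state-action pair has been visited $N_e$ times._____no_output_____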
<code>
class QLearningAgent:
""" An exploratory Q-learning agent. It avoids having to learn the transition
model because the Q-value of a state can be related directly to those of
its neighbors.
"""
def __init__(self, mdp, Ne, Rplus, alpha=None):
self.gamma = mdp.gamma
self.terminals = mdp.terminals
self.all_act = mdp.actlist
self.Ne = Ne # iteration limit in exploration function
self.Rplus = Rplus # large value to assign before iteration limit
self.Q = defaultdict(float)
self.Nsa = defaultdict(float)
self.s = None
self.a = None
self.r = None
self.states = mdp.states
self.T = mdp.T
if alpha:
self.alpha = alpha
else:
self.alpha = lambda n: 1./(1+n)
def f(self, u, n):
""" Exploration function. Returns fixed Rplus until
agent has visited state, action a Ne number of times."""
if n < self.Ne:
return self.Rplus
else:
return u
def actions_in_state(self, state):
""" Return actions possible in given state.
Useful for max and argmax. """
if state in self.terminals:
return [None]
else:
act_list=[]
for a in self.all_act:
if len(self.T(state,a))>0:
act_list.append(a)
return act_list
def __call__(self, percept):
s1, r1 = self.update_state(percept)
Q, Nsa, s, a, r = self.Q, self.Nsa, self.s, self.a, self.r
alpha, gamma, terminals = self.alpha, self.gamma, self.terminals,
actions_in_state = self.actions_in_state
if s in terminals:
Q[s, None] = r1
if s is not None:
Nsa[s, a] += 1
Q[s, a] += alpha(Nsa[s, a]) * (r + gamma * max(Q[s1, a1]
for a1 in actions_in_state(s1)) - Q[s, a])
if s in terminals:
self.s = self.a = self.r = None
else:
self.s, self.r = s1, r1
self.a = max(actions_in_state(s1), key=lambda a1: self.f(Q[s1, a1], Nsa[s1, a1]))
return self.a
def update_state(self, percept):
"""To be overridden in most cases. The default case
assumes the percept to be of type (state, reward)."""
return percept_____no_output_____
</code>
### Trial run_____no_output_____
<code>
def run_single_trial(agent_program, mdp):
"""Execute trial for given agent_program
and mdp."""
def take_single_action(mdp, s, a):
"""
Select outcome of taking action a
in state s. Weighted Sampling.
"""
x = random.uniform(0, 1)
cumulative_probability = 0.0
for probability_state in mdp.T(s, a):
probability, state = probability_state
cumulative_probability += probability
if x < cumulative_probability:
break
return state
current_state = mdp.init
while True:
current_reward = mdp.R(current_state)
percept = (current_state, current_reward)
next_action = agent_program(percept)
if next_action is None:
break
current_state = take_single_action(mdp, current_state, next_action)_____no_output_____
</code>
### Testing Q-learning_____no_output_____
<code>
# Define an agent
q_agent = QLearningAgent(school_mdp, Ne=1000, Rplus=2,alpha=lambda n: 60./(59+n))_____no_output_____q_agent.actions_in_state('leisure')_____no_output_____run_single_trial(q_agent,school_mdp)_____no_output_____q_agent.Q_____no_output_____for i in range(200):
run_single_trial(q_agent,school_mdp)_____no_output_____q_agent.Q_____no_output_____def get_U_from_Q(q_agent):
U = defaultdict(lambda: -100.) # Large negative value for comparison
for state_action, value in q_agent.Q.items():
state, action = state_action
if U[state] < value:
U[state] = value
return U_____no_output_____get_U_from_Q(q_agent)_____no_output_____q_agent = QLearningAgent(school_mdp, Ne=100, Rplus=25,alpha=lambda n: 10/(9+n))
qhistory=[]
for i in range(100000):
run_single_trial(q_agent,school_mdp)
U=get_U_from_Q(q_agent)
qhistory.append(U)
print(get_U_from_Q(q_agent))defaultdict(<function get_U_from_Q.<locals>.<lambda> at 0x00000255290A4F28>, {'class1': 23.240828242090135, 'class2': 19.233409838752596, 'end': 4.003615106646801, 'class3': 24.108995803027188, 'leisure': 20.878772382041472})
print(value_iteration(school_mdp,epsilon=0.001)){'leisure': 18.079639654154484, 'class2': 15.792664558035112, 'class3': 20.61614864677164, 'class1': 20.306571436730533, 'end': 0.0}
</code>
### Function for utility estimate by Q-learning by many iterations_____no_output_____
<code>
def qlearning_iter(agent_program,mdp,iterations=1000,print_final_utility=True):
"""
Function for utility estimate by Q-learning by many iterations
Returns a history object i.e. a list of dictionaries, where utility estimate for each iteration is stored
q_agent = QLearningAgent(grid_1, Ne=25, Rplus=1.5,
alpha=lambda n: 10000./(9999+n))
hist=qlearning_iter(q_agent,grid_1,iterations=10000)
"""
qhistory=[]
for i in range(iterations):
run_single_trial(agent_program,mdp)
U=get_U_from_Q(agent_program)
if len(U)==len(mdp.states):
qhistory.append(U)
if print_final_utility:
print(U)
return qhistory_____no_output_____
</code>
### How do the long-term utility estimates with Q-learning compare with value iteration?_____no_output_____
<code>
def plot_qlearning_vi(hist, vi,plot_n_states=None):
"""
Compares and plots a Q-learning and value iteration results for the utility estimate of an MDP's states
hist: A history object from a Q-learning run
vi: A value iteration estimate for the same MDP
plot_n_states: Restrict the plotting for n states (randomly chosen)
"""
utilities={k:[] for k in list(vi.keys())}
for h in hist:
for state in h.keys():
utilities[state].append(h[state])
if plot_n_states==None:
for state in list(vi.keys()):
plt.figure(figsize=(7,4))
plt.title("Plot of State: {} over Q-learning iterations".format(str(state)),fontsize=16)
plt.plot(utilities[state])
plt.hlines(y=vi[state],xmin=0,xmax=1.1*len(hist))
plt.legend(['Q-learning estimates','Value iteration estimate'],fontsize=14)
plt.xlabel("Iterations",fontsize=14)
plt.ylabel("Utility of the state",fontsize=14)
plt.grid(True)
plt.show()
else:
for state in list(vi.keys())[:plot_n_states]:
plt.figure(figsize=(7,4))
plt.title("Plot of State: {} over Q-learning iterations".format(str(state)),fontsize=16)
plt.plot(utilities[state])
plt.hlines(y=vi[state],xmin=0,xmax=1.1*len(hist))
plt.legend(['Q-learning estimates','Value iteration estimate'],fontsize=14)
plt.xlabel("Iterations",fontsize=14)
plt.ylabel("Utility of the state",fontsize=14)
plt.grid(True)
plt.show()_____no_output_____
</code>
### Testing the long-term utility learning for the school MDP_____no_output_____
<code>
# Define the Q-learning agent
q_agent = QLearningAgent(school_mdp, Ne=100, Rplus=2,alpha=lambda n: 100/(99+n))
# Obtain the history by running the Q-learning for many iterations
hist=qlearning_iter(q_agent,school_mdp,iterations=20000,print_final_utility=False)
# Get a value iteration estimate using the same MDP
vi = value_iteration(school_mdp,epsilon=0.001)
# Compare the utility estimates from two methods
plot_qlearning_vi(hist,vi)_____no_output_____for alpha in range(100,5100,1000):
q_agent = QLearningAgent(school_mdp, Ne=10, Rplus=2,alpha=lambda n: alpha/(alpha-1+n))
# Obtain the history by running the Q-learning for many iterations
hist=qlearning_iter(q_agent,school_mdp,iterations=10000,print_final_utility=False)
# Get a value iteration estimate using the same MDP
vi = value_iteration(school_mdp,epsilon=0.001)
# Compare the utility estimates from two methods
plot_qlearning_vi(hist,vi,plot_n_states=1)_____no_output_____
</code>
| {
"repository": "stjordanis/CS7641",
"path": "Assignmet_4/CS7641_Assignment4_MDP_1.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 692618,
"hexsha": "d0fae100f724121698d1af1a94e3b36e833958ea",
"max_line_length": 47496,
"avg_line_length": 406.2275659824,
"alphanum_fraction": 0.9318830871
} |
# Notebook from kozo2/pyWikiPathways
Path: docs/pywikipathways-and-bridgedbpy.ipynb
# pywikipathways and bridgedbpy
[](https://colab.research.google.com/github/kozo2/pywikipathways/blob/main/docs/pywikipathways-and-bridgedbpy.ipynb)
by Kozo Nishida and Alexander Pico
pywikipathways 0.0.2
bridgedbpy 0.0.2
*WikiPathways* is a well-known repository for biological pathways that provides unique tools to the research community for content creation, editing and utilization [1].
Python is a powerful programming language and environment for statistical and exploratory data analysis.
*pywikipathways* leverages the WikiPathways API to communicate between **Python** and WikiPathways, allowing any pathway to be queried, interrogated and downloaded in both data and image formats. Queries are typically performed based on “Xrefs”, standardized identifiers for genes, proteins and metabolites. Once you have identified a pathway, you can use the WPID (WikiPathways identifier) to make additional queries.
[bridgedbpy](https://pypi.org/project/bridgedbpy/) leverages the BridgeDb API [2] to provide a number of functions related to ID mapping and identifiers in general for gene, proteins and metabolites.
Together, *bridgedbpy* provides convience to the typical *pywikipathways* user by supplying formal names and codes defined by BridgeDb and used by WikiPathways.
## Prerequisites
In addition to this **pywikipathways** package, you’ll also need to install **bridgedbpy**:_____no_output_____
<code>
!pip install pywikipathways bridgedbpy_____no_output_____import pywikipathways as pwpw
import bridgedbpy as brdgdbp_____no_output_____
</code>
## Getting started
Lets first check some of the most basic functions from each package. For example, here’s how you check to see which species are currently supported by WikiPathways:_____no_output_____
<code>
org_names = pwpw.list_organisms()_____no_output_____org_names_____no_output_____
</code>
You should see 30 or more species listed. This list is useful for subsequent queries that take an *organism* argument, to avoid misspelling.
However, some functions want the organism code rather than the full name. Using bridgedbpy’s *get_organism_code* function, we can get those:_____no_output_____
<code>
org_names[14]_____no_output_____brdgdbp.get_organism_code(org_names[14])_____no_output_____
</code>
## Identifier System Names and Codes
Even more obscure are the various datasources providing official identifiers and how they are named and coded. Fortunately, BridgeDb defines these clearly and simply. And WikiPathways relies on these BridgeDb definitions.
For example, this is how we find the system code for Ensembl:_____no_output_____
<code>
brdgdbp.get_system_code("Ensembl")_____no_output_____
</code>
It’s “En”! That’s simple enough. But some are less obvious…_____no_output_____
<code>
brdgdbp.get_system_code("Entrez Gene")_____no_output_____
</code>
It’s “L” because the resource used to be named “Locus Link”. Sigh… Don’t try to guess these codes. Use this function from BridgeDb (above) to get the correct code. By the way, all the systems supported by BridgeDb are here: https://github.com/bridgedb/datasources/blob/main/datasources.tsv
## How to use bridgedbpy with pywikipathways
Here are some specific combo functions that are useful. They let you skip worrying about system codes altogether!
1. Getting all the pathways containing the HGNC symbol “TNF”:
_____no_output_____
<code>
tnf_pathways = pwpw.find_pathway_ids_by_xref('TNF', brdgdbp.get_system_code('HGNC'))
tnf_pathways_____no_output_____
</code>
2. Getting all the genes from a pathway as Ensembl identifiers:_____no_output_____
<code>
pwpw.get_xref_list('WP554', brdgdbp.get_system_code('Ensembl'))_____no_output_____
</code>
3. Getting all the metabolites from a pathway as ChEBI identifiers:_____no_output_____
<code>
pwpw.get_xref_list('WP554', brdgdbp.get_system_code('ChEBI'))_____no_output_____
</code>
## Other tips
And if you ever find yourself with a system code, e.g., from a pywikipathways return result and you’re not sure what it is, then you can use this function:_____no_output_____
<code>
brdgdbp.get_full_name('Ce')_____no_output_____
</code>
## References
1. Pico AR, Kelder T, Iersel MP van, Hanspers K, Conklin BR, Evelo C: **WikiPathways: Pathway editing for the people.** *PLoS Biol* 2008, **6:**e184+.
2. Iersel M van, Pico A, Kelder T, Gao J, Ho I, Hanspers K, Conklin B, Evelo C: **The BridgeDb framework: Standardized access to gene, protein and metabolite identifier mapping services.** *BMC Bioinformatics* 2010, **11:**5+._____no_output_____
| {
"repository": "kozo2/pyWikiPathways",
"path": "docs/pywikipathways-and-bridgedbpy.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 12162,
"hexsha": "d0fb300181780f15fbd69f4e9f4d04dd8f20e6d0",
"max_line_length": 426,
"avg_line_length": 27.5782312925,
"alphanum_fraction": 0.5633119553
} |
# Notebook from TotoArt/arnheim
Path: arnheim_1.ipynb
Copyright 2021 DeepMind Technologies Limited
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License._____no_output_____#Generative Art Using Neural Visual Grammars and Dual Encoders
**Chrisantha Fernando, Piotr Mirowski, Dylan Banarse, S. M. Ali Eslami, Jean-Baptiste Alayrac, Simon Osindero**
DeepMind, 2021
##Arnheim 1
###Generate paintings from text prompts.
Whilst there are perhaps only a few scientific methods, there seem to be almost as many artistic methods as there are artists. Artistic processes appear to inhabit the highest order of open-endedness. To begin to understand some of the processes of art making it is helpful to try to automate them even partially.
In this paper, a novel algorithm for producing generative art is described which allows a user to input a text string, and which in a creative response to this string, outputs an image which interprets that string. It does so by evolving images using a hierarchical neural [Lindenmeyer system](https://en.wikipedia.org/wiki/L-system), and evaluating these images along the way using an image text dual encoder trained on billions of images and their associated text from the internet.
In doing so we have access to and control over an instance of an artistic process, allowing analysis of which aspects of the artistic process become the task of the algorithm, and which elements remain the responsibility of the artist.
This colab accompanies the paper [Generative Art Using Neural Visual Grammars and Dual Encoders](https://arxiv.org/abs/2105.00162)
##Instructions
1. Click "Connect" button in the top right corner of this Colab
1. Select Runtime -> Change runtime type -> Hardware accelerator -> GPU
1. Select High-RAM for "Runtime shape" option
1. Navigate to "Get text input"
1. Enter text for IMAGE_NAME
1. Select "Run All" from Runtime menu
_____no_output_____# Imports_____no_output_____
<code>
#@title Set CUDA version for PyTorch
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]
).decode("UTF-8").split(", ")
if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
! nvidia-smi_____no_output_____#@title Install and import PyTorch and Clip
! pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html
! pip install git+https://github.com/openai/CLIP.git --no-deps
! pip install ftfy regex
import torch
import torch.nn as nn
import clip
print("Torch version:", torch.__version__)_____no_output_____#@title Install and import ray multiprocessing
! pip install -q -U ray[default]
import ray_____no_output_____#@title Import all other needed libraries
import collections
import copy
import cloudpickle
import time
import numpy as np
import matplotlib.pyplot as plt
import math
from PIL import Image
from PIL import ImageDraw
from skimage import transform_____no_output_____#@title Load CLIP {vertical-output: true}
CLIP_MODEL = "ViT-B/32"
device = torch.device("cuda")
print(f"Downloading CLIP model {CLIP_MODEL}...")
model, _ = clip.load(CLIP_MODEL, device, jit=False)_____no_output_____
</code>
# Neural Visual Grammar_____no_output_____### Drawing primitives_____no_output_____
<code>
def to_homogeneous(p):
r, c = p
return np.stack((r, c, np.ones_like(p[0])), axis=0)
def from_homogeneous(p):
p = p / p.T[:, 2]
return p[0].astype("int32"), p[1].astype("int32")
def apply_scale(scale, lineh):
return np.stack([lineh[0, :] * scale,
lineh[1, :] * scale,
lineh[2, :]])
def apply_translation(translation, lineh, offset_r=0, offset_c=0):
r, c = translation
return np.stack([lineh[0, :] + c + offset_c,
lineh[1, :] + r + offset_r,
lineh[2, :]])
def apply_rotation(translation, rad, lineh):
r, c = translation
cos_rad = np.cos(rad)
sin_rad = np.sin(rad)
return np.stack(
[(lineh[0, :] - c) * cos_rad - (lineh[1, :] - r) * sin_rad + c,
(lineh[0, :] - c) * sin_rad + (lineh[1, :] - r) * cos_rad + r,
lineh[2, :]])_____no_output_____def transform_lines(line_from, line_to, translation, angle, scale,
translation2, angle2, scale2, img_siz2):
"""Transform lines by translation, angle and scale, twice.
Args:
line_from: Line start point.
line_to: Line end point.
translation: 1st translation to line.
angle: 1st angle of rotation for line.
scale: 1st scale for line.
translation2: 2nd translation to line.
angle2: 2nd angle of rotation for line.
scale2: 2nd scale for line.
img_siz2: Offset for 2nd translation.
Returns:
Transformed lines.
"""
if len(line_from.shape) == 1:
line_from = np.expand_dims(line_from, 0)
if len(line_to.shape) == 1:
line_to = np.expand_dims(line_to, 0)
# First transform.
line_from_h = to_homogeneous(line_from.T)
line_to_h = to_homogeneous(line_to.T)
line_from_h = apply_scale(scale, line_from_h)
line_to_h = apply_scale(scale, line_to_h)
translated_line_from = apply_translation(translation, line_from_h)
translated_line_to = apply_translation(translation, line_to_h)
translated_mid_point = (translated_line_from + translated_line_to) / 2.0
translated_mid_point = translated_mid_point[[1, 0]]
line_from_transformed = apply_rotation(translated_mid_point,
np.pi * angle,
translated_line_from)
line_to_transformed = apply_rotation(translated_mid_point,
np.pi * angle,
translated_line_to)
line_from_transformed = np.array(from_homogeneous(line_from_transformed))
line_to_transformed = np.array(from_homogeneous(line_to_transformed))
# Second transform.
line_from_h = to_homogeneous(line_from_transformed)
line_to_h = to_homogeneous(line_to_transformed)
line_from_h = apply_scale(scale2, line_from_h)
line_to_h = apply_scale(scale2, line_to_h)
translated_line_from = apply_translation(
translation2, line_from_h, offset_r=img_siz2, offset_c=img_siz2)
translated_line_to = apply_translation(
translation2, line_to_h, offset_r=img_siz2, offset_c=img_siz2)
translated_mid_point = (translated_line_from + translated_line_to) / 2.0
translated_mid_point = translated_mid_point[[1, 0]]
line_from_transformed = apply_rotation(translated_mid_point,
np.pi * angle2,
translated_line_from)
line_to_transformed = apply_rotation(translated_mid_point,
np.pi * angle2,
translated_line_to)
return np.concatenate([from_homogeneous(line_from_transformed),
from_homogeneous(line_to_transformed)],
axis=1)_____no_output_____
</code>
### Hierarchical stroke painting functions_____no_output_____
<code>
# PaintingCommand
# origin_top: Origin of line defined by top level LSTM
# angle_top: Angle of line defined by top level LSTM
# scale_top: Scale for line defined by top level LSTM
# origin_bottom: Origin of line defined by bottom level LSTM
# angle_bottom: Angle of line defined by bottom level LSTM
# scale_bottom: Scale for line defined by bottom level LSTM
# position_choice: Selects between use of:
# Origin, angle and scale from both LSTM levels
# Origin, angle and scale just from top level LSTM
# Origin, angle and scale just from bottom level LSTM
# transparency: Line transparency determined by bottom level LSTM
PaintingCommand = collections.namedtuple("PaintingCommand",
["origin_top",
"angle_top",
"scale_top",
"origin_bottom",
"angle_bottom",
"scale_bottom",
"position_choice",
"transparency"])
def paint_over_image(img, strokes, painting_commands,
allow_strokes_beyond_image_edges, coeff_size=1):
"""Make marks over an existing image.
Args:
img: Image to draw on.
strokes: Stroke descriptions.
painting_commands: Top-level painting commands with transforms for the i
sets of strokes.
allow_strokes_beyond_image_edges: Allow strokes beyond image boundary.
coeff_size: Determines low res (1) or high res (10) image will be drawn.
Returns:
num_strokes: The number of strokes made.
"""
img_center = 112. * coeff_size
# a, b and c: determines the stroke width distribution (see 'weights' below)
a = 10. * coeff_size
b = 2. * coeff_size
c = 300. * coeff_size
# d: extent that the strokes are allowed to go beyond the edge of the canvas
d = 223 * coeff_size
def _clip_colour(col):
return np.clip((np.round(col * 255. + 128.)).astype(np.int32), 0, 255)
# Loop over all the top level...
t0_over = time.time()
num_strokes = sum(len(s) for s in strokes)
translations = np.zeros((2, num_strokes,), np.float32)
translations2 = np.zeros((2, num_strokes,), np.float32)
angles = np.zeros((num_strokes,), np.float32)
angles2 = np.zeros((num_strokes,), np.float32)
scales = np.zeros((num_strokes,), np.float32)
scales2 = np.zeros((num_strokes,), np.float32)
weights = np.zeros((num_strokes,), np.float32)
lines_from = np.zeros((num_strokes, 2), np.float32)
lines_to = np.zeros((num_strokes, 2), np.float32)
rgbas = np.zeros((num_strokes, 4), np.float32)
k = 0
for i in range(len(strokes)):
# Get the top-level transforms for the i-th bunch of strokes
painting_comand = painting_commands[i]
translation_a = painting_comand.origin_top
angle_a = (painting_comand.angle_top + 1) / 5.0
scale_a = 0.5 + (painting_comand.scale_top + 1) / 3.0
translation_b = painting_comand.origin_bottom
angle_b = (painting_comand.angle_bottom + 1) / 5.0
scale_b = 0.5 + (painting_comand.scale_bottom + 1) / 3.0
position_choice = painting_comand.position_choice
solid_colour = painting_comand.transparency
# Do we use origin, angle and scale from both, top or bottom LSTM levels?
if position_choice > 0.33:
translation = translation_a
angle = angle_a
scale = scale_a
translation2 = translation_b
angle2 = angle_b
scale2 = scale_b
elif position_choice > -0.33:
translation = translation_a
angle = angle_a
scale = scale_a
translation2 = [-img_center, -img_center]
angle2 = 0.
scale2 = 1.
else:
translation = translation_b
angle = angle_b
scale = scale_b
translation2 = [-img_center, -img_center]
angle2 = 0.
scale2 = 1.
# Store top-level transforms
strokes_i = strokes[i]
n_i = len(strokes_i)
angles[k:(k+n_i)] = angle
angles2[k:(k+n_i)] = angle2
scales[k:(k+n_i)] = scale
scales2[k:(k+n_i)] = scale2
translations[0, k:(k+n_i)] = translation[0]
translations[1, k:(k+n_i)] = translation[1]
translations2[0, k:(k+n_i)] = translation2[0]
translations2[1, k:(k+n_i)] = translation2[1]
# ... and the bottom level stroke definitions.
for j in range(n_i):
z_ij = strokes_i[j]
# Store line weight (we will process micro-strokes later)
weights[k] = z_ij[4]
# Store line endpoints
lines_from[k, :] = (z_ij[0], z_ij[1])
lines_to[k, :] = (z_ij[2], z_ij[3])
# Store colour and alpha
rgbas[k, 0] = z_ij[7]
rgbas[k, 1] = z_ij[8]
rgbas[k, 2] = z_ij[9]
if solid_colour > -0.5:
rgbas[k, 3] = 25.5
else:
rgbas[k, 3] = z_ij[11]
k += 1
# Draw all the strokes in a batch as sequence of length 2 * num_strokes
t1_over = time.time()
lines_from *= img_center/2.0
lines_to *= img_center/2.0
rr, cc = transform_lines(lines_from, lines_to, translations, angles, scales,
translations2, angles2, scales2, img_center)
if not allow_strokes_beyond_image_edges:
rrm = np.round(np.clip(rr, 1, d-1)).astype(int)
ccm = np.round(np.clip(cc, 1, d-1)).astype(int)
else:
rrm = np.round(rr).astype(int)
ccm = np.round(cc).astype(int)
# Plot all the strokes
t2_over = time.time()
img_pil = Image.fromarray(img)
canvas = ImageDraw.Draw(img_pil, "RGBA")
rgbas[:, :3] = _clip_colour(rgbas[:, :3])
rgbas[:, 3] = (np.clip(5.0 * np.abs(rgbas[:, 3]), 0, 255)).astype(np.int32)
weights = (np.clip(np.round(weights * b + a), 2, c)).astype(np.int32)
for k in range(num_strokes):
canvas.line((rrm[k], ccm[k], rrm[k+num_strokes], ccm[k+num_strokes]),
fill=tuple(rgbas[k]), width=weights[k])
img[:] = np.asarray(img_pil)[:]
t3_over = time.time()
if VERBOSE_CODE:
print("{:.2f}s to store {} stroke defs, {:.4f}s to "
"compute them, {:.4f}s to plot them".format(
t1_over - t0_over, num_strokes, t2_over - t1_over,
t3_over - t2_over))
return num_strokes
_____no_output_____
</code>
### Recurrent Neural Network Layer Generator_____no_output_____
<code>
# DrawingLSTMSpec - parameters defining the LSTM architecture
# input_spec_size: Size of sequence elements
# num_lstms: Number of LSTMs at each layer
# net_lstm_hiddens: Number of hidden LSTM units
# net_mlp_hiddens: Number of hidden units in MLP layer
DrawingLSTMSpec = collections.namedtuple("DrawingLSTMSpec",
["input_spec_size",
"num_lstms",
"net_lstm_hiddens",
"net_mlp_hiddens"])
class MakeGeneratorLstm(nn.Module):
"""Block of parallel LSTMs with MLP output heads."""
def __init__(self, drawing_lstm_spec, output_size):
"""Build drawing LSTM architecture using spec.
Args:
drawing_lstm_spec: DrawingLSTMSpec with architecture parameters
output_size: Number of outputs for the MLP head layer
"""
super(MakeGeneratorLstm, self).__init__()
self._num_lstms = drawing_lstm_spec.num_lstms
self._input_layer = nn.Sequential(
nn.Linear(drawing_lstm_spec.input_spec_size,
drawing_lstm_spec.net_lstm_hiddens),
torch.nn.LeakyReLU(0.2, inplace=True))
lstms = []
heads = []
for _ in range(self._num_lstms):
lstm_layer = nn.LSTM(
input_size=drawing_lstm_spec.net_lstm_hiddens,
hidden_size=drawing_lstm_spec.net_lstm_hiddens,
num_layers=2, batch_first=True, bias=True)
head_layer = nn.Sequential(
nn.Linear(drawing_lstm_spec.net_lstm_hiddens,
drawing_lstm_spec.net_mlp_hiddens),
torch.nn.LeakyReLU(0.2, inplace=True),
nn.Linear(drawing_lstm_spec.net_mlp_hiddens, output_size))
lstms.append(lstm_layer)
heads.append(head_layer)
self._lstms = nn.ModuleList(lstms)
self._heads = nn.ModuleList(heads)
def forward(self, x):
pred = []
x = self._input_layer(x)*10.0
for i in range(self._num_lstms):
y, _ = self._lstms[i](x)
y = self._heads[i](y)
pred.append(y)
return pred_____no_output_____
</code>
### DrawingLSTM - A Drawing Recurrent Neural Network_____no_output_____
<code>
Genotype = collections.namedtuple("Genotype",
["top_lstm",
"bottom_lstm",
"input_sequence",
"initial_img"])
class DrawingLSTM:
"""LSTM for processing input sequences and generating resultant drawings.
Comprised of two LSTM layers.
"""
def __init__(self, drawing_lstm_spec, allow_strokes_beyond_image_edges):
"""Create DrawingLSTM to interpret input sequences and paint an image.
Args:
drawing_lstm_spec: DrawingLSTMSpec with LSTM architecture parameters
allow_strokes_beyond_image_edges: Draw lines outside image boundary
"""
self._input_spec_size = drawing_lstm_spec.input_spec_size
self._num_lstms = drawing_lstm_spec.num_lstms
self._allow_strokes_beyond_image_edges = allow_strokes_beyond_image_edges
with torch.no_grad():
self.top_lstm = MakeGeneratorLstm(drawing_lstm_spec,
self._input_spec_size)
self.bottom_lstm = MakeGeneratorLstm(drawing_lstm_spec, 12)
self._init_all(self.top_lstm, torch.nn.init.normal_, mean=0., std=0.2)
self._init_all(self.bottom_lstm, torch.nn.init.normal_, mean=0., std=0.2)
def _init_all(self, a_model, init_func, *params, **kwargs):
"""Method for initialising model with given init_func, params and kwargs."""
for p in a_model.parameters():
init_func(p, *params, **kwargs)
def _feed_top_lstm(self, input_seq):
"""Feed all input sequences input_seq into the LSTM models."""
x_in = input_seq.reshape((len(input_seq), 1, self._input_spec_size))
x_in = np.tile(x_in, (SEQ_LENGTH, 1))
x_torch = torch.from_numpy(x_in).type(torch.FloatTensor)
y_torch = self.top_lstm(x_torch)
y_torch = [y_torch_k.detach().numpy() for y_torch_k in y_torch]
del x_in
del x_torch
# There are multiple LSTM heads. For each sequence, read out the head and
# length of intermediary output to keep and return intermediary outputs.
readouts_top = np.clip(
np.round(self._num_lstms/2.0 * (1 + input_seq[:, 1])).astype(np.int32),
0, self._num_lstms-1)
lengths_top = np.clip(
np.round(10.0 * (1 + input_seq[:, 0])).astype(np.int32),
0, SEQ_LENGTH) + 1
intermediate_strings = []
for i in range(len(readouts_top)):
y_torch_i = y_torch[readouts_top[i]][i]
intermediate_strings.append(y_torch_i[0:lengths_top[i], :])
return intermediate_strings
def _feed_bottom_lstm(self, intermediate_strings, input_seq, coeff_size=1):
"""Feed all input sequences into the LSTM models.
Args:
intermediate_strings: top level strings
input_seq: input sequences fed to the top LSTM
coeff_size: sets centre origin
Returns:
strokes: Painting strokes.
painting_commands: Top-level painting commands with origin, angle and scale
information, as well as transparency.
"""
img_center = 112. * coeff_size
coeff_origin = 100. * coeff_size
top_lengths = []
for i in range(len(intermediate_strings)):
top_lengths.append(len(intermediate_strings[i]))
y_flat = np.concatenate(intermediate_strings, axis=0)
tiled_y_flat = y_flat.reshape((len(y_flat), 1, self._input_spec_size))
tiled_y_flat = np.tile(tiled_y_flat, (SEQ_LENGTH, 1))
y_torch = torch.from_numpy(tiled_y_flat).type(torch.FloatTensor)
z_torch = self.bottom_lstm(y_torch)
z_torch = [z_torch_k.detach().numpy() for z_torch_k in z_torch]
del tiled_y_flat
del y_torch
# There are multiple LSTM heads. For each sequence, read out the head and
# length of intermediary output to keep and return intermediary outputs.
readouts = np.clip(np.round(
NUM_LSTMS/2.0 * (1 + y_flat[:, 0])).astype(np.int32), 0, NUM_LSTMS-1)
lengths_bottom = np.clip(
np.round(10.0 * (1 + y_flat[:, 1])).astype(np.int32), 0, SEQ_LENGTH) + 1
strokes = []
painting_commands = []
offset = 0
for i in range(len(intermediate_strings)):
origin_top = [(1+input_seq[i, 2]) * img_center,
(1+input_seq[i, 3]) * img_center]
angle_top = input_seq[i, 4]
scale_top = input_seq[i, 5]
for j in range(len(intermediate_strings[i])):
k = j + offset
z_torch_ij = z_torch[readouts[k]][k]
strokes.append(z_torch_ij[0:lengths_bottom[k], :])
y_ij = y_flat[k]
origin_bottom = [y_ij[2] * coeff_origin, y_ij[3] * coeff_origin]
angle_bottom = y_ij[4]
scale_bottom = y_ij[5]
position_choice = y_ij[6]
transparency = y_ij[7]
painting_command = PaintingCommand(
origin_top, angle_top, scale_top, origin_bottom, angle_bottom,
scale_bottom, position_choice, transparency)
painting_commands.append(painting_command)
offset += top_lengths[i]
del y_flat
return strokes, painting_commands
def make_initial_genotype(self, initial_img, sequence_length,
input_spec_size):
"""Make and return initial DNA weights for LSTMs, input sequence, and image.
Args:
initial_img: Image (to be appended to the genotype)
sequence_length: Length of the input sequence (i.e. number of strokes)
input_spec_size: Number of inputs for each element in the input sequences
Returns:
Genotype NamedTuple with fields: [parameters of network 0,
parameters of network 1,
input sequence,
initial_img]
"""
dna_top = []
with torch.no_grad():
for _, params in self.top_lstm.named_parameters():
dna_top.append(params.clone())
param_size = params.numpy().shape
dna_top[-1] = np.random.uniform(
0.1 * DNA_SCALE, 0.3
* DNA_SCALE) * np.random.normal(size=param_size)
dna_bottom = []
with torch.no_grad():
for _, params in self.bottom_lstm.named_parameters():
dna_bottom.append(params.clone())
param_size = params.numpy().shape
dna_bottom[-1] = np.random.uniform(
0.1 * DNA_SCALE, 0.3
* DNA_SCALE) * np.random.normal(size=param_size)
input_sequence = np.random.uniform(
-1, 1, size=(sequence_length, input_spec_size))
return Genotype(dna_top, dna_bottom, input_sequence, initial_img)
def draw(self, img, genotype):
"""Add to the image using the latest genotype and get latest input sequence.
Args:
img: image to add to.
genotype: as created by make_initial_genotype.
Returns:
image with new strokes added.
"""
t0_draw = time.time()
img = img + genotype.initial_img
input_sequence = genotype.input_sequence
# Generate the strokes for drawing in batch mode.
# The input_sequence length is initially between 10 and 20, but it evolves and can grow to around 200.
intermediate_strings = self._feed_top_lstm(input_sequence)
strokes, painting_commands = self._feed_bottom_lstm(
intermediate_strings, input_sequence)
del intermediate_strings
# Now we can go through the output strings producing the strokes.
t1_draw = time.time()
num_strokes = paint_over_image(
img, strokes, painting_commands, self._allow_strokes_beyond_image_edges,
coeff_size=1)
t2_draw = time.time()
if VERBOSE_CODE:
print(
"Draw {:.2f}s (net {:.2f}s plot {:.2f}s {:.1f}ms/strk {}".format(
t2_draw - t0_draw, t1_draw - t0_draw, t2_draw - t1_draw,
(t2_draw - t1_draw) / num_strokes * 1000, num_strokes))
return img_____no_output_____
</code>
## DrawingGenerator_____no_output_____
<code>
class DrawingGenerator:
"""Creates a drawing using a DrawingLSTM."""
def __init__(self, image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges):
self.primitives = ["c", "r", "l", "b", "p", "j"]
self.pop = []
self.size = image_size
self.fitnesses = np.zeros(1)
self.noise = 2
self.mutation_std = 0.0004
# input_spec_size, num_lstms, net_lstm_hiddens,
# net_mlp_hiddens, output_size, allow_strokes_beyond_image_edges
self.drawing_lstm = DrawingLSTM(drawing_lstm_spec,
allow_strokes_beyond_image_edges)
def make_initial_genotype(self, initial_img, sequence_length, input_spec_size):
"""Use drawing_lstm to create initial genotypye."""
self.genotype = self.drawing_lstm.make_initial_genotype(
initial_img, sequence_length, input_spec_size)
return self.genotype
def _copy_genotype_to_generator(self, genotype):
"""Copy genotype's data into generator's parameters.
Copies the parameters in genotype (genotype.top_lstm[:] and
genotype.bottom_lstm[:]) into the parmaters for the drawing network so it
can be used to evaluate the genotype.
Args:
genotype: as created by make_initial_genotype.
Returns:
None
"""
self.genotype = copy.deepcopy(genotype)
i = 0
with torch.no_grad():
for _, param in self.drawing_lstm.top_lstm.named_parameters():
param.copy_(torch.tensor(self.genotype.top_lstm[i]))
i = i + 1
i = 0
with torch.no_grad():
for _, param in self.drawing_lstm.bottom_lstm.named_parameters():
param.copy_(torch.tensor(self.genotype.bottom_lstm[i]))
i = i + 1
def _interpret_genotype(self, genotype):
img = np.zeros((self.size, self.size, 3), dtype=np.uint8)
img = self.drawing_lstm.draw(img, genotype)
return img
def draw_from_genotype(self, genotype):
"""Copy input sequence and LSTM weights from `genotype`, run and draw."""
self._copy_genotype_to_generator(genotype)
return self._interpret_genotype(self.genotype)
def visualize_genotype(self, genotype):
"""Plot histograms of genotype"s data."""
plt.show()
inp_seq = np.array(genotype.input_sequence).flatten()
plt.title("input seq")
plt.hist(inp_seq)
plt.show()
inp_seq = np.array(genotype.top_lstm).flatten()
plt.title("LSTM top")
plt.hist(inp_seq)
plt.show()
inp_seq = np.array(genotype.bottom_lstm).flatten()
plt.title("LSTM bottom")
plt.hist(inp_seq)
plt.show()
def mutate(self, genotype):
"""Mutates `genotype`. This function is static.
Args:
genotype: genotype structure to mutate parameters of.
Returns:
new_genotype: Mutated copy of supplied genotype.
"""
new_genotype = copy.deepcopy(genotype)
new_input_seq = new_genotype.input_sequence
n = len(new_input_seq)
if np.random.uniform() < 1.0:
# Standard gaussian small mutation of input sequence.
if np.random.uniform() > 0.5:
new_input_seq += (
np.random.uniform(0.001, 0.2) * np.random.normal(
size=new_input_seq.shape))
# Low frequency large mutation of individual parts of the input sequence.
for i in range(n):
if np.random.uniform() < 2.0/n:
for j in range(len(new_input_seq[i])):
if np.random.uniform() < 2.0/len(new_input_seq[i]):
new_input_seq[i][j] = new_input_seq[i][j] + 0.5*np.random.normal()
# Adding and deleting elements from the input sequence.
if np.random.uniform() < 0.01:
if VERBOSE_MUTATION:
print("Mutation: adding")
a = np.random.uniform(-1, 1, size=(1, INPUT_SPEC_SIZE))
pos = np.random.randint(1, len(new_input_seq))
new_input_seq = np.insert(new_input_seq, pos, a, axis=0)
if np.random.uniform() < 0.02:
if VERBOSE_MUTATION:
print("Mutation: deleting")
pos = np.random.randint(1, len(new_input_seq))
new_input_seq = np.delete(new_input_seq, pos, axis=0)
n = len(new_input_seq)
# Swapping two elements in the input sequence.
if np.random.uniform() < 0.01:
element1 = np.random.randint(0, n)
element2 = np.random.randint(0, n)
while element1 == element2:
element2 = np.random.randint(0, n)
temp = copy.deepcopy(new_input_seq[element1])
new_input_seq[element1] = copy.deepcopy(new_input_seq[element2])
new_input_seq[element2] = temp
# Duplicate an element in the input sequence (with some mutation).
if np.random.uniform() < 0.01:
if VERBOSE_MUTATION:
print("Mutation: duplicating")
element1 = np.random.randint(0, n)
element2 = np.random.randint(0, n)
while element1 == element2:
element2 = np.random.randint(0, n)
new_input_seq[element1] = copy.deepcopy(new_input_seq[element2])
noise = 0.05 * np.random.normal(size=new_input_seq[element1].shape)
new_input_seq[element1] += noise
# Ensure that the input sequence is always between -1 and 1
# so that positions make sense.
new_genotype = new_genotype._replace(
input_sequence=np.clip(new_input_seq, -1.0, 1.0))
# Mutates dna of networks.
if np.random.uniform() < 1.0:
for net in range(2):
for layer in range(len(new_genotype[net])):
weights = new_genotype[net][layer]
if np.random.uniform() < 0.5:
noise = 0.00001 * np.random.standard_cauchy(size=weights.shape)
weights += noise
else:
noise = np.random.normal(size=weights.shape)
noise *= np.random.uniform(0.0001, 0.006)
weights += noise
if np.random.uniform() < 0.01:
noise = np.random.normal(size=weights.shape)
noise *= np.random.uniform(0.1, 0.3)
weights = noise
# Ensure weights are between -1 and 1.
weights = np.clip(weights, -1.0, 1.0)
new_genotype[net][layer] = weights
return new_genotype_____no_output_____
</code>
## Evaluator_____no_output_____
<code>
class Evaluator:
"""Evaluator for a drawing."""
def __init__(self, image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges):
self.drawing_generator = DrawingGenerator(image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges)
self.calls = 0
def make_initial_genotype(self, img, sequence_length, input_spec_size):
return self.drawing_generator.make_initial_genotype(img, sequence_length,
input_spec_size)
def evaluate_genotype(self, pickled_genotype, id_num):
"""Evaluate genotype and return genotype's image.
Args:
pickled_genotype: pickled genotype to be evaluated.
id_num: ID number of genotype.
Returns:
dict: drawing and id_num.
"""
genotype = cloudpickle.loads(pickled_genotype)
drawing = self.drawing_generator.draw_from_genotype(genotype)
self.calls += 1
return {"drawing": drawing, "id": id_num}
def mutate(self, genotype):
"""Create a mutated version of genotype."""
return self.drawing_generator.mutate(genotype)_____no_output_____
</code>
# Evolution_____no_output_____## Fitness calculation, tournament, and crossover_____no_output_____
<code>
IMAGE_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).cuda()
IMAGE_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).cuda()
def get_fitness(pictures, use_projective_transform,
projective_transform_coefficient):
"""Run CLIP on a batch of `pictures` and return `fitnesses`.
Args:
pictures: batch of images to evaluate
use_projective_transform: Add transformed versions of the image
projective_transform_coefficient: Degree of transform
Returns:
Similarities between images and the text
"""
# Do we use projective transforms of images before CLIP eval?
t0 = time.time()
pictures_trans = np.swapaxes(np.array(pictures), 1, 3) / 244.0
if use_projective_transform:
for i in range(len(pictures_trans)):
matrix = np.eye(3) + (
projective_transform_coefficient * np.random.normal(size=(3, 3)))
tform = transform.ProjectiveTransform(matrix=matrix)
pictures_trans[i] = transform.warp(pictures_trans[i], tform.inverse)
# Run the CLIP evaluator.
t1 = time.time()
image_input = torch.tensor(np.stack(pictures_trans)).cuda()
image_input -= IMAGE_MEAN[:, None, None]
image_input /= IMAGE_STD[:, None, None]
with torch.no_grad():
image_features = model.encode_image(image_input).float()
t2 = time.time()
similarity = torch.cosine_similarity(
text_features, image_features, dim=1).cpu().numpy()
t3 = time.time()
if VERBOSE_CODE:
print(f"get_fitness init {t1-t0:.4f}s, CLIP {t2-t1:.4f}s, sim {t3-t2:.4f}s")
return similarity
def crossover(dna_winner, dna_loser, crossover_prob):
"""Create new genotype by combining two genotypes.
Randomly replaces parts of the genotype 'dna_winner' with parts of dna_loser
to create a new genotype based mostly on both 'parents'.
Args:
dna_winner: The high-fitness parent genotype - gets replaced with child.
dna_loser: The lower-fitness parent genotype.
crossover_prob: Probability of crossover between winner and loser.
Returns:
dna_winner: The result of crossover from parents.
"""
# Copy single input signals
for i in range(len(dna_winner[2])):
if i < len(dna_loser[2]):
if np.random.uniform() < crossover_prob:
dna_winner[2][i] = copy.deepcopy(dna_loser[2][i])
# Copy whole modules
for i in range(len(dna_winner[0])):
if i < len(dna_loser[0]):
if np.random.uniform() < crossover_prob:
dna_winner[0][i] = copy.deepcopy(dna_loser[0][i])
# Copy whole modules
for i in range(len(dna_winner[1])):
if i < len(dna_loser[1]):
if np.random.uniform() < crossover_prob:
dna_winner[1][i] = copy.deepcopy(dna_loser[1][i])
return dna_winner
def truncation_selection(population, fitnesses, evaluator, use_crossover,
crossover_prob):
"""Create new population using truncation selection.
Creates a new population by copying across the best 50% genotypes and
filling the rest with (for use_crossover==False) a mutated copy of each
genotype or (for use_crossover==True) with children created through crossover
between each winner and a genotype in the bottom 50%.
Args:
population: list of current population genotypes.
fitnesses: list of evaluated fitnesses.
evaluator: class that evaluates a draw generator.
use_crossover: Whether to use crossover between winner and loser.
crossover_prob: Probability of crossover between winner and loser.
Returns:
new_pop: the new population.
best: genotype.
"""
fitnesses = np.array(-fitnesses)
ordered_fitness_ids = fitnesses.argsort()
best = copy.deepcopy(population[ordered_fitness_ids[0]])
pop_size = len(population)
if not use_crossover:
new_pop = []
for i in range(int(pop_size/2)):
new_pop.append(copy.deepcopy(population[ordered_fitness_ids[i]]))
for i in range(int(pop_size/2)):
new_pop.append(evaluator.mutate(
copy.deepcopy(population[ordered_fitness_ids[i]])))
else:
new_pop = []
for i in range(int(pop_size/2)):
new_pop.append(copy.deepcopy(population[ordered_fitness_ids[i]]))
for i in range(int(pop_size/2)):
new_pop.append(evaluator.mutate(crossover(
copy.deepcopy(population[ordered_fitness_ids[i]]),
population[ordered_fitness_ids[int(pop_size/2) + i]], crossover_prob
)))
return new_pop, best_____no_output_____
</code>
## Remote workers_____no_output_____
<code>
VERBOSE_DURATION = False
@ray.remote
class Worker(object):
"""Takes a pickled dna and evaluates it, returning result."""
def __init__(self, image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges):
self.evaluator = Evaluator(image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges)
def compute(self, dna_pickle, genotype_id):
if VERBOSE_DURATION:
t0 = time.time()
res = self.evaluator.evaluate_genotype(dna_pickle, genotype_id)
if VERBOSE_DURATION:
duration = time.time() - t0
print(f"Worker {genotype_id} evaluated params in {duration:.1f}sec")
return res
def create_workers(num_workers, image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges):
"""Create the workers.
Args:
num_workers: Number of parallel workers for evaluation.
image_size: Length of side of (square) image
drawing_lstm_spec: DrawingLSTMSpec for LSTM network
allow_strokes_beyond_image_edges: Whether to draw outside the edges
Returns:
List of workers.
"""
worker_pool = []
for w_i in range(num_workers):
print("Creating worker", w_i, flush=True)
worker_pool.append(Worker.remote(image_size, drawing_lstm_spec,
allow_strokes_beyond_image_edges))
return worker_pool
_____no_output_____
</code>
## Plotting_____no_output_____
<code>
def plot_training_res(batch_drawings, fitness_history, idx=None):
"""Plot fitnesses and timings.
Args:
batch_drawings: Drawings
fitness_history: History of fitnesses
idx: Index of drawing to show, default is highest fitness
"""
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
if idx is None:
idx = np.argmax(fitness_history[-1])
ax1.plot(fitness_history, ".")
ax1.set_title("Fitnesses")
ax2.imshow(batch_drawings[idx])
ax2.set_title(f"{PROMPT} (fit: {fitness_history[-1][idx]:.3f})")
plt.show()
def plot_samples(batch_drawings, num_samples=16):
"""Plot sample of drawings.
Args:
batch_drawings: Batch of drawings to sample from
num_samples: Number to display
"""
num_samples = min(len(batch_drawings), num_samples)
num_rows = int(math.floor(np.sqrt(num_samples)))
num_cols = int(math.ceil(num_samples / num_rows))
row_images = []
for c in range(0, num_samples, num_cols):
if c + num_cols <= num_samples:
row_images.append(np.concatenate(batch_drawings[c:(c+num_cols)], axis=1))
composite_image = np.concatenate(row_images, axis=0)
_, ax = plt.subplots(1, 1, figsize=(20, 20))
ax.imshow(composite_image)
ax.set_title(PROMPT)_____no_output_____
</code>
## Population and evolution main loop_____no_output_____
<code>
def make_population(pop_size, evaluator, image_size, input_spec_size,
sequence_length):
"""Make initial population.
Args:
pop_size: number of genotypes in population.
evaluator: An Evaluator class instance for generating initial genotype.
image_size: Size of initial image for genotype to draw on.
input_spec_size: Sequence element size
sequence_length: Initial length of sequences
Returns:
Initialised population.
"""
print(f"Creating initial population of size {pop_size}")
pop = []
for _ in range(pop_size):
a_genotype = evaluator.make_initial_genotype(
img=np.zeros((image_size, image_size, 3), dtype=np.uint8),
sequence_length=sequence_length,
input_spec_size=input_spec_size)
pop.append(a_genotype)
return pop
def evolution_loop(population, worker_pool, evaluator, num_generations,
use_crossover, crossover_prob,
use_projective_transform, projective_transform_coefficient,
plot_every, plot_batch):
"""Create population and run evolution.
Args:
population: Initial population of genotypes
worker_pool: List of workers of parallel evaluations
evaluator: image evaluator to calculate fitnesses
num_generations: number of generations to run
use_crossover: Whether crossover is used for offspring
crossover_prob: Probability that crossover takes place
use_projective_transform: Use projective transforms in evaluation
projective_transform_coefficient: Degree of projective transform
plot_every: number of generations between new plots
plot_batch: whether to show all samples in the batch when plotting
"""
population_size = len(population)
num_workers = len(worker_pool)
print("Population of {} genotypes being evaluated by {} workers".format(
population_size, num_workers))
drawings = {}
fitness_history = []
init_gen = len(fitness_history)
print(f"(Re)starting evolution at generation {init_gen}")
for gen in range(init_gen, num_generations):
# Drawing
t0_loop = time.time()
futures = []
for j in range(0, population_size, num_workers):
for i in range(num_workers):
futures.append(worker_pool[i].compute.remote(
cloudpickle.dumps(population[i+j]), i+j))
data = ray.get(futures)
for i in range(num_workers):
drawings[data[i+j]["id"]] = data[j+i]["drawing"]
batch_drawings = []
for i in range(population_size):
batch_drawings.append(drawings[i])
# Fitness evaluation using CLIP
t1_loop = time.time()
fitnesses = get_fitness(batch_drawings, use_projective_transform,
projective_transform_coefficient)
fitness_history.append(copy.deepcopy(fitnesses))
# Tournament
t2_loop = time.time()
population, best_genotype = truncation_selection(
population, fitnesses, evaluator, use_crossover, crossover_prob)
t3_loop = time.time()
duration_draw = t1_loop - t0_loop
duration_fit = t2_loop - t1_loop
duration_tournament = t3_loop - t2_loop
duration_total = t3_loop - t0_loop
if gen % plot_every == 0:
if VISUALIZE_GENOTYPE:
evaluator.drawing_generator.visualize_genotype(best_genotype)
print("Draw: {:.2f}s fit: {:.2f}s evol: {:.2f}s total: {:.2f}s".format(
duration_draw, duration_fit, duration_tournament, duration_total))
plot_training_res(batch_drawings, fitness_history)
if plot_batch:
num_samples_to_plot = int(math.pow(
math.floor(np.sqrt(population_size)), 2))
plot_samples(batch_drawings, num_samples=num_samples_to_plot)
_____no_output_____
</code>
# Configure and Generate_____no_output_____
<code>
#@title Hyperparameters
#@markdown Evolution parameters: population size and number of generations.
POPULATION_SIZE = 10 #@param {type:"slider", min:4, max:100, step:2}
NUM_GENERATIONS = 5000 #@param {type:"integer", min:100}
#@markdown Number of workers working in parallel (should be equal to or smaller than the population size).
NUM_WORKERS = 10 #@param {type:"slider", min:4, max:100, step:2}
#@markdown Crossover in evolution.
USE_CROSSOVER = True #@param {type:"boolean"}
CROSSOVER_PROB = 0.01 #@param {type:"number"}
#@markdown Number of LSTMs, each one encoding a group of strokes.
NUM_LSTMS = 5 #@param {type:"integer", min:1, max:5}
#@markdown Number of inputs for each element in the input sequences.
INPUT_SPEC_SIZE = 10 #@param {type:"integer"}
#@markdown Length of the input sequence fed to the LSTMs (determines number of strokes).
SEQ_LENGTH = 20 #@param {type:"integer", min:20, max:200}
#@markdown Rendering parameter.
ALLOW_STROKES_BEYOND_IMAGE_EDGES = True #@param {type:"boolean"}
#@markdown CLIP evaluation: do we use projective transforms of images?
USE_PROJECTIVE_TRANSFORM = True #@param {type:"boolean"}
PROJECTIVE_TRANSFORM_COEFFICIENT = 0.000001 #@param {type:"number"}
#@markdown These parameters should be edited mostly only for debugging reasons.
NET_LSTM_HIDDENS = 40 #@param {type:"integer"}
NET_MLP_HIDDENS = 20 #@param {type:"integer"}
# Scales the values used in genotype's initialisation.
DNA_SCALE = 1.0 #@param {type:"number"}
IMAGE_SIZE = 224 #@param {type:"integer"}
VERBOSE_CODE = False #@param {type:"boolean"}
VISUALIZE_GENOTYPE = False #@param {type:"boolean"}
VERBOSE_MUTATION = False #@param {type:"boolean"}
#@markdown Number of generations between new plots.
PLOT_EVERY_NUM_GENS = 5 #@param {type:"integer"}
#@markdown Whether to show all samples in the batch when plotting.
PLOT_BATCH = True # @param {type:"boolean"}
assert POPULATION_SIZE % NUM_WORKERS == 0, "POPULATION_SIZE not multiple of NUM_WORKERS"_____no_output_____
</code>
# Running the original evolutionary algorithm
This is the original, inefficient version of Arnheim, which uses a genetic algorithm to optimize the picture. It takes at least 12 hours to produce an image using 50 workers. In our paper we used 500-1000 GPUs, which sped things up considerably. Refer to Arnheim 2 for a far more efficient way to generate images with a similar architecture.
Try prompts like “A photorealistic chicken”. Feel free to modify this colab to include your own way of generating and evolving images like we did in figure 2 here https://arxiv.org/pdf/2105.00162.pdf._____no_output_____
<code>
# @title Get text input and run evolution
PROMPT = "an apple" #@param {type:"string"}
# Tokenize prompts and compute CLIP features.
text_input = clip.tokenize(PROMPT).to(device)
with torch.no_grad():
text_features = model.encode_text(text_input)
ray.shutdown()
ray.init()
drawing_lstm_arch = DrawingLSTMSpec(INPUT_SPEC_SIZE,
NUM_LSTMS,
NET_LSTM_HIDDENS,
NET_MLP_HIDDENS)
workers = create_workers(NUM_WORKERS, IMAGE_SIZE, drawing_lstm_arch,
ALLOW_STROKES_BEYOND_IMAGE_EDGES)
drawing_evaluator = Evaluator(IMAGE_SIZE, drawing_lstm_arch,
ALLOW_STROKES_BEYOND_IMAGE_EDGES)
drawing_population = make_population(POPULATION_SIZE, drawing_evaluator,
IMAGE_SIZE, INPUT_SPEC_SIZE, SEQ_LENGTH)
evolution_loop(drawing_population, workers, drawing_evaluator, NUM_GENERATIONS,
USE_CROSSOVER, CROSSOVER_PROB,
USE_PROJECTIVE_TRANSFORM, PROJECTIVE_TRANSFORM_COEFFICIENT,
PLOT_EVERY_NUM_GENS, PLOT_BATCH)_____no_output_____
</code>
| {
"repository": "TotoArt/arnheim",
"path": "arnheim_1.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 186,
"size": 66874,
"hexsha": "d0ff550e4442387f5cdb402f87432f5d692496d9",
"max_line_length": 497,
"avg_line_length": 44.3167660702,
"alphanum_fraction": 0.5114394234
} |
# Notebook from jjc2718/generic-expression-patterns
Path: LV_analysis/1_get_multiplier_LV_coverage.ipynb
# Coverage of MultiPLIER LV
The goal of this notebook is to examine why genes were found to be generic. Specifically, this notebook is trying to answer the question: Are generic genes found in more multiplier latent variables compared to specific genes?
The PLIER model performs a matrix factorization of gene expression data to get two matrices: loadings (Z) and latent matrix (B). The loadings (Z) are constrained to align with curated pathways and gene sets specified by prior knowledge [Figure 1B of Taroni et al.](https://www.cell.com/cell-systems/pdfExtended/S2405-4712(19)30119-X). This ensures that some, but not all, latent variables capture known biology. PLIER accomplishes this by applying a penalty so that each individual latent variable represents only a few gene sets, which makes the latent variables more interpretable. Ideally there would be one latent variable associated with one gene set unambiguously.
While the PLIER model was trained on specific datasets, MultiPLIER extended this approach to all of recount2, where the latent variables should correspond to specific pathways or gene sets of interest. Therefore, we will look at the coverage of generic genes versus other genes across these MultiPLIER latent variables, which represent biological patterns.
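For reference, the PLIER objective is roughly of the following form (a paraphrase of the published formulation; it is not computed anywhere in this notebook):
$$\min_{Z, B, U}\; \|Y - ZB\|_F^2 + \lambda_1 \|Z - CU\|_F^2 + \lambda_2 \|B\|_F^2 + \lambda_3 \|U\|_1 \quad \text{s.t. } U \geq 0,\; Z \geq 0$$
where $Y$ is the gene expression matrix, $C$ encodes the prior-knowledge gene sets, and $U$ determines which gene sets align with which latent variables.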
**Definitions:**
* Generic genes: Genes that are consistently differentially expressed across multiple simulated experiments.
* Other genes: These are all other non-generic genes. These genes include those that are not consistently differentially expressed across simulated experiments - i.e. the genes are specifically changed in an experiment. It could also indicate genes that are consistently unchanged (i.e. housekeeping genes)_____no_output_____
<code>
%load_ext autoreload
%autoreload 2
import os
import random
import textwrap
import scipy
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
from ponyo import utils
from generic_expression_patterns_modules import lv/home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning:
examples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.
"found relative to the 'datapath' directory.".format(key))
# Get data directory containing gene summary data
base_dir = os.path.abspath(os.path.join(os.getcwd(), "../"))
data_dir = os.path.join(base_dir, "human_general_analysis")
# Read in config variables
config_filename = os.path.abspath(
os.path.join(base_dir, "configs", "config_human_general.tsv")
)
params = utils.read_config(config_filename)
local_dir = params["local_dir"]
project_id = params["project_id"]
quantile_threshold = 0.98_____no_output_____# Output file
nonzero_figure_filename = "nonzero_LV_coverage.svg"
highweight_figure_filename = "highweight_LV_coverage.svg"_____no_output_____
</code>
## Load data_____no_output_____
<code>
# Get gene summary file
summary_data_filename = os.path.join(data_dir, f"generic_gene_summary_{project_id}.tsv")_____no_output_____# Load gene summary data
data = pd.read_csv(summary_data_filename, sep="\t", index_col=0, header=0)
# Check that genes are unique since we will be using them as dictionary keys below
assert data.shape[0] == len(data["Gene ID"].unique())_____no_output_____# Load multiplier models
# Converted formatted pickle files (loaded using phenoplier environment) from
# https://github.com/greenelab/phenoplier/blob/master/nbs/01_preprocessing/005-multiplier_recount2_models.ipynb
# into .tsv files
multiplier_model_z = pd.read_csv(
"multiplier_model_z.tsv", sep="\t", index_col=0, header=0
)_____no_output_____# Get a rough sense for how many genes contribute to a given LV
# (i.e. how many genes have a value != 0 for a given LV)
# Notice that multiPLIER is a sparse model
(multiplier_model_z != 0).sum().sort_values(ascending=True)_____no_output_____
</code>
## Get gene data
Define generic genes based on simulated gene ranking. Refer to [figure](https://github.com/greenelab/generic-expression-patterns/blob/master/human_general_analysis/gene_ranking_log2FoldChange.svg) as a guide.
**Definitions:**
* Generic genes: `Percentile (simulated) >= 60`
(Having a high rank indicates that these genes are consistently changed across simulated experiments.)
* Other genes: `Percentile (simulated) < 60`
(Having a lower rank indicates that these genes are not consistently changed across simulated experiments - i.e. the genes are specifically changed in an experiment. It could also indicate genes that are consistently unchanged.)_____no_output_____
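For illustration, the split performed by `lv.get_generic_specific_genes` presumably amounts to thresholding the `Percentile (simulated)` column described above; here is a minimal sketch under that assumption (the actual helper in `generic_expression_patterns_modules` may differ in its details):
<code>
# Hedged sketch of the generic/other split, assuming the summary table
# has a "Percentile (simulated)" column as described above.
def split_generic_other(summary_df, percentile_threshold):
    is_generic = summary_df["Percentile (simulated)"] >= percentile_threshold
    return {
        "generic": summary_df.index[is_generic].tolist(),
        "other": summary_df.index[~is_generic].tolist(),
    }_____no_output_____
</code>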
<code>
generic_threshold = 60
dict_genes = lv.get_generic_specific_genes(data, generic_threshold)(17755, 13)
No. of generic genes: 7102
No. of other genes: 10653
# Check overlap between multiplier genes and our genes
multiplier_genes = list(multiplier_model_z.index)
our_genes = list(data.index)
shared_genes = set(our_genes).intersection(multiplier_genes)
print(len(our_genes))
print(len(shared_genes))17755
6374
# Drop gene ids not used in multiplier analysis
processed_dict_genes = lv.process_generic_specific_gene_lists(
dict_genes, multiplier_model_z
)_____no_output_____# Check numbers add up
assert len(shared_genes) == len(processed_dict_genes["generic"]) + len(
processed_dict_genes["other"]
)_____no_output_____
</code>
## Get coverage of LVs
For each gene (generic or other) we want to find:
1. The number of LVs in which the gene is present
2. The number of LVs that the gene contributes a lot to (i.e. the gene is highly weighted within that LV)_____no_output_____### Nonzero LV coverage_____no_output_____
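Conceptually, nonzero coverage reduces to counting, for each gene, the LV columns of the loadings matrix with a nonzero entry. A minimal sketch of what `lv.get_nonzero_LV_coverage` presumably computes for each gene list:
<code>
# Hedged sketch: for each gene in gene_list, count how many LVs
# (columns of the loadings DataFrame z, e.g. multiplier_model_z)
# have a nonzero loading for that gene.
def nonzero_lv_coverage(gene_list, z):
    return (z.loc[gene_list] != 0).sum(axis=1)_____no_output_____
</code>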
<code>
dict_nonzero_coverage = lv.get_nonzero_LV_coverage(
processed_dict_genes, multiplier_model_z
)_____no_output_____# Check genes mapped correctly
assert processed_dict_genes["generic"][0] in dict_nonzero_coverage["generic"].index
assert len(dict_nonzero_coverage["generic"]) == len(processed_dict_genes["generic"])
assert len(dict_nonzero_coverage["other"]) == len(processed_dict_genes["other"])_____no_output_____
</code>
### High weight LV coverage_____no_output_____
<code>
# Quick look at the distribution of gene weights per LV
sns.distplot(multiplier_model_z["LV3"], kde=False)
plt.yscale("log")_____no_output_____dict_highweight_coverage = lv.get_highweight_LV_coverage(
processed_dict_genes, multiplier_model_z, quantile_threshold
)_____no_output_____# Check genes mapped correctly
assert processed_dict_genes["generic"][0] in dict_highweight_coverage["generic"].index
assert len(dict_highweight_coverage["generic"]) == len(processed_dict_genes["generic"])
assert len(dict_highweight_coverage["other"]) == len(processed_dict_genes["other"])_____no_output_____
</code>
### Assemble LV coverage and plot_____no_output_____
<code>
all_coverage = []
for gene_label in dict_genes.keys():
merged_df = pd.DataFrame(
dict_nonzero_coverage[gene_label], columns=["nonzero LV coverage"]
).merge(
pd.DataFrame(
dict_highweight_coverage[gene_label], columns=["highweight LV coverage"]
),
left_index=True,
right_index=True,
)
merged_df["gene type"] = gene_label
all_coverage.append(merged_df)
all_coverage_df = pd.concat(all_coverage)_____no_output_____all_coverage_df = lv.assemble_coverage_df(
processed_dict_genes, dict_nonzero_coverage, dict_highweight_coverage
)
all_coverage_df.head()_____no_output_____# Plot coverage distribution given list of generic coverage, specific coverage
nonzero_fig = sns.boxplot(
data=all_coverage_df,
x="gene type",
y="nonzero LV coverage",
notch=True,
palette=["#2c7fb8", "lightgrey"],
)
nonzero_fig.set_xlabel(None)
nonzero_fig.set_xticklabels(
["generic genes", "other genes"], fontsize=14, fontname="Verdana"
)
nonzero_fig.set_ylabel(
textwrap.fill("Number of LVs", width=30), fontsize=14, fontname="Verdana"
)
nonzero_fig.tick_params(labelsize=14)
nonzero_fig.set_title(
"Number of LVs genes are present in", fontsize=16, fontname="Verdana"
)_____no_output_____# Plot coverage distribution given list of generic coverage, specific coverage
highweight_fig = sns.boxplot(
data=all_coverage_df,
x="gene type",
y="highweight LV coverage",
notch=True,
palette=["#2c7fb8", "lightgrey"],
)
highweight_fig.set_xlabel(None)
highweight_fig.set_xticklabels(
["generic genes", "other genes"], fontsize=14, fontname="Verdana"
)
highweight_fig.set_ylabel(
textwrap.fill("Number of LVs", width=30), fontsize=14, fontname="Verdana"
)
highweight_fig.tick_params(labelsize=14)
highweight_fig.set_title(
"Number of LVs genes contribute highly to", fontsize=16, fontname="Verdana"
)_____no_output_____
</code>
## Calculate statistics
* Is the reduction in generic coverage significant?
* Is the difference between generic and other genes significant?_____no_output_____
<code>
# Test: mean number of LVs generic genes present in vs mean number of LVs that generic gene is high weight in
# (compare two blue boxes between plots)
generic_nonzero = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"nonzero LV coverage"
].values
generic_highweight = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"highweight LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(generic_nonzero, generic_highweight)
print(pvalue)0.0
# Test: mean number of LVs generic genes present in vs mean number of LVs other genes high weight in
# (compare blue and grey boxes in high weight plot)
other_highweight = all_coverage_df[all_coverage_df["gene type"] == "other"][
"highweight LV coverage"
].values
generic_highweight = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"highweight LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(other_highweight, generic_highweight)
print(pvalue)6.307987998525766e-119
# Check that coverage of other and generic genes across all LVs is NOT signficantly different
# (compare blue and grey boxes in nonzero weight plot)
other_nonzero = all_coverage_df[all_coverage_df["gene type"] == "other"][
"nonzero LV coverage"
].values
generic_nonzero = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"nonzero LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(other_nonzero, generic_nonzero)
print(pvalue)0.23947472582519233
</code>
## Get LVs that generic genes are highly weighted in
Since we are using quantiles to get high weight genes per LV, each LV has the same number of high weight genes. For each set of high weight genes, we will get the proportion of generic vs other genes. We will select the LVs that have a high proportion of generic genes to examine._____no_output_____
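A hedged sketch of the quantile-based selection that `lv.get_prop_highweight_generic_genes` presumably performs (each LV's high-weight gene set is defined by that LV's own weight quantile; the real helper may differ in details):
<code>
# Hedged sketch: per LV, genes above that LV's weight quantile are
# "high weight"; report the fraction of high-weight genes that are generic.
def prop_generic_per_lv(z, generic_genes, q):
    thresholds = z.quantile(q)              # one threshold per LV (column)
    high_weight = z.gt(thresholds, axis=1)  # boolean genes x LVs mask
    props = {}
    for lv_name in z.columns:
        hw_genes = z.index[high_weight[lv_name]]
        props[lv_name] = hw_genes.isin(generic_genes).mean()
    return props_____no_output_____
</code>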
<code>
# Get proportion of generic genes per LV
prop_highweight_generic_dict = lv.get_prop_highweight_generic_genes(
processed_dict_genes, multiplier_model_z, quantile_threshold
)135
# Return selected rows from summary matrix
multiplier_model_summary = pd.read_csv(
"multiplier_model_summary.tsv", sep="\t", index_col=0, header=0
)
lv.create_LV_df(
prop_highweight_generic_dict,
multiplier_model_summary,
0.5,
"Generic_LV_summary_table.tsv",
)LV2 0.5185185185185185
LV3 0.6444444444444445
LV7 0.562962962962963
LV11 0.5851851851851851
LV17 0.5703703703703704
LV18 0.5333333333333333
LV22 0.5185185185185185
LV26 0.5111111111111111
LV32 0.5481481481481482
LV34 0.5333333333333333
LV54 0.562962962962963
LV57 0.5259259259259259
LV58 0.5703703703703704
LV61 0.6222222222222222
LV68 0.6296296296296297
LV101 0.5481481481481482
LV135 0.5185185185185185
LV473 0.5481481481481482
LV524 0.6074074074074074
LV542 0.5259259259259259
LV603 0.5185185185185185
LV719 0.6074074074074074
LV728 0.6074074074074074
LV765 0.6370370370370371
LV767 0.5333333333333333
LV787 0.5037037037037037
LV823 0.6222222222222222
LV913 0.5777777777777777
LV920 0.6074074074074074
LV932 0.5185185185185185
LV958 0.5333333333333333
LV960 0.5333333333333333
LV977 0.5481481481481482
# Plot distribution of weights for these nodes
node = "LV61"
lv.plot_dist_weights(
node,
multiplier_model_z,
shared_genes,
20,
all_coverage_df,
f"weight_dist_{node}.svg",
)0 6.540359
1 6.527677
2 2.738563
3 1.470443
4 1.272303
5 0.982001
6 0.842076
7 0.715623
8 0.714085
9 0.615867
10 0.567222
11 0.535279
12 0.532891
13 0.525537
14 0.522841
15 0.483024
16 0.475159
17 0.469341
18 0.462943
19 0.439901
Name: LV61, dtype: float64
</code>
## Save_____no_output_____
<code>
# Save plot
nonzero_fig.figure.savefig(
nonzero_figure_filename,
format="svg",
bbox_inches="tight",
transparent=True,
pad_inches=0,
dpi=300,
)
# Save plot
highweight_fig.figure.savefig(
highweight_figure_filename,
format="svg",
bbox_inches="tight",
transparent=True,
pad_inches=0,
dpi=300,
)_____no_output_____
</code>
**Takeaway:**
* In the first nonzero boxplot, generic and other genes are present in a similar number of LVs. This isn't surprising since the number of genes that contribute to each LV is <1000.
* In the second highweight boxplot, other genes are highly weighted in more LVs compared to generic genes. This would indicate that generic genes contribute a lot to a few LVs.
This is the opposite of the trend found using [_P. aeruginosa_ data](1_get_eADAGE_LV_coverage.ipynb). Perhaps this indicates that generic genes have different behavior/roles depending on the organism. In humans, perhaps these generic genes are related to a few hyper-responsive pathways, whereas in _P. aeruginosa_ perhaps generic genes are associated with many pathways, acting as *gene hubs*.
* A number of LVs that contain a high proportion of generic genes can be found in this [table](Generic_LV_summary_table.tsv). By quick visual inspection, it looks like many LVs are associated with immune response, signaling, and metabolism, which is consistent with the hypothesis that these generic genes are related to hyper-responsive pathways._____no_output_____
| {
"repository": "jjc2718/generic-expression-patterns",
"path": "LV_analysis/1_get_multiplier_LV_coverage.ipynb",
"matched_keywords": [
"gene expression",
"biology"
],
"stars": null,
"size": 87432,
"hexsha": "d0ff69c2b0c3ddf00c15128fbd4e7372c1fd6d94",
"max_line_length": 20612,
"avg_line_length": 92.422832981,
"alphanum_fraction": 0.8346715161
} |
# Notebook from PythonOT/pythonot.github.io
Path: master/_downloads/cc2d1e2d3ec6e3b42bceea0b50c4db77/plot_wass1d_torch.ipynb
<code>
%matplotlib inline_____no_output_____
</code>
# Wasserstein 1D with PyTorch
In this small example, we consider the following minimization problem:
\begin{align}\mu^* = \min_\mu W(\mu,\nu)\end{align}
where $\nu$ is a reference 1D measure. The problem is handled
by a projected gradient descent method, where the gradient is computed
by PyTorch automatic differentiation. The projection on the simplex
ensures that the iterate will remain on the probability simplex.
This example illustrates both the `wasserstein_1d` function and backend use within
the POT framework.
_____no_output_____
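As a quick illustration of the projection step used below, `proj_simplex` from `ot.utils` maps an arbitrary vector to the closest point on the probability simplex. A small usage sketch (the input values are made up):
<code>
import torch
from ot.utils import proj_simplex

# Project an arbitrary vector onto the probability simplex:
# the result is nonnegative and sums to one.
v = torch.tensor([0.2, 0.5, 0.9])
p = proj_simplex(v)
print(p, p.sum())_____no_output_____
</code>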
<code>
# Author: Nicolas Courty <[email protected]>
# Rémi Flamary <[email protected]>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import matplotlib as mpl
import torch
from ot.lp import wasserstein_1d
from ot.datasets import make_1D_gauss as gauss
from ot.utils import proj_simplex
red = np.array(mpl.colors.to_rgb('red'))
blue = np.array(mpl.colors.to_rgb('blue'))
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a = gauss(n, m=20, s=5) # m= mean, s= std
b = gauss(n, m=60, s=10)
# enforce sum to one on the support
a = a / a.sum()
b = b / b.sum()
device = "cuda" if torch.cuda.is_available() else "cpu"
# use pyTorch for our data
x_torch = torch.tensor(x).to(device=device)
a_torch = torch.tensor(a).to(device=device).requires_grad_(True)
b_torch = torch.tensor(b).to(device=device)
lr = 1e-6
nb_iter_max = 800
loss_iter = []
pl.figure(1, figsize=(8, 4))
pl.plot(x, a, 'b', label='Source distribution')
pl.plot(x, b, 'r', label='Target distribution')
for i in range(nb_iter_max):
# Compute the Wasserstein 1D with torch backend
loss = wasserstein_1d(x_torch, x_torch, a_torch, b_torch, p=2)
# record the corresponding loss value
loss_iter.append(loss.clone().detach().cpu().numpy())
loss.backward()
# performs a step of projected gradient descent
with torch.no_grad():
grad = a_torch.grad
a_torch -= a_torch.grad * lr # step
a_torch.grad.zero_()
a_torch.data = proj_simplex(a_torch) # projection onto the simplex
# plot one curve every 10 iterations
if i % 10 == 0:
mix = float(i) / nb_iter_max
pl.plot(x, a_torch.clone().detach().cpu().numpy(), c=(1 - mix) * blue + mix * red)
pl.legend()
pl.title('Distribution along the iterations of the projected gradient descent')
pl.show()
pl.figure(2)
pl.plot(range(nb_iter_max), loss_iter, lw=3)
pl.title('Evolution of the loss along iterations', fontsize=16)
pl.show()_____no_output_____
</code>
## Wasserstein barycenter
In this example, we consider the following Wasserstein barycenter problem
\begin{align}\eta^* = \min_\eta\; (1-t)W(\mu,\eta) + tW(\eta,\nu)\end{align}
where $\mu$ and $\nu$ are reference 1D measures, and $t$
is a parameter $\in [0,1]$. The problem is handled by a projected gradient
descent method, where the gradient is computed by PyTorch automatic differentiation.
The projection on the simplex ensures that the iterate will remain on the
probability simplex.
This example illustrates both the `wasserstein_1d` function and backend use within the
POT framework.
_____no_output_____
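Both the example above and the one below apply the same projected gradient update, which can be written compactly as
\begin{align}\eta \leftarrow \mathrm{Proj}_{\Delta}\left(\eta - \mathrm{lr}\,\nabla_\eta \mathcal{L}(\eta)\right)\end{align}
where $\Delta$ denotes the probability simplex, $\mathrm{lr}$ the step size, and $\mathcal{L}$ the loss being minimized._____no_output_____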
<code>
device = "cuda" if torch.cuda.is_available() else "cpu"
# use pyTorch for our data
x_torch = torch.tensor(x).to(device=device)
a_torch = torch.tensor(a).to(device=device)
b_torch = torch.tensor(b).to(device=device)
bary_torch = torch.tensor((a + b).copy() / 2).to(device=device).requires_grad_(True)
lr = 1e-6
nb_iter_max = 2000
loss_iter = []
# instant of the interpolation
t = 0.5
for i in range(nb_iter_max):
# Compute the Wasserstein 1D with torch backend
loss = (1 - t) * wasserstein_1d(x_torch, x_torch, a_torch.detach(), bary_torch, p=2) + t * wasserstein_1d(x_torch, x_torch, b_torch, bary_torch, p=2)
# record the corresponding loss value
loss_iter.append(loss.clone().detach().cpu().numpy())
loss.backward()
# performs a step of projected gradient descent
with torch.no_grad():
grad = bary_torch.grad
bary_torch -= bary_torch.grad * lr # step
bary_torch.grad.zero_()
bary_torch.data = proj_simplex(bary_torch) # projection onto the simplex
pl.figure(3, figsize=(8, 4))
pl.plot(x, a, 'b', label='Source distribution')
pl.plot(x, b, 'r', label='Target distribution')
pl.plot(x, bary_torch.clone().detach().cpu().numpy(), c='green', label='W barycenter')
pl.legend()
pl.title('Wasserstein barycenter computed by gradient descent')
pl.show()
pl.figure(4)
pl.plot(range(nb_iter_max), loss_iter, lw=3)
pl.title('Evolution of the loss along iterations', fontsize=16)
pl.show()_____no_output_____
</code>
| {
"repository": "PythonOT/pythonot.github.io",
"path": "master/_downloads/cc2d1e2d3ec6e3b42bceea0b50c4db77/plot_wass1d_torch.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 5,
"size": 6020,
"hexsha": "cb01861e0e9b4d336b5cd2c214762a4b8cd61124",
"max_line_length": 2103,
"avg_line_length": 83.6111111111,
"alphanum_fraction": 0.6478405316
} |
# Notebook from letsgoexploring/econ126
Path: Lecture Notebooks/Econ126_Class_14.ipynb
<code>
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline_____no_output_____
</code>
# Class 14: Prescott's Real Business Cycle Model I
In this notebook, we'll consider a centralized version of the model from pages 11-17 in Edward Prescott's article "Theory Ahead of Business Cycle Measurement" in the Fall 1986 issue of the Federal Reserve Bank of Minneapolis' *Quarterly Review* (link to article: https://www.minneapolisfed.org/research/qr/qr1042.pdf). The model is just like the RBC model that we studied in the previous lecture, except that now we include an endogenous labor supply._____no_output_____## Prescott's RBC Model with Labor
The equilibrium conditions for Prescott's RBC model with labor are:
\begin{align}
\frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\\
\frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \\
Y_t & = A_t K_t^{\alpha}L_t^{1-\alpha}\\
K_{t+1} & = I_t + (1-\delta) K_t\\
Y_t & = C_t + I_t\\
\log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
\end{align}
where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$.
The objective is to use `linearsolve` to simulate impulse responses to a TFP shock using the following parameter values for the simulation:
| $$\rho$$ | $$\sigma$$ | $$\beta$$ | $$\varphi$$ | $$\alpha$$ | $$\delta $$ |
|----------|------------|-------------|-----------|------------|-------------|
| 0.75 | 0.006 | 0.99 | 1.7317 | 0.35 | 0.025 |
The value for $\beta$ implies a steady state (annualized) real interest rate of about 4 percent:
\begin{align}
4 \cdot \left(\beta^{-1} - 1\right) & \approx 0.04040
\end{align}
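A quick numerical check of this calculation:
<code>
# Verify the annualized steady-state real interest rate implied by beta = 0.99
beta = 0.99
print(4*(1/beta - 1))  # approximately 0.0404_____no_output_____
</code>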
$\rho = 0.75$ and $\sigma = 0.006$ are consistent with the statistical properties of the cyclical component of TFP in the US. $\alpha$ is set so that, consistent with the long-run average of the US, the labor share of income is about 65 percent of GDP. The depreciation rate of capital is calibrated to be about 10 percent annually. Finally, $\varphi$ was chosen last to ensure that in the steady state households allocate about 33 percent of their available time to labor._____no_output_____## Model Preparation
Before proceeding, let's recast the model in the form required for `linearsolve`. Write the model with all variables moved to the left-hand side of the equations, dropping the expectations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$:
\begin{align}
0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\
0 & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} - \frac{\varphi}{1-L_t}\\
0 & = A_t K_t^{\alpha}L_t^{1-\alpha} - Y_t\\
0 & = I_t + (1-\delta) K_t - K_{t+1}\\
0 & = C_t + I_t - Y_t\\
0 & = \rho \log A_t - \log A_{t+1}
\end{align}
Remember, capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output, consumption, investment, and labor are called *costate* or *control* variables. Note that, aside from the exogenous TFP process, the model has 5 equations in 5 endogenous variables.
_____no_output_____## Initialization, Approximation, and Solution
The next several cells initialize the model in `linearsolve` and then approximate and solve it._____no_output_____
<code>
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
parameters = pd.Series(dtype=float)
parameters['rho'] = .75
parameters['beta'] = 0.99
parameters['phi'] = 1.7317
parameters['alpha'] = 0.35
parameters['delta'] = 0.025
# Print the model's parameters
print(parameters)rho 0.7500
beta 0.9900
phi 1.7317
alpha 0.3500
delta 0.0250
dtype: float64
# Create a variable called 'sigma' that stores the value of sigma
sigma = 0.006_____no_output_____# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
var_names = ['a','k','y','c','i','l']
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
shock_names = ['e_a','e_k']_____no_output_____# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# Define variable to store MPK. Will make things easier later.
mpk = p.alpha*fwd.a*fwd.k**(p.alpha-1)*fwd.l**(1-p.alpha)
# Define variable to store MPL. Will make things easier later.
mpl = (1-p.alpha)*fwd.a*fwd.k**p.alpha*fwd.l**-p.alpha
# Euler equation
euler_equation = p.beta*(mpk+1-p.delta)/fwd.c - 1/cur.c
# Labor-labor choice
labor_leisure = mpl/cur.c - p.phi/(1-cur.l)
# Production function
production_function = cur.a*cur.k**p.alpha*cur.l**(1-p.alpha) - cur.y
# Capital evolution. PROVIDED
capital_evolution = cur.i + (1 - p.delta)*cur.k - fwd.k
# Market clearing. PROVIDED
market_clearing = cur.c+cur.i - cur.y
# Exogenous tfp. PROVIDED
tfp_process = p.rho*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_equation,
labor_leisure,
production_function,
capital_evolution,
market_clearing,
tfp_process
])_____no_output_____
</code>
Next, initialize the model using `ls.model` which takes the following required arguments:
* `equations`
* `n_states`
* `var_names`
* `shock_names`
* `parameters`_____no_output_____
<code>
# Initialize the model into a variable named 'rbc_model'
rbc_model = ls.model(equations = equilibrium_equations,
n_states=2,
var_names=var_names,
shock_names=shock_names,
parameters=parameters)_____no_output_____# Compute the steady state numerically using .compute_ss() method of rbc_model
guess = [1,4,1,1,1,0.5]
rbc_model.compute_ss(guess)
# Print the computed steady state
print(rbc_model.ss)a 1.000000
k 11.465953
y 1.149904
c 0.863256
i 0.286649
l 0.333330
dtype: float64
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of rbc_model
rbc_model.approximate_and_solve()_____no_output_____
</code>
## Impulse Responses
Compute a 26 period impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5._____no_output_____
<code>
# Compute impulse responses
rbc_model.impulse(T=26,t0=5,shocks=[0.01,0])
# Print the first 10 rows of the computed impulse responses to the TFP shock
print(rbc_model.irs['e_a'].head(10)) e_a a k y c i l
0 0.00 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
1 0.00 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
2 0.00 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
3 0.00 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
4 0.00 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
5 0.01 0.010000 0.000000 0.015475 0.001708 0.056935 0.008423
6 0.00 0.007500 0.001423 0.011858 0.002071 0.041333 0.005938
7 0.00 0.005625 0.002421 0.009133 0.002304 0.029698 0.004093
8 0.00 0.004219 0.003103 0.007077 0.002442 0.021036 0.002727
9 0.00 0.003164 0.003551 0.005524 0.002511 0.014600 0.001719
</code>
Construct a $2\times3$ grid of plots of simulated TFP, output, labor, consumption, investment, and capital. Be sure to multiply simulated values by 100 so that vertical axis units are in "percent deviation from steady state."_____no_output_____
<code>
# Create figure. PROVIDED
fig = plt.figure(figsize=(18,8))
# Create upper-left axis. PROVIDED
ax = fig.add_subplot(2,3,1)
ax.plot(rbc_model.irs['e_a']['a']*100,'b',lw=5,alpha=0.75)
ax.set_title('TFP')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create upper-center axis. PROVIDED
ax = fig.add_subplot(2,3,2)
ax.plot(rbc_model.irs['e_a']['y']*100,'b',lw=5,alpha=0.75)
ax.set_title('Output')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create upper-right axis. PROVIDED
ax = fig.add_subplot(2,3,3)
ax.plot(rbc_model.irs['e_a']['l']*100,'b',lw=5,alpha=0.75)
ax.set_title('Labor')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.5,2])
ax.grid()
# Create lower-left axis. PROVIDED
ax = fig.add_subplot(2,3,4)
ax.plot(rbc_model.irs['e_a']['c']*100,'b',lw=5,alpha=0.75)
ax.set_title('Consumption')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.1,0.4])
ax.grid()
# Create lower-center axis. PROVIDED
ax = fig.add_subplot(2,3,5)
ax.plot(rbc_model.irs['e_a']['i']*100,'b',lw=5,alpha=0.75)
ax.set_title('Investment')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-2,8])
ax.grid()
# Create lower-right axis. PROVIDED
ax = fig.add_subplot(2,3,6)
ax.plot(rbc_model.irs['e_a']['k']*100,'b',lw=5,alpha=0.75)
ax.set_title('Capital')
ax.set_ylabel('% dev from steady state')
ax.set_ylim([-0.2,0.8])
ax.grid()
fig.tight_layout()_____no_output_____
</code>
| {
"repository": "letsgoexploring/econ126",
"path": "Lecture Notebooks/Econ126_Class_14.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 81847,
"hexsha": "cb01c80784770013728348b68fb2f59f898c5ff8",
"max_line_length": 68080,
"avg_line_length": 199.6268292683,
"alphanum_fraction": 0.8922990458
} |
# Notebook from tfburns/deep-learning-specialization
Path: notebooks/Dinosaurus_Island_Character_level_language_model_final_v3b.ipynb
# Character level language model - Dinosaurus Island
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
- How to store text data for processing using an RNN
- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
- How to build a character-level text generation recurrent neural network
- Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. _____no_output_____## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3b".
* You can find your original work saved in the notebook with the previous version name ("v3a")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates 3b
- removed redundant numpy import
* `clip`
- change test code to use variable name 'mvalue' rather than 'maxvalue' and deleted it from namespace to avoid confusion.
* `optimize`
- removed redundant description of clip function to discourage use of using 'maxvalue' which is not an argument to optimize
* `model`
- added 'verbose mode to print X,Y to aid in creating that code.
- wordsmith instructions to prevent confusion
- 2000 examples vs 100, 7 displayed vs 10
- no randomization of order
* `sample`
- removed comments regarding potential different sample outputs to reduce confusion._____no_output_____
<code>
import numpy as np
from utils import *
import random
import pprint_____no_output_____
</code>
## 1 - Problem Statement
### 1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size. _____no_output_____
<code>
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))There are 19909 total characters and 27 unique characters in your data.
</code>
* The characters are a-z (26 characters) plus the "\n" (or newline character).
* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture.
- Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence.
* `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.
* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character.
- This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. _____no_output_____
<code>
chars = sorted(chars)
print(chars)['\n', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char){ 0: '\n',
1: 'a',
2: 'b',
3: 'c',
4: 'd',
5: 'e',
6: 'f',
7: 'g',
8: 'h',
9: 'i',
10: 'j',
11: 'k',
12: 'l',
13: 'm',
14: 'n',
15: 'o',
16: 'p',
17: 'q',
18: 'r',
19: 's',
20: 't',
21: 'u',
22: 'v',
23: 'w',
24: 'x',
25: 'y',
26: 'z'}
</code>
### 1.2 - Overview of the model
Your model will have the following structure:
- Initialize parameters
- Run the optimization loop
- Forward propagation to compute the loss function
- Backward propagation to compute the gradients with respect to the loss function
- Clip the gradients to avoid exploding gradients
- Using the gradients, update your parameters with the gradient descent update rule.
- Return the learned parameters
<img src="images/rnn.png" style="width:450;height:300px;">
<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". </center></caption>
* At each time-step, the RNN tries to predict what is the next character given the previous characters.
* The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.
* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward.
* At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$._____no_output_____## 2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model._____no_output_____### 2.1 - Clipping the gradients in the optimization loop
In this section you will implement the `clip` function that you will call inside of your optimization loop.
#### Exploding gradients
* When gradients are very large, they're called "exploding gradients."
* Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.
Recall that your overall loop structure usually consists of:
* forward pass,
* cost computation,
* backward pass,
* parameter update.
Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding."
#### Gradient clipping
In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed.
* There are different ways to clip gradients.
* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie within the range [-N, N].
* For example, if the N=10
- The range is [-10, 10]
- If any component of the gradient vector is greater than 10, it is set to 10.
- If any component of the gradient vector is less than -10, it is set to -10.
- If any components are between -10 and 10, they keep their original values.
<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. </center></caption>
**Exercise**:
Implement the function below to return the clipped gradients of your dictionary `gradients`.
* Your function takes in a maximum threshold and returns the clipped versions of the gradients.
* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html).
- You will need to use the argument "`out = ...`".
- Using the "`out`" parameter allows you to update a variable "in-place".
- If you don't use "`out`" argument, the clipped variable is stored in the variable "gradient" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`._____no_output_____
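For example, here is a quick demonstration of how `np.clip` with the `out` argument updates an array in place (the array values are made up):
<code>
import numpy as np

g = np.array([-12., 3., 15.])
np.clip(g, -10, 10, out=g)  # clips g in place
print(g)                    # prints [-10.   3.  10.]_____no_output_____
</code>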
<code>
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWaa, dWax, dWya, db, dby]:
gradient = np.clip(gradient,-maxValue,maxValue,out = gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients_____no_output_____# Test with a maxvalue of 10
mValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
</code>
** Expected output:**
```Python
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
```_____no_output_____
<code>
# Test with a maxValue of 5
mValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
del mValue # avoid common issuegradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
</code>
** Expected Output: **
```Python
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
```_____no_output_____### 2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500;height:300px;">
<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. </center></caption>_____no_output_____**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:
- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$.
- This is the default input before we've generated any characters.
We also set $a^{\langle 0 \rangle} = \vec{0}$_____no_output_____- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
hidden state:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
activation:
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
prediction:
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
- Details about $\hat{y}^{\langle t+1 \rangle }$:
- Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1).
- $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character.
- We have provided a `softmax()` function that you can use._____no_output_____#### Additional Hints
- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.
- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and the number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the dimensions need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work).
- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)_____no_output_____#### Using 2D arrays instead of 1D arrays
* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.
* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with a 1D array.
* This becomes a problem when we add two arrays where we expected them to have the same shape.
* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.
* Here is some sample code that shows the difference between using a 1D and 2D array._____no_output_____
<code>
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)matrix1
[[1 1]
[2 2]
[3 3]]
matrix2
[[0]
[0]
[0]]
vector1D
[1 1]
vector2D
[[1]
[1]]
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))Multiply 2D and 1D arrays: result is a 1D array
[2 4 6]
Multiply 2D and 2D arrays: result is a 2D array
[[2]
[4]
[6]]
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector
This is what we want here!
[[2]
[4]
[6]]
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)Adding a (3,) vector to a (3 x 1) vector
broadcasts the 1D array across the second dimension
Not what we want here!
[[2 4 6]
[2 4 6]
[2 4 6]]
</code>
- **Step 3**: Sampling:
- Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. To make the results more interesting, we will use np.random.choice to select a next letter that is *likely*, but not always the same.
- Pick the next character's **index** according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$.
- This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability.
- Use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).
Example of how to use `np.random.choice()`:
```python
np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
idx = np.random.choice(range(len(probs)), p=probs)
```
- This means that you will pick the index (`idx`) according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
- Note that the value passed to `p` should be a 1D vector.
- Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array.
- Also note that while the first argument to np.random.choice in your implementation is simply the ordered list [0, 1, ..., vocab_len - 1], it is *not* appropriate to use char_to_ix.values(): a Python dictionary's .values() are returned in the order in which the keys were added, so the grader may see a different order when it runs your routine than you see in your notebook._____no_output_____##### Additional Hints
- [range](https://docs.python.org/3/library/functions.html#func-range)
- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.
```Python
arr = np.array([[1,2],[3,4]])
print("arr")
print(arr)
print("arr.ravel()")
print(arr.ravel())
```
Output:
```Python
arr
[[1 2]
[3 4]]
arr.ravel()
[1 2 3 4]
```
- Note that `append` is an "in-place" operation. In other words, don't do this:
```Python
fun_hobbies = fun_hobbies.append('learning') ## Doesn't give you what you want
```_____no_output_____- **Step 4**: Update to $x^{\langle t \rangle }$
- The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$.
- You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction.
- You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name. _____no_output_____##### Additional Hints
- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.
- You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)
- Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)_____no_output_____
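Putting Steps 2-4 together, a single pass of the sampling loop could look like the sketch below. The `softmax()` helper is provided by the assignment's utilities; the version here is an assumed minimal implementation consistent with equation (3), and `sample_one_step` is a hypothetical helper, not part of the graded code:
```python
import numpy as np

def softmax(z):
    # Assumed minimal implementation of the provided softmax() helper
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum(axis=0)

def sample_one_step(x, a_prev, parameters):
    """One pass of Steps 2-4: forward prop, sample an index, build the next one-hot x."""
    Waa, Wax, Wya = parameters['Waa'], parameters['Wax'], parameters['Wya']
    b, by = parameters['b'], parameters['by']
    a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)   # equation (1)
    y = softmax(np.dot(Wya, a) + by)                        # equations (2) and (3)
    idx = np.random.choice(range(y.shape[0]), p=y.ravel())  # Step 3: sample an index
    x_next = np.zeros_like(x)                               # Step 4: reset x ...
    x_next[idx] = 1                                         # ... and set the sampled entry to 1
    return idx, x_next, a
```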
<code>
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to the sequence of probability distributions output by the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros(( vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros(( n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh( np.dot(Waa, a_prev) + np.dot(Wax, x) + b)
z = np.dot(Wya, a) + by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
# (see additional hints above)
idx = np.random.choice(range(vocab_size), p = y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros(( vocab_size, 1))
x[idx,:] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices_____no_output_____np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
</code>
** Expected output:**
```Python
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
```
_____no_output_____## 3 - Building the language model
It is time to build the character-level language model for text generation.
### 3.1 - Gradient descent
* In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients).
* You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent.
As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients
- Update the parameters using gradient descent
**Exercise**: Implement the optimization process (one step of stochastic gradient descent).
The following functions are provided:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
Recall that you previously implemented the `clip` function; you will call it between the backward pass and the parameter update.
_____no_output_____#### Parameters
* Note that the weights and biases inside the `parameters` dictionary are updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes made to it inside the function are visible outside of it as well.
* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary)._____no_output_____
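A tiny standalone demonstration of this behavior:
```python
def add_key(d):
    d['new_key'] = 0  # mutates the caller's dictionary; no copy is made

params = {'W': 1}
add_key(params)
print(params)  # {'W': 1, 'new_key': 0} -- the change is visible outside the function
```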
<code>
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]_____no_output_____np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
</code>
** Expected output:**
```Python
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
```_____no_output_____### 3.2 - Training the model _____no_output_____* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example.
* Every 2000 steps of stochastic gradient descent, you will sample several randomly chosen names to see how the algorithm is doing.
**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), create an example (X, Y) by following the steps below:
##### Set the index `idx` into the list of examples
* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".
* For example, if there are n_e examples, and the for-loop increments the index to n_e onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is n_e, n_e + 1, etc.
* Hint: n_e + 1 divided by n_e is one with a remainder of 1, so (n_e + 1) % n_e equals 1 and the index wraps back around.
* `%` is the modulus operator in python.
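A quick standalone check of the wrap-around behavior:
```python
n_e = 3  # pretend there are 3 examples
for j in range(7):
    print(j, '->', j % n_e)  # the index goes 0, 1, 2, 0, 1, 2, 0 -- it cycles back to 0
```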
##### Extract a single example from the list of examples
* `single_example`: use the `idx` index that you set previously to get one word from the list of examples._____no_output_____##### Convert a string into a list of characters: `single_example_chars`
* `single_example_chars`: A string is a list of characters.
* You can use a list comprehension (recommended over for-loops) to generate a list of characters.
```Python
s = 'I love learning'  # avoid calling the variable `str`, which would shadow the built-in type
list_of_chars = [c for c in s]
print(list_of_chars)
```
```
['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']
```_____no_output_____##### Convert list of characters to a list of integers: `single_example_ix`
* Create a list that contains the index numbers associated with each character.
* Use the dictionary `char_to_ix`
* You can combine this with the list comprehension that is used to get a list of characters from a string._____no_output_____##### Create the list of input characters: `X`
* `rnn_forward` uses the **`None`** value as a flag to set the input vector as a zero-vector.
* Prepend the list [**`None`**] in front of the list of input characters.
* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']`_____no_output_____##### Get the integer representation of the newline character `ix_newline`
* `ix_newline`: The newline character signals the end of the dinosaur name.
- get the integer representation of the newline character `'\n'`.
- Use `char_to_ix`_____no_output_____##### Set the list of labels (integer representation of the characters): `Y`
* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`.
- For example, `Y[0]` contains the same value as `X[1]`
* The RNN should predict a newline at the last letter so add ix_newline to the end of the labels.
- Append the integer representation of the newline character to the end of `Y`.
- Note that `append` is an in-place operation.
- It might be easier for you to add two lists together._____no_output_____
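Putting these steps together for a single (hypothetical) name -- `char_to_ix` is assumed to map '\n' to 0 and 'a'..'z' to 1..26, as in this assignment:
```python
char_to_ix = {chr(ord('a') + i): i + 1 for i in range(26)}  # 'a'->1, ..., 'z'->26
char_to_ix['\n'] = 0

example = 'rex'  # a hypothetical dinosaur name
X = [None] + [char_to_ix[ch] for ch in example]  # [None, 18, 5, 24]
Y = X[1:] + [char_to_ix['\n']]                   # [18, 5, 24, 0]: shifted left, newline appended
print(X, Y)
```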
<code>
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27, verbose = False):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
single_example_chars = [c for c in single_example]
single_example_ix = [char_to_ix[c] for c in single_example_chars]
X = [None] + single_example_ix
# Set the labels Y (see instructions above)
ix_newline = char_to_ix["\n"]
Y = X[1:]+[ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate=0.01)
### END CODE HERE ###
# debug statements to aid in correctly forming X, Y
if verbose and j in [0, len(examples) -1, len(examples)]:
print("j = " , j, "idx = ", idx,)
if verbose and j in [0]:
print("single_example =", single_example)
print("single_example_chars", single_example_chars)
print("single_example_ix", single_example_ix)
print(" X = ", X, "\n", "Y = ", Y, "\n")
# Smooth the loss with a running average so the printed values are easier to interpret.
loss = smooth(loss, curr_loss)
# Every 2000 iterations, generate "n" names with sample() to check that the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters_____no_output_____
</code>
Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names. _____no_output_____
<code>
parameters = model(data, ix_to_char, char_to_ix, verbose = True)j = 0 idx = 0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
j = 1535 idx = 1535
j = 1536 idx = 0
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382338
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.255630
Meustolkanolus
Indabestacarospceryradwalosaurus
Justolopinaveraterasauracoptelalenyden
Maca
Yusocles
Daahosaurus
Trodon
Iteration: 18000, Loss: 22.905483
Phytronn
Meicanstolanthus
Mustrisaurus
Pegalosaurus
Yuskercis
Egalosaurus
Tromelosaurus
Iteration: 20000, Loss: 22.873854
Nlyushanerohyisaurus
Loga
Lustrhigosaurus
Nedalosaurus
Yuslangosaurus
Elagosaurus
Trrangosaurus
Iteration: 22000, Loss: 22.710545
Onyxromicoraurospareiosatrus
Liga
Mustoffankeugoptardoros
Ola
Yusodogongterosaurus
Ehaerona
Trododongxernochenhus
Iteration: 24000, Loss: 22.604827
Meustognathiterhucoplithaloptha
Jigaadosaurus
Kurrodon
Mecaistheansaurus
Yuromelosaurus
Eiaeropeeton
Troenathiteritaus
Iteration: 26000, Loss: 22.714486
Nhyxosaurus
Kola
Lvrosaurus
Necalosaurus
Yurolonlus
Ejakosaurus
Troindronykus
Iteration: 28000, Loss: 22.647640
Onyxosaurus
Loceahosaurus
Lustleonlonx
Olabasicachudrakhurgawamosaurus
Ytrojianiisaurus
Eladon
Tromacimathoshargicitan
Iteration: 30000, Loss: 22.598485
Oryuton
Locaaesaurus
Lustoendosaurus
Olaahus
Yusaurus
Ehadopldarshuellus
Troia
Iteration: 32000, Loss: 22.211861
Meutronlapsaurus
Kracallthcaps
Lustrathus
Macairugeanosaurus
Yusidoneraverataus
Eialosaurus
Troimaniathonsaurus
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
</code>
** Expected Output**
```Python
j = 0 idx = 0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
j = 1535 idx = 1535
j = 1536 idx = 0
Iteration: 2000, Loss: 27.884160
...
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```_____no_output_____## Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">_____no_output_____## 4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character should be much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes. _____no_output_____
<code>
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import ioUsing TensorFlow backend.
</code>
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). _____no_output_____Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (fewer than 40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
_____no_output_____
<code>
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])Epoch 1/1
1408/31412 [>.............................] - ETA: 317s - loss: 3.5865# Run this cell to try with different inputs without having to re-train the model
generate_output()_____no_output_____
</code>
The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:
- LSTMs instead of the basic RNN to capture longer-range dependencies
- The model is a deeper, stacked LSTM model (2 layers)
- Using Keras instead of raw Python/NumPy to simplify the code (a sketch of such a stacked model appears after the link below)
If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.
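For reference, a stacked character-level LSTM in Keras broadly takes the following shape (a sketch only; the exact architecture in `shakespeare_utils` may differ, and `Tx`/`vocab_size` below are assumed example dimensions):
```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

Tx, vocab_size = 40, 38  # assumed input length and character-set size
model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(Tx, vocab_size)),  # first LSTM layer
    LSTM(128),                                                       # second, stacked LSTM layer
    Dense(vocab_size, activation='softmax'),                         # next-character distribution
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
```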
Congratulations on finishing this notebook! _____no_output_____**References**:
- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py _____no_output_____
| {
"repository": "tfburns/deep-learning-specialization",
"path": "notebooks/Dinosaurus_Island_Character_level_language_model_final_v3b.ipynb",
"matched_keywords": [
"biology"
],
"stars": null,
"size": 75600,
"hexsha": "cb01dd68ebc12f210136309d77a63c4f35275091",
"max_line_length": 1599,
"avg_line_length": 41.8836565097,
"alphanum_fraction": 0.5716666667
} |
# Notebook from Simonm952/simonm952.github.io
Path: App market.ipynb
## 1. Google Play Store apps and reviews
<p>Mobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention.</p>
<p><img src="https://assets.datacamp.com/production/project_619/img/google_play_store.png" alt="Google Play logo"></p>
<p>Let's take a look at the data, which consists of two files:</p>
<ul>
<li><code>apps.csv</code>: contains all the details of the applications on Google Play. There are 13 features that describe a given app.</li>
<li><code>user_reviews.csv</code>: contains 100 reviews for each app, <a href="https://www.androidpolice.com/2019/01/21/google-play-stores-redesigned-ratings-and-reviews-section-lets-you-easily-filter-by-star-rating/">most helpful first</a>. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.</li>
</ul>_____no_output_____
<code>
# Read in dataset
import pandas as pd
apps_with_duplicates = pd.read_csv('datasets/apps.csv')
# Drop duplicates
apps = apps_with_duplicates.drop_duplicates()
# Print the total number of apps
print('Total number of apps in the dataset = ', apps.shape[0])
# Have a look at a random sample of 5 entries
n = 5
apps.sample(n)
Total number of apps in the dataset = 9659
</code>
## 2. Data cleaning
<p>The four features that we will be working with most frequently henceforth are <code>Installs</code>, <code>Size</code>, <code>Rating</code> and <code>Price</code>. The <code>info()</code> function (from the previous task) told us that the <code>Installs</code> and <code>Price</code> columns are of type <code>object</code> and not <code>int64</code> or <code>float64</code> as we would expect. This is because these columns contain characters other than the digits [0-9]. Ideally, we would want these columns to be numeric as their name suggests. <br>
Hence, we now proceed to data cleaning and prepare our data to be consumed in our analysis later. Specifically, the presence of special characters (<code>, $ +</code>) in the <code>Installs</code> and <code>Price</code> columns makes their conversion to a numerical data type difficult.</p>_____no_output_____
<code>
# List of characters to remove
chars_to_remove = ['+',",","$"]
# List of column names to clean
cols_to_clean = ["Installs","Price"]
# Loop for each column
for col in cols_to_clean:
# Replace each character with an empty string
for char in chars_to_remove:
apps[col] = apps[col].astype(str).str.replace(char, '', regex=False)  # literal (non-regex) replacement
# Convert col to numeric
apps[col] = pd.to_numeric( apps[col]) _____no_output_____
</code>
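As a side note, the same cleaning can be done in a single vectorized pass with a regular expression -- a sketch equivalent to the loop above, reusing the `apps` dataframe and `cols_to_clean` list:
```python
for col in cols_to_clean:
    # '[+,$]' is a regex character class matching '+', ',' or '$'
    apps[col] = pd.to_numeric(apps[col].astype(str).str.replace(r'[+,$]', '', regex=True))
```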
## 3. Exploring app categories
<p>With more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories.</p>
<p>This brings us to the following questions:</p>
<ul>
<li>Which category has the highest share of (active) apps in the market? </li>
<li>Is any specific category dominating the market?</li>
<li>Which categories have the fewest number of apps?</li>
</ul>
<p>We will see that there are <code>33</code> unique app categories present in our dataset. <em>Family</em> and <em>Game</em> apps have the highest market prevalence. Interestingly, <em>Tools</em>, <em>Business</em> and <em>Medical</em> apps are also at the top.</p>_____no_output_____
<code>
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
# Print the total number of unique categories
num_categories = len(apps["Category"].unique())
print('Number of categories = ', num_categories)
# Count the number of apps in each 'Category' and sort them in descending order
num_apps_in_category = apps["Category"].value_counts().sort_values(ascending = False)
data = [go.Bar(
x = num_apps_in_category.index, # index = category name
y = num_apps_in_category.values, # value = count
)]
plotly.offline.iplot(data)_____no_output_____
</code>
## 4. Distribution of app ratings
<p>After having witnessed the market share for each category of apps, let's see how all these apps perform on an average. App ratings (on a scale of 1 to 5) impact the discoverability, conversion of apps as well as the company's overall brand image. Ratings are a key performance indicator of an app.</p>
<p>From our research, we found that the average rating across all app categories is <code>4.17</code>. The histogram is skewed to the left (its tail extends toward the low ratings), indicating that the majority of the apps are highly rated, with only a few exceptions among the low-rated apps.</p>_____no_output_____
<code>
# Average rating of apps
avg_app_rating = apps['Rating'].mean()
print('Average app rating = ', avg_app_rating)
# Distribution of apps according to their ratings
data = [go.Histogram(
x = apps['Rating']
)]
# Vertical dashed line to indicate the average app rating
layout = {'shapes': [{
'type' :'line',
'x0': avg_app_rating,
'y0': 0,
'x1': avg_app_rating,
'y1': 1000,
'line': { 'dash': 'dashdot'}
}]
}
plotly.offline.iplot({'data': data, 'layout': layout})Average app rating = 4.173243045387994
</code>
## 5. Size and price of an app
<p>Let's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates.</p>
<p>How can we effectively come up with strategies to size and price our app?</p>
<ul>
<li>Does the size of an app affect its rating? </li>
<li>Do users really care about system-heavy apps or do they prefer light-weighted apps? </li>
<li>Does the price of an app affect its rating? </li>
<li>Do users always prefer free apps over paid apps?</li>
</ul>
<p>We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.</p>_____no_output_____
<code>
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
import warnings
warnings.filterwarnings("ignore")
# Filter rows where both Rating and Size values are not null
apps_with_size_and_rating_present = apps[(~apps["Rating"].isnull()) & (~apps["Size"].isnull())]
# Subset for categories with at least 250 apps
large_categories = apps_with_size_and_rating_present.groupby("Category").filter(lambda x: len(x) >= 250).reset_index()
# Plot size vs. rating
plt1 = sns.jointplot(x = large_categories["Size"], y = large_categories["Rating"], kind = 'hex')
# Subset apps whose 'Type' is 'Paid'
paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present["Type"] == "Paid"]
# Plot price vs. rating
plt2 = sns.jointplot(x = paid_apps["Price"], y = paid_apps["Rating"])_____no_output_____
</code>
## 6. Relation between app category and app price
<p>So now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.</p>
<p>There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.</p>
<p>Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that <em>Medical and Family</em> apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.</p>_____no_output_____
<code>
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Select a few popular app categories
popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY',
'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
# Examine the price trend by plotting Price vs Category
ax = sns.stripplot(x = popular_app_cats["Price"], y = popular_app_cats["Category"], jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories')
# Apps whose Price is greater than 200
apps_above_200 = popular_app_cats[['Category', 'App', 'Price']][popular_app_cats["Price"] > 200]
apps_above_200_____no_output_____
</code>
## 7. Filter out "junk" apps
<p>It looks like a bunch of the really expensive apps are "junk" apps. That is, apps that don't really have a purpose. Some app developer may create an app called <em>I Am Rich Premium</em> or <em>most expensive app (H)</em> just for a joke or to test their app development skills. Some developers even do this with malicious intent and try to make money by hoping people accidentally click purchase on their app in the store.</p>
<p>Let's filter out these junk apps and re-do our visualization. The distribution of apps under \$20 becomes clearer.</p>_____no_output_____
<code>
# Select apps priced below $100
apps_under_100 =popular_app_cats[popular_app_cats["Price"]<100]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Examine price vs category with the authentic apps
ax = sns.stripplot(x=apps_under_100["Price"], y=apps_under_100["Category"], data=apps_under_100,
jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories after filtering for junk apps')_____no_output_____
</code>
## 8. Popularity of paid apps vs free apps
<p>For apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. Some characteristics of free apps are:</p>
<ul>
<li>Free to download.</li>
<li>Main source of income often comes from advertisements.</li>
<li>Often created by companies that have other products and the app serves as an extension of those products.</li>
<li>Can serve as a tool for customer retention, communication, and customer service.</li>
</ul>
<p>Some characteristics of paid apps are:</p>
<ul>
<li>Users are asked to pay once for the app to download and use it.</li>
<li>The user can't really get a feel for the app before buying it.</li>
</ul>
<p>Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!</p>_____no_output_____
<code>
trace0 = go.Box(
# Data for paid apps
y=apps[apps['Type'] == 'Paid']['Installs'],
name = 'Paid'
)
trace1 = go.Box(
# Data for free apps
y=apps[apps['Type'] == 'Free']['Installs'],
name = 'Free'
)
layout = go.Layout(
title = "Number of downloads of paid apps vs. free apps",
yaxis = dict(
type = 'log',
autorange = True
)
)
# Add trace0 and trace1 to a list for plotting
data = [trace0,trace1]
plotly.offline.iplot({'data': data, 'layout': layout})_____no_output_____
</code>
## 9. Sentiment analysis of user reviews
<p>Mining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.</p>
<p>By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than free apps, thereby syncing with our previous observation.</p>
<p>In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.</p>_____no_output_____
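The sentiment labels in `user_reviews.csv` come precomputed, but a comparable polarity score can be produced with a library such as TextBlob (an assumption for illustration; the dataset's authors may have used a different tool):
```python
from textblob import TextBlob  # assumes TextBlob is installed (pip install textblob)

review = "This app is amazing, my friends love it"
polarity = TextBlob(review).sentiment.polarity  # in [-1, 1]; values > 0 are positive
print(polarity)
```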
<code>
# Load user_reviews.csv
reviews_df = pd.read_csv('datasets/user_reviews.csv')
# Join and merge the two dataframe
merged_df = pd.merge(apps, reviews_df, on = 'App', how = "inner")
# Drop NA values from Sentiment and Translated_Review columns
merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = 'Type', y = 'Sentiment_Polarity', data = merged_df)
ax.set_title('Sentiment Polarity Distribution')_____no_output_____reviews_df_____no_output_____
</code>
| {
"repository": "Simonm952/simonm952.github.io",
"path": "App market.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 613806,
"hexsha": "cb029efc94bcfd0bc7f0c7c0f915bec3c637d7b1",
"max_line_length": 613805,
"avg_line_length": 306903,
"alphanum_fraction": 0.7519183586
} |
# Notebook from hemanth22/pythoncode-tutorials
Path: machine-learning/recommender-system-using-association-rules/recommender_systems_association_rules.ipynb
<code>
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from mlxtend.frequent_patterns import apriori, association_rules
from collections import Counter_____no_output_____# dataset = pd.read_csv("data.csv",encoding= 'unicode_escape')
dataset = pd.read_excel("Online Retail.xlsx")
dataset.head()_____no_output_____dataset.shape_____no_output_____## Verify missing value
dataset.isnull().sum().sort_values(ascending=False)_____no_output_____## Remove missing values
dataset1 = dataset.dropna()
dataset1.describe()_____no_output_____#selecting data where quantity > 0
dataset1= dataset1[dataset1.Quantity > 0]
dataset1.describe()_____no_output_____# Creating a new feature 'Amount' which is the product of Quantity and its Unit Price
dataset1['Amount'] = dataset1['Quantity'] * dataset1['UnitPrice']
# to highlight the Customers with most no. of orders (invoices) with groupby function
orders = dataset1.groupby(by=['CustomerID','Country'], as_index=False)['InvoiceNo'].count()
print('The TOP 5 loyal customers with most number of orders...')
orders.sort_values(by='InvoiceNo', ascending=False).head()_____no_output_____# Creating a subplot of size 15x6
plt.subplots(figsize=(15,6))
# Using the style bmh for better visualization
plt.style.use('bmh')
# X axis will denote the customer ID, Y axis will denote the number of orders
plt.plot(orders.CustomerID, orders.InvoiceNo)
# Labelling the X axis
plt.xlabel('Customers ID')
# Labelling the Y axis
plt.ylabel('Number of Orders')
# Title to the plot
plt.title('Number of Orders by different Customers')
plt.show()_____no_output_____#Using groupby function to highlight the Customers with highest spent amount (invoices)
money = dataset1.groupby(by=['CustomerID','Country'], as_index=False)['Amount'].sum()
print('The TOP 5 profitable customers with highest money spent...')
money.sort_values(by='Amount', ascending=False).head()_____no_output_____# Creating a subplot of size 15*6
plt.subplots(figsize=(15,6))
# X axis will denote the customer ID, Y axis will denote the amount spent
plt.plot(money.CustomerID, money.Amount)
# Using bmh style for better visualization
plt.style.use('bmh')
# Labelling the X-axis
plt.xlabel('Customers ID')
# Labelling the Y-axis
plt.ylabel('Money spent')
# Giving a suitable title to the plot
plt.title('Money Spent by different Customers')
plt.show()_____no_output_____# Convert InvoiceDate from object to datetime
dataset1['InvoiceDate'] = pd.to_datetime(dataset.InvoiceDate, format='%m/%d/%Y %H:%M')
# Creating a new feature called year_month, such that December 2010 will be denoted as 201012
dataset1.insert(loc=2, column='year_month', value=dataset1['InvoiceDate'].map(lambda x: 100*x.year + x.month))
# Creating a new feature for Month
dataset1.insert(loc=3, column='month', value=dataset1.InvoiceDate.dt.month)
# Creating a new feature for Day
# +1 to make Monday=1.....until Sunday=7
dataset1.insert(loc=4, column='day', value=(dataset1.InvoiceDate.dt.dayofweek)+1)
# Creating a new feature for Hour
dataset1.insert(loc=5, column='hour', value=dataset1.InvoiceDate.dt.hour)_____no_output_____# Using bmh style for better visualization
plt.style.use('bmh')
# Using groupby to extract No. of Invoices year-monthwise
ax = dataset1.groupby('InvoiceNo')['year_month'].unique().value_counts().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling the X axis
ax.set_xlabel('Month',fontsize=15)
# Labelling the Y-axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing with X tick labels
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','Jun_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11','Dec_11'), rotation='horizontal', fontsize=13)
plt.show()_____no_output_____# Day = 6 is Saturday.no orders placed
dataset1[dataset1['day']==6]_____no_output_____# Using groupby to count no. of Invoices daywise
ax = dataset1.groupby('InvoiceNo')['day'].unique().value_counts().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Day',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Days',fontsize=15)
# Providing with X tick labels
# Since there are no orders placed on Saturdays, we are excluding Sat from xticklabels
ax.set_xticklabels(('Mon','Tue','Wed','Thur','Fri','Sun'), rotation='horizontal', fontsize=15)
plt.show()_____no_output_____# Using groupby to count the no. of Invoices hourwise
ax = dataset1.groupby('InvoiceNo')['hour'].unique().value_counts().iloc[:-2].sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Hour',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Number of Orders',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Number of orders for different Hours', fontsize=15)
# Providing with X tick lables ( all orders are placed between 6 and 20 hour )
ax.set_xticklabels(range(6,21), rotation='horizontal', fontsize=15)
plt.show()_____no_output_____dataset1.UnitPrice.describe()_____no_output_____# checking the distribution of unit price
plt.subplots(figsize=(12,6))
# Using darkgrid style for better visualization
sns.set_style('darkgrid')
# Applying boxplot visualization on Unit Price
sns.boxplot(dataset1.UnitPrice)
plt.show()_____no_output_____# Creating a new df of free items
freeproducts = dataset1[dataset1['UnitPrice'] == 0]
freeproducts.head()_____no_output_____# Counting how many free items were given out year-month wise
freeproducts.year_month.value_counts().sort_index()_____no_output_____# Counting how many free items were given out year-month wise
ax = freeproducts.year_month.value_counts().sort_index().plot(kind='bar',figsize=(12,6))
# Labelling X-axis
ax.set_xlabel('Month',fontsize=15)
# Labelling Y-axis
ax.set_ylabel('Frequency',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Frequency for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing X tick labels
# Since there are 0 free items in June 2011, we are excluding it
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11'), rotation='horizontal', fontsize=13)
plt.show()_____no_output_____plt.style.use('bmh')
# Using groupby to sum the amount spent year-month wise
ax = dataset1.groupby('year_month')['Amount'].sum().sort_index().plot(kind='bar',figsize=(15,6))
# Labelling X axis
ax.set_xlabel('Month',fontsize=15)
# Labelling Y axis
ax.set_ylabel('Amount',fontsize=15)
# Giving suitable title to the plot
ax.set_title('Revenue Generated for different Months (Dec 2010 - Dec 2011)',fontsize=15)
# Providing with X tick labels
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','Jun_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11','Dec_11'), rotation='horizontal', fontsize=13)
plt.show()_____no_output_____# Creating a new pivot table which sums the Quantity ordered for each item
most_sold= dataset1.pivot_table(index=['StockCode','Description'], values='Quantity', aggfunc='sum').sort_values(by='Quantity', ascending=False)
most_sold.reset_index(inplace=True)
sns.set_style('white')
# Creating a bar plot of Description ( or the item ) on the Y axis and the sum of Quantity on the X axis
# We are plotting only the 10 most ordered items
sns.barplot(y='Description', x='Quantity', data=most_sold.head(10))
# Giving suitable title to the plot
plt.title('Top 10 Items based on No. of Sales', fontsize=14)
plt.ylabel('Item')_____no_output_____# choosing WHITE HANGING HEART T-LIGHT HOLDER as a sample
d_white = dataset1[dataset1['Description']=='WHITE HANGING HEART T-LIGHT HOLDER']_____no_output_____# WHITE HANGING HEART T-LIGHT HOLDER has been ordered 2028 times
d_white.shape_____no_output_____# WHITE HANGING HEART T-LIGHT HOLDER has been ordered by 856 customers
len(d_white.CustomerID.unique())_____no_output_____# Creating a pivot table that displays the sum of unique Customers who bought particular item
most_customers = dataset1.pivot_table(index=['StockCode','Description'], values='CustomerID', aggfunc=lambda x: len(x.unique())).sort_values(by='CustomerID', ascending=False)
most_customers
# Since the count for WHITE HANGING HEART T-LIGHT HOLDER matches above length 856, the pivot table looks correct for all items_____no_output_____most_customers.reset_index(inplace=True)
sns.set_style('white')
# Creating a bar plot of Description ( or the item ) on the Y axis and the sum of unique Customers on the X axis
# We are plotting only the 10 most bought items
sns.barplot(y='Description', x='CustomerID', data=most_customers.head(10))
# Giving suitable title to the plot
plt.title('Top 10 Items bought by Most no. of Customers', fontsize=14)
plt.ylabel('Item')_____no_output_____# Storing all the invoice numbers into a list y
y = dataset1['InvoiceNo']
y = y.to_list()
# Using set function to find unique invoice numbers only and storing them in invoices list
invoices = list(set(y))
# Creating empty list firstchoices
firstchoices = []
# looping into list of unique invoice numbers
for i in invoices:
# the first item (index = 0) of every invoice is the first purchase
# extracting the item name for the first purchase
firstpurchase = dataset1[dataset1['InvoiceNo']==i]['items'].reset_index(drop=True)[0]
# Appending the first purchase name into first choices list
firstchoices.append(firstpurchase)
firstchoices[:5]_____no_output_____# Using counter to count repeating first choices
count = Counter(firstchoices)
# Storing the counter into a dataframe
data_first_choices = pd.DataFrame.from_dict(count, orient='index').reset_index()
# Rename columns as item and count
data_first_choices.rename(columns={'index':'item', 0:'count'},inplace=True)
# Sorting the data based on count
data_first_choices.sort_values(by='count',ascending=False)_____no_output_____plt.subplots(figsize=(20,10))
sns.set_style('white')
# Creating a bar plot that displays Item name on the Y axis and Count on the X axis
sns.barplot(y='item', x='count', data=data_first_choices.sort_values(by='count',ascending=False).head(10))
# Giving suitable title to the plot
plt.title('Top 10 First Choices', fontsize=14)
plt.ylabel('Item')_____no_output_____basket = (dataset1.groupby(['InvoiceNo', 'Description'])['Quantity'].sum().unstack().reset_index().fillna(0).set_index('InvoiceNo'))
basket.head(10)_____no_output_____def encode_u(x):
if x < 1:
return 0
if x >= 1:
return 1
basket = basket.applymap(encode_u)
# everything is encoded into 0 and 1
basket.head(10)_____no_output_____# trying out on a sample item
wooden_star = basket.loc[basket['WOODEN STAR CHRISTMAS SCANDINAVIAN']==1]
# Using apriori algorithm, creating association rules for the sample item
# Applying apriori algorithm for wooden_star
frequentitemsets = apriori(wooden_star, min_support=0.15, use_colnames=True)
# Storing the association rules into rules
wooden_star_rules = association_rules(frequentitemsets, metric="lift", min_threshold=1)
# Sorting the rules on lift and support
wooden_star_rules.sort_values(['lift','support'],ascending=False).reset_index(drop=True)_____no_output_____# Returns the items that a customer is likely to buy together with the item passed into the function
def frequently_bought_t(item):
# df of item passed
item_d = basket.loc[basket[item]==1]
# Applying apriori algorithm on item df
frequentitemsets = apriori(item_d, min_support=0.15, use_colnames=True)
# Storing association rules
rules = association_rules(frequentitemsets, metric="lift", min_threshold=1)
# Sorting on lift and support
    rules = rules.sort_values(['lift','support'],ascending=False).reset_index(drop=True)
print('Items frequently bought together with {0}'.format(item))
# Returning top 6 items with highest lift and support
return rules['consequents'].unique()[:6]_____no_output_____frequently_bought_t('WOODEN STAR CHRISTMAS SCANDINAVIAN')_____no_output_____frequently_bought_t('JAM MAKING SET WITH JARS')_____no_output_____
</code>
| {
"repository": "hemanth22/pythoncode-tutorials",
"path": "machine-learning/recommender-system-using-association-rules/recommender_systems_association_rules.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 18517,
"hexsha": "cb03e1eda1bef5162865bb5f8725b23e4192bd73",
"max_line_length": 184,
"avg_line_length": 32.8315602837,
"alphanum_fraction": 0.6080358589
} |
# Notebook from charanhu/Amazon-Fine-Food-Reviews-Analysis.
Path: Amazon_Fine_Food_Reviews_Analysis.ipynb
<a href="https://colab.research.google.com/github/charanhu/Amazon-Fine-Food-Reviews-Analysis./blob/main/Amazon_Fine_Food_Reviews_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____
<code>
!wget --header="Host: storage.googleapis.com" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" --header="Accept-Language: en-US,en;q=0.9" --header="Referer: https://www.kaggle.com/" "https://storage.googleapis.com/kaggle-data-sets/18/2157/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20210617%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210617T121304Z&X-Goog-Expires=259199&X-Goog-SignedHeaders=host&X-Goog-Signature=9a911766595be1862a3092d3324b51b0eb4e7c743ee7ace0cc6b48e3a0ab779e2e96add73b40062ee946e3a7b891cb652614cbe80f81d51dd11ef64c34e8f66d20ee312b2a391db6f0a171f6c094a42f1d6a97bb8ab50db5b630deed8a54cb6f111abe3e2ff557fc86028b38e8661c472ddfe51379540258b0509072c9278614c43d89f04652fa6c29459b57731f85d1fbb2c723b7f26beb14dc8b56220d68215fae03beb865641df4147c536bdb8e44704fc32f152a0ef51b7de8f138289474bd83413a04e0f048af50d9c31fa2a0edff2a6151ce7cfdb6dfa139130f27c39fdfa1787aa973c6ec01a43b824eb42103e12aa3e0bfc8044a347bda9640692ea7" -c -O 'archive.zip'--2021-08-16 09:11:38-- https://storage.googleapis.com/kaggle-data-sets/18/2157/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20210617%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210617T121304Z&X-Goog-Expires=259199&X-Goog-SignedHeaders=host&X-Goog-Signature=9a911766595be1862a3092d3324b51b0eb4e7c743ee7ace0cc6b48e3a0ab779e2e96add73b40062ee946e3a7b891cb652614cbe80f81d51dd11ef64c34e8f66d20ee312b2a391db6f0a171f6c094a42f1d6a97bb8ab50db5b630deed8a54cb6f111abe3e2ff557fc86028b38e8661c472ddfe51379540258b0509072c9278614c43d89f04652fa6c29459b57731f85d1fbb2c723b7f26beb14dc8b56220d68215fae03beb865641df4147c536bdb8e44704fc32f152a0ef51b7de8f138289474bd83413a04e0f048af50d9c31fa2a0edff2a6151ce7cfdb6dfa139130f27c39fdfa1787aa973c6ec01a43b824eb42103e12aa3e0bfc8044a347bda9640692ea7
Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.97.128, 108.177.125.128, 142.250.157.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.97.128|:443... connected.
HTTP request sent, awaiting response... 400 Bad Request
2021-08-16 09:11:39 ERROR 400: Bad Request.
from google.colab import drive
drive.mount('/content/drive')Mounted at /content/drive
from google.colab import files
files.upload()_____no_output_____!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json_____no_output_____!kaggle datasets download -d snap/amazon-fine-food-reviewsDownloading amazon-fine-food-reviews.zip to /content
98% 237M/242M [00:10<00:00, 15.5MB/s]
100% 242M/242M [00:10<00:00, 23.1MB/s]
!unzip amazon-fine-food-reviewsArchive: amazon-fine-food-reviews.zip
inflating: Reviews.csv
inflating: database.sqlite
inflating: hashes.txt
</code>
# Amazon Fine Food Reviews Analysis
Data Source: https://www.kaggle.com/snap/amazon-fine-food-reviews <br>
EDA: https://nycdatascience.com/blog/student-works/amazon-fine-foods-visualization/
The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.<br>
Number of reviews: 568,454<br>
Number of users: 256,059<br>
Number of products: 74,258<br>
Timespan: Oct 1999 - Oct 2012<br>
Number of Attributes/Columns in data: 10
Attribute Information:
1. Id
2. ProductId - unique identifier for the product
3. UserId - unique identifier for the user
4. ProfileName
5. HelpfulnessNumerator - number of users who found the review helpful
6. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
7. Score - rating between 1 and 5
8. Time - timestamp for the review
9. Summary - brief summary of the review
10. Text - text of the review
#### Objective:
Given a review, determine whether the review is positive (Rating of 4 or 5) or negative (rating of 1 or 2).
<br>
[Q] How to determine if a review is positive or negative?<br>
<br>
[Ans] We could use the Score/Rating. A rating of 4 or 5 could be considered a positive review. A review of 1 or 2 could be considered negative. A review of 3 is neutral and ignored. This is an approximate and proxy way of determining the polarity (positivity/negativity) of a review.
_____no_output_____## Loading the data
The dataset is available in two forms
1. .csv file
2. SQLite Database
In order to load the data, we have used the SQLite database, as it is easier to query and visualise the data efficiently.
<br>
Here, as we only want to get the global sentiment of the recommendations (positive or negative), we will purposefully ignore all Scores equal to 3. If the score is above 3, then the recommendation will be set to "positive". Otherwise, it will be set to "negative"._____no_output_____
<code>
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle
from tqdm import tqdm
import os_____no_output_____
</code>
# [1]. Reading Data_____no_output_____
<code>
# using the SQLite Table to read data.
con = sqlite3.connect('/content/database.sqlite')
#filtering only positive and negative reviews i.e.
# not taking into consideration those reviews with Score=3
# SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000, will give top 500000 data points
# you can change the number to any other number based on your computing power
# filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000""", con)
# for tsne assignment you can take 5k data points
filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000""", con)
# Give reviews with Score>3 a positive rating, and reviews with a score<3 a negative rating.
def partition(x):
if x < 3:
return 0
return 1
#changing reviews with score less than 3 to be negative (0) and the rest to be positive (1)
actualScore = filtered_data['Score']
positiveNegative = actualScore.map(partition)
filtered_data['Score'] = positiveNegative
print("Number of data points in our data", filtered_data.shape)
# filtered_data.head()
print()
filtered_data.tail()Number of data points in our data (5000, 10)
display = pd.read_sql_query("""
SELECT UserId, ProductId, ProfileName, Time, Score, Text, COUNT(*)
FROM Reviews
GROUP BY UserId
HAVING COUNT(*)>1
""", con)_____no_output_____print(display.shape)
display.head()(80668, 7)
display[display['UserId']=='AZY10LLTJ71NX']_____no_output_____display['COUNT(*)'].sum()_____no_output_____
</code>
# Exploratory Data Analysis
## [2] Data Cleaning: Deduplication
It is observed (as shown in the table below) that the reviews data had many duplicate entries. Hence it was necessary to remove duplicates in order to get unbiased results for the analysis of the data. Following is an example:_____no_output_____
<code>
display= pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND UserId="AR5J8UI46CURR"
ORDER BY ProductID
""", con)
display.head()_____no_output_____
</code>
As can be seen above, the same user has multiple reviews with the same values for HelpfulnessNumerator, HelpfulnessDenominator, Score, Time, Summary and Text, and on doing analysis it was found that <br>
<br>
ProductId=B000HDOPZG was Loacker Quadratini Vanilla Wafer Cookies, 8.82-Ounce Packages (Pack of 8)<br>
<br>
ProductId=B000HDL1RQ was Loacker Quadratini Lemon Wafer Cookies, 8.82-Ounce Packages (Pack of 8) and so on<br>
It was inferred after analysis that reviews with the same parameters other than ProductId belonged to the same product, just with a different flavour or quantity. Hence, in order to reduce redundancy, it was decided to eliminate the rows having the same parameters.<br>
The method used was to first sort the data according to ProductId and then keep only the first of the similar product reviews and delete the others; e.g. in the above, just the review for ProductId=B000HDL1RQ remains. This method ensures that there is only one representative for each product, whereas deduplication without sorting could leave different representatives for the same product._____no_output_____
<code>
#Sorting data according to ProductId in ascending order
sorted_data=filtered_data.sort_values('ProductId', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')_____no_output_____#Deduplication of entries
final=sorted_data.drop_duplicates(subset={"UserId","ProfileName","Time","Text"}, keep='first', inplace=False)
final.shape_____no_output_____#Checking to see how much % of data still remains
(final['Id'].size*1.0)/(filtered_data['Id'].size*1.0)*100_____no_output_____
</code>
<b>Observation:-</b> It was also seen that in the two rows given below the value of HelpfulnessNumerator is greater than HelpfulnessDenominator, which is not practically possible; hence these two rows are also removed from the calculations_____no_output_____
<code>
display= pd.read_sql_query("""
SELECT *
FROM Reviews
WHERE Score != 3 AND Id=44737 OR Id=64422
ORDER BY ProductID
""", con)
display.head()_____no_output_____final=final[final.HelpfulnessNumerator<=final.HelpfulnessDenominator]_____no_output_____#Before starting the next phase of preprocessing lets see the number of entries left
print(final.shape)
#How many positive and negative reviews are present in our dataset?
final['Score'].value_counts()(4986, 10)
</code>
# [3]. Text Preprocessing.
Now that we have finished deduplication, our data requires some preprocessing before we go on further with analysis and making the prediction model.
Hence in the Preprocessing phase we do the following in the order below:-
1. Begin by removing the html tags
2. Remove any punctuation or a limited set of special characters like , or . or # etc.
3. Check if the word is made up of English letters and is not alpha-numeric
4. Check to see if the length of the word is greater than 2 (meaningful two-letter adjectives are practically non-existent)
5. Convert the word to lowercase
6. Remove Stopwords
7. Finally Snowball Stemming the word (it was observed to be better than Porter Stemming); the combined cell below omits this step, so a stemming sketch follows it<br>
After which we collect the words used to describe positive and negative reviews_____no_output_____
<code>
# printing some random reviews
sent_0 = final['Text'].values[0]
print(sent_0)
print("="*50)
sent_1000 = final['Text'].values[1000]
print(sent_1000)
print("="*50)
sent_1500 = final['Text'].values[1500]
print(sent_1500)
print("="*50)
sent_4900 = final['Text'].values[4900]
print(sent_4900)
print("="*50)Why is this $[...] when the same product is available for $[...] here?<br />http://www.amazon.com/VICTOR-FLY-MAGNET-BAIT-REFILL/dp/B00004RBDY<br /><br />The Victor M380 and M502 traps are unreal, of course -- total fly genocide. Pretty stinky, but only right nearby.
==================================================
I recently tried this flavor/brand and was surprised at how delicious these chips are. The best thing was that there were a lot of "brown" chips in the bsg (my favorite), so I bought some more through amazon and shared with family and friends. I am a little disappointed that there are not, so far, very many brown chips in these bags, but the flavor is still very good. I like them better than the yogurt and green onion flavor because they do not seem to be as salty, and the onion flavor is better. If you haven't eaten Kettle chips before, I recommend that you try a bag before buying bulk. They are thicker and crunchier than Lays but just as fresh out of the bag.
==================================================
Wow. So far, two two-star reviews. One obviously had no idea what they were ordering; the other wants crispy cookies. Hey, I'm sorry; but these reviews do nobody any good beyond reminding us to look before ordering.<br /><br />These are chocolate-oatmeal cookies. If you don't like that combination, don't order this type of cookie. I find the combo quite nice, really. The oatmeal sort of "calms" the rich chocolate flavor and gives the cookie sort of a coconut-type consistency. Now let's also remember that tastes differ; so, I've given my opinion.<br /><br />Then, these are soft, chewy cookies -- as advertised. They are not "crispy" cookies, or the blurb would say "crispy," rather than "chewy." I happen to like raw cookie dough; however, I don't see where these taste like raw cookie dough. Both are soft, however, so is this the confusion? And, yes, they stick together. Soft cookies tend to do that. They aren't individually wrapped, which would add to the cost. Oh yeah, chocolate chip cookies tend to be somewhat sweet.<br /><br />So, if you want something hard and crisp, I suggest Nabiso's Ginger Snaps. If you want a cookie that's soft, chewy and tastes like a combination of chocolate and oatmeal, give these a try. I'm here to place my second order.
==================================================
love to order my coffee on amazon. easy and shows up quickly.<br />This k cup is great coffee. dcaf is very good as well
==================================================
# remove urls from text python: https://stackoverflow.com/a/40823105/4084039
sent_0 = re.sub(r"http\S+", "", sent_0)
sent_1000 = re.sub(r"http\S+", "", sent_1000)
sent_150 = re.sub(r"http\S+", "", sent_1500)
sent_4900 = re.sub(r"http\S+", "", sent_4900)
print(sent_0)Why is this $[...] when the same product is available for $[...] here?<br /> /><br />The Victor M380 and M502 traps are unreal, of course -- total fly genocide. Pretty stinky, but only right nearby.
# https://stackoverflow.com/questions/16206380/python-beautifulsoup-how-to-remove-all-tags-from-an-element
from bs4 import BeautifulSoup
soup = BeautifulSoup(sent_0, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_1000, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_1500, 'lxml')
text = soup.get_text()
print(text)
print("="*50)
soup = BeautifulSoup(sent_4900, 'lxml')
text = soup.get_text()
print(text)Why is this $[...] when the same product is available for $[...] here? />The Victor M380 and M502 traps are unreal, of course -- total fly genocide. Pretty stinky, but only right nearby.
==================================================
I recently tried this flavor/brand and was surprised at how delicious these chips are. The best thing was that there were a lot of "brown" chips in the bsg (my favorite), so I bought some more through amazon and shared with family and friends. I am a little disappointed that there are not, so far, very many brown chips in these bags, but the flavor is still very good. I like them better than the yogurt and green onion flavor because they do not seem to be as salty, and the onion flavor is better. If you haven't eaten Kettle chips before, I recommend that you try a bag before buying bulk. They are thicker and crunchier than Lays but just as fresh out of the bag.
==================================================
Wow. So far, two two-star reviews. One obviously had no idea what they were ordering; the other wants crispy cookies. Hey, I'm sorry; but these reviews do nobody any good beyond reminding us to look before ordering.These are chocolate-oatmeal cookies. If you don't like that combination, don't order this type of cookie. I find the combo quite nice, really. The oatmeal sort of "calms" the rich chocolate flavor and gives the cookie sort of a coconut-type consistency. Now let's also remember that tastes differ; so, I've given my opinion.Then, these are soft, chewy cookies -- as advertised. They are not "crispy" cookies, or the blurb would say "crispy," rather than "chewy." I happen to like raw cookie dough; however, I don't see where these taste like raw cookie dough. Both are soft, however, so is this the confusion? And, yes, they stick together. Soft cookies tend to do that. They aren't individually wrapped, which would add to the cost. Oh yeah, chocolate chip cookies tend to be somewhat sweet.So, if you want something hard and crisp, I suggest Nabiso's Ginger Snaps. If you want a cookie that's soft, chewy and tastes like a combination of chocolate and oatmeal, give these a try. I'm here to place my second order.
==================================================
love to order my coffee on amazon. easy and shows up quickly.This k cup is great coffee. dcaf is very good as well
# https://stackoverflow.com/a/47091490/4084039
import re
def decontracted(phrase):
# specific
phrase = re.sub(r"won't", "will not", phrase)
phrase = re.sub(r"can\'t", "can not", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
return phrase_____no_output_____sent_1500 = decontracted(sent_1500)
print(sent_1500)
print("="*50)Wow. So far, two two-star reviews. One obviously had no idea what they were ordering; the other wants crispy cookies. Hey, I am sorry; but these reviews do nobody any good beyond reminding us to look before ordering.<br /><br />These are chocolate-oatmeal cookies. If you do not like that combination, do not order this type of cookie. I find the combo quite nice, really. The oatmeal sort of "calms" the rich chocolate flavor and gives the cookie sort of a coconut-type consistency. Now let is also remember that tastes differ; so, I have given my opinion.<br /><br />Then, these are soft, chewy cookies -- as advertised. They are not "crispy" cookies, or the blurb would say "crispy," rather than "chewy." I happen to like raw cookie dough; however, I do not see where these taste like raw cookie dough. Both are soft, however, so is this the confusion? And, yes, they stick together. Soft cookies tend to do that. They are not individually wrapped, which would add to the cost. Oh yeah, chocolate chip cookies tend to be somewhat sweet.<br /><br />So, if you want something hard and crisp, I suggest Nabiso is Ginger Snaps. If you want a cookie that is soft, chewy and tastes like a combination of chocolate and oatmeal, give these a try. I am here to place my second order.
==================================================
#remove words with numbers python: https://stackoverflow.com/a/18082370/4084039
sent_0 = re.sub("\S*\d\S*", "", sent_0).strip()
print(sent_0)Why is this $[...] when the same product is available for $[...] here?<br /> /><br />The Victor and traps are unreal, of course -- total fly genocide. Pretty stinky, but only right nearby.
#remove special characters: https://stackoverflow.com/a/5843547/4084039
sent_1500 = re.sub('[^A-Za-z0-9]+', ' ', sent_1500)
print(sent_1500)Wow So far two two star reviews One obviously had no idea what they were ordering the other wants crispy cookies Hey I am sorry but these reviews do nobody any good beyond reminding us to look before ordering br br These are chocolate oatmeal cookies If you do not like that combination do not order this type of cookie I find the combo quite nice really The oatmeal sort of calms the rich chocolate flavor and gives the cookie sort of a coconut type consistency Now let is also remember that tastes differ so I have given my opinion br br Then these are soft chewy cookies as advertised They are not crispy cookies or the blurb would say crispy rather than chewy I happen to like raw cookie dough however I do not see where these taste like raw cookie dough Both are soft however so is this the confusion And yes they stick together Soft cookies tend to do that They are not individually wrapped which would add to the cost Oh yeah chocolate chip cookies tend to be somewhat sweet br br So if you want something hard and crisp I suggest Nabiso is Ginger Snaps If you want a cookie that is soft chewy and tastes like a combination of chocolate and oatmeal give these a try I am here to place my second order
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
# <br /><br /> ==> after the above steps, we are getting "br br"
# we are including them into stop words list
# instead of <br /> if we had <br/>, these tags would have been removed in the 1st step
stopwords= set(['br', 'the', 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"])_____no_output_____# Combining all the above stundents
from tqdm import tqdm
preprocessed_reviews = []
# tqdm is for printing the status bar
for sentance in tqdm(final['Text'].values):
sentance = re.sub(r"http\S+", "", sentance)
sentance = BeautifulSoup(sentance, 'lxml').get_text()
sentance = decontracted(sentance)
sentance = re.sub("\S*\d\S*", "", sentance).strip()
sentance = re.sub('[^A-Za-z]+', ' ', sentance)
# https://gist.github.com/sebleier/554280
sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
preprocessed_reviews.append(sentance.strip())100%|██████████| 4986/4986 [00:02<00:00, 2359.82it/s]
preprocessed_reviews[1500]_____no_output_____
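# Hedged sketch of step 7 from the preprocessing list above: the combined loop
# does not apply Snowball stemming, so here is one way to add it (assuming
# NLTK's SnowballStemmer; any stemmer could be substituted).
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
# stem every word of every cleaned review
stemmed_reviews = [' '.join(stemmer.stem(word) for word in review.split())
                   for review in preprocessed_reviews]
stemmed_reviews[1500]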
</code>
<h2><font color='red'>[3.2] Preprocess Summary</font></h2>_____no_output_____
<code>
## Similarly you can do preprocessing for the review Summary also._____no_output_____
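# Hedged sketch: the same cleaning pipeline as for the review text, applied to
# the 'Summary' column (per the attribute list at the top of this notebook).
# Reuses decontracted() and the stopwords set defined above.
preprocessed_summaries = []
for summary in tqdm(final['Summary'].values):
    summary = re.sub(r"http\S+", "", str(summary))
    summary = BeautifulSoup(summary, 'lxml').get_text()
    summary = decontracted(summary)
    summary = re.sub("\S*\d\S*", "", summary).strip()
    summary = re.sub('[^A-Za-z]+', ' ', summary)
    summary = ' '.join(e.lower() for e in summary.split() if e.lower() not in stopwords)
    preprocessed_summaries.append(summary.strip())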
</code>
# [4] Featurization_____no_output_____## [4.1] BAG OF WORDS_____no_output_____
<code>
#BoW
count_vect = CountVectorizer() #in scikit-learn
count_vect.fit(preprocessed_reviews)
print("some feature names ", count_vect.get_feature_names()[:10])
print('='*50)
final_counts = count_vect.transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_counts))
print("the shape of out text BOW vectorizer ",final_counts.get_shape())
print("the number of unique words ", final_counts.get_shape()[1])some feature names ['aa', 'aahhhs', 'aback', 'abandon', 'abates', 'abbott', 'abby', 'abdominal', 'abiding', 'ability']
==================================================
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text BOW vectorizer (4986, 12997)
the number of unique words 12997
</code>
## [4.2] Bi-Grams and n-Grams._____no_output_____
<code>
#bi-gram, tri-gram and n-gram
#removing stop words like "not" should be avoided before building n-grams
# count_vect = CountVectorizer(ngram_range=(1,2))
# please do read the CountVectorizer documentation http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
# you can choose these numbers min_df=10, max_features=5000, as per your choice
count_vect = CountVectorizer(ngram_range=(1,2), min_df=10, max_features=5000)
final_bigram_counts = count_vect.fit_transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_bigram_counts))
print("the shape of out text BOW vectorizer ",final_bigram_counts.get_shape())
print("the number of unique words including both unigrams and bigrams ", final_bigram_counts.get_shape()[1])the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text BOW vectorizer (4986, 3144)
the number of unique words including both unigrams and bigrams 3144
</code>
## [4.3] TF-IDF_____no_output_____
<code>
tf_idf_vect = TfidfVectorizer(ngram_range=(1,2), min_df=10)
tf_idf_vect.fit(preprocessed_reviews)
print("some sample features(unique words in the corpus)",tf_idf_vect.get_feature_names()[0:10])
print('='*50)
final_tf_idf = tf_idf_vect.transform(preprocessed_reviews)
print("the type of count vectorizer ",type(final_tf_idf))
print("the shape of out text TFIDF vectorizer ",final_tf_idf.get_shape())
print("the number of unique words including both unigrams and bigrams ", final_tf_idf.get_shape()[1])some sample features(unique words in the corpus) ['ability', 'able', 'able find', 'able get', 'absolute', 'absolutely', 'absolutely delicious', 'absolutely love', 'absolutely no', 'according']
==================================================
the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>
the shape of out text TFIDF vectorizer (4986, 3144)
the number of unique words including both unigrams and bigrams 3144
</code>
## [4.4] Word2Vec_____no_output_____
<code>
# Train your own Word2Vec model using your own text corpus
list_of_sentance = [sentance.split() for sentance in preprocessed_reviews]_____no_output_____# Using Google News Word2Vectors
# in this project we are using a pretrained model by google
# its 3.3G file, once you load this into your memory
# it occupies ~9Gb, so please do this step only if you have >12G of ram
# we will provide a pickle file which contains a dict,
# and it contains all our corpus words as keys and model[word] as values
# To use this code-snippet, download "GoogleNews-vectors-negative300.bin"
# from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit
# it's 1.9GB in size.
# http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/#.W17SRFAzZPY
# you can comment this whole cell
# or change these variables according to your need
is_your_ram_gt_16g=False
want_to_use_google_w2v = False
want_to_train_w2v = True
if want_to_train_w2v:
    # min_count = 5 considers only words that occurred at least 5 times
w2v_model=Word2Vec(list_of_sentance,min_count=5,size=50, workers=4)
print(w2v_model.wv.most_similar('great'))
print('='*50)
print(w2v_model.wv.most_similar('worst'))
elif want_to_use_google_w2v and is_your_ram_gt_16g:
if os.path.isfile('GoogleNews-vectors-negative300.bin'):
w2v_model=KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(w2v_model.wv.most_similar('great'))
print(w2v_model.wv.most_similar('worst'))
else:
print("you don't have gogole's word2vec file, keep want_to_train_w2v = True, to train your own w2v ")[('especially', 0.9966286420822144), ('wonderful', 0.9962908625602722), ('alternative', 0.9960046410560608), ('snack', 0.9958292245864868), ('fact', 0.9957460761070251), ('baked', 0.9957442283630371), ('excellent', 0.9957252144813538), ('describe', 0.9956888556480408), ('original', 0.9956517815589905), ('kids', 0.9956348538398743)]
==================================================
[('easily', 0.9994719624519348), ('close', 0.9994195699691772), ('must', 0.9993972778320312), ('wow', 0.9993919134140015), ('perhaps', 0.9993722438812256), ('beef', 0.9993717670440674), ('peanuts', 0.999367356300354), ('experience', 0.9993497133255005), ('cherry', 0.9993361830711365), ('tomatoes', 0.9993337988853455)]
w2v_words = list(w2v_model.wv.vocab)
print("number of words that occured minimum 5 times ",len(w2v_words))
print("sample words ", w2v_words[0:50])number of words that occured minimum 5 times 3817
sample words ['product', 'available', 'course', 'total', 'pretty', 'stinky', 'right', 'nearby', 'used', 'ca', 'not', 'beat', 'great', 'received', 'shipment', 'could', 'hardly', 'wait', 'try', 'love', 'call', 'instead', 'removed', 'easily', 'daughter', 'designed', 'printed', 'use', 'car', 'windows', 'beautifully', 'shop', 'program', 'going', 'lot', 'fun', 'everywhere', 'like', 'tv', 'computer', 'really', 'good', 'idea', 'final', 'outstanding', 'window', 'everybody', 'asks', 'bought', 'made']
</code>
## [4.4.1] Converting text into vectors using wAvg W2V, TFIDF-W2V_____no_output_____#### [4.4.1.1] Avg W2v_____no_output_____
<code>
# average Word2Vec
# compute average word2vec for each review.
sent_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sent in tqdm(list_of_sentance): # for each review/sentence
    sent_vec = np.zeros(50) # start with a zero vector of length 50; you might need to change this to 300 if you use Google's w2v
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words:
vec = w2v_model.wv[word]
sent_vec += vec
cnt_words += 1
if cnt_words != 0:
sent_vec /= cnt_words
sent_vectors.append(sent_vec)
print(len(sent_vectors))
print(len(sent_vectors[0]))100%|██████████| 4986/4986 [00:06<00:00, 787.00it/s]
</code>
#### [4.4.1.2] TFIDF weighted W2v_____no_output_____
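For each review, the tf-idf weighted Word2Vec vector is the tf-idf-weighted average of its word vectors:

$v_{review} = \dfrac{\sum_{w \in review} \text{tfidf}(w) \cdot v_w}{\sum_{w \in review} \text{tfidf}(w)}$

where, as in the cell below, $\text{tfidf}(w)$ is approximated as the idf of $w$ times its term frequency within the review._____no_output_____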
<code>
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
model = TfidfVectorizer()
model.fit(preprocessed_reviews)
# we are creating a dictionary with each word as a key and its idf as the value
dictionary = dict(zip(model.get_feature_names(), list(model.idf_)))_____no_output_____# TF-IDF weighted Word2Vec
tfidf_feat = model.get_feature_names() # tfidf words/col-names
# final_tf_idf is the sparse matrix with row= sentence, col=word and cell_val = tfidf
tfidf_sent_vectors = []; # the tfidf-w2v for each sentence/review is stored in this list
row=0;
for sent in tqdm(list_of_sentance): # for each review/sentence
    sent_vec = np.zeros(50) # start with a zero vector of length 50
    weight_sum = 0; # running sum of the tf-idf weights in the sentence/review
for word in sent: # for each word in a review/sentence
if word in w2v_words and word in tfidf_feat:
vec = w2v_model.wv[word]
# tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)]
# to reduce the computation we are
            # dictionary[word] = idf value of word in the whole corpus
            # sent.count(word) = tf value of word in this review
tf_idf = dictionary[word]*(sent.count(word)/len(sent))
sent_vec += (vec * tf_idf)
weight_sum += tf_idf
if weight_sum != 0:
sent_vec /= weight_sum
tfidf_sent_vectors.append(sent_vec)
row += 1100%|██████████| 4986/4986 [00:40<00:00, 123.46it/s]
</code>
| {
"repository": "charanhu/Amazon-Fine-Food-Reviews-Analysis.",
"path": "Amazon_Fine_Food_Reviews_Analysis.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 85128,
"hexsha": "cb05e61597f6a5d5df94593d64a90810bfca48b7",
"max_line_length": 7053,
"avg_line_length": 44.2913631634,
"alphanum_fraction": 0.5234235504
} |
# Notebook from bgoodr/how-to
Path: python/jupyter/basic_jupyter_scipy_tutorial.ipynb
# Introduction _____no_output_____This is a basic tutorial on using Jupyter with the scipy modules._____no_output_____# Example of plotting sine and cosine functions in the same plot_____no_output_____Install matplotlib through conda via:
conda install -y matplotlib_____no_output_____Below we plot a sine function from 0 to 2 pi. Pretty much what you would expect:_____no_output_____
<code>
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 2 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y)
plt.show()_____no_output_____
</code>
The x values limit the range of the plot._____no_output_____Let's get help on the plt.plot function, so as to understand how to use it, in addition to the tutorial at http://matplotlib.org/users/pyplot_tutorial.html_____no_output_____
<code>
help(plt.plot)Help on function plot in module matplotlib.pyplot:
plot(*args, **kwargs)
Plot lines and/or markers to the
:class:`~matplotlib.axes.Axes`. *args* is a variable length
argument, allowing for multiple *x*, *y* pairs with an
optional format string. For example, each of the following is
legal::
plot(x, y) # plot x and y using default line style and color
plot(x, y, 'bo') # plot x and y using blue circle markers
plot(y) # plot y using x as index array 0..N-1
plot(y, 'r+') # ditto, but with red plusses
If *x* and/or *y* is 2-dimensional, then the corresponding columns
will be plotted.
If used with labeled data, make sure that the color spec is not
included as an element in data, as otherwise the last case
``plot("v","r", data={"v":..., "r":...)``
can be interpreted as the first case which would do ``plot(v, r)``
using the default line style and color.
If not used with labeled data (i.e., without a data argument),
an arbitrary number of *x*, *y*, *fmt* groups can be specified, as in::
a.plot(x1, y1, 'g^', x2, y2, 'g-')
Return value is a list of lines that were added.
By default, each line is assigned a different style specified by a
'style cycle'. To change this behavior, you can edit the
axes.prop_cycle rcParam.
The following format string characters are accepted to control
the line style or marker:
================ ===============================
character description
================ ===============================
``'-'`` solid line style
``'--'`` dashed line style
``'-.'`` dash-dot line style
``':'`` dotted line style
``'.'`` point marker
``','`` pixel marker
``'o'`` circle marker
``'v'`` triangle_down marker
``'^'`` triangle_up marker
``'<'`` triangle_left marker
``'>'`` triangle_right marker
``'1'`` tri_down marker
``'2'`` tri_up marker
``'3'`` tri_left marker
``'4'`` tri_right marker
``'s'`` square marker
``'p'`` pentagon marker
``'*'`` star marker
``'h'`` hexagon1 marker
``'H'`` hexagon2 marker
``'+'`` plus marker
``'x'`` x marker
``'D'`` diamond marker
``'d'`` thin_diamond marker
``'|'`` vline marker
``'_'`` hline marker
================ ===============================
The following color abbreviations are supported:
========== ========
character color
========== ========
'b' blue
'g' green
'r' red
'c' cyan
'm' magenta
'y' yellow
'k' black
'w' white
========== ========
In addition, you can specify colors in many weird and
wonderful ways, including full names (``'green'``), hex
strings (``'#008000'``), RGB or RGBA tuples (``(0,1,0,1)``) or
grayscale intensities as a string (``'0.8'``). Of these, the
string specifications can be used in place of a ``fmt`` group,
but the tuple forms can be used only as ``kwargs``.
Line styles and colors are combined in a single format string, as in
``'bo'`` for blue circles.
The *kwargs* can be used to set line properties (any property that has
a ``set_*`` method). You can use this to set a line label (for auto
legends), linewidth, anitialising, marker face color, etc. Here is an
example::
plot([1,2,3], [1,2,3], 'go-', label='line 1', linewidth=2)
plot([1,2,3], [1,4,9], 'rs', label='line 2')
axis([0, 4, 0, 10])
legend()
If you make multiple lines with one plot command, the kwargs
apply to all those lines, e.g.::
plot(x1, y1, x2, y2, antialiased=False)
Neither line will be antialiased.
You do not need to use format strings, which are just
abbreviations. All of the line properties can be controlled
by keyword arguments. For example, you can set the color,
marker, linestyle, and markercolor with::
plot(x, y, color='green', linestyle='dashed', marker='o',
markerfacecolor='blue', markersize=12).
See :class:`~matplotlib.lines.Line2D` for details.
The kwargs are :class:`~matplotlib.lines.Line2D` properties:
agg_filter: unknown
alpha: float (0.0 transparent through 1.0 opaque)
animated: [True | False]
antialiased or aa: [True | False]
axes: an :class:`~matplotlib.axes.Axes` instance
clip_box: a :class:`matplotlib.transforms.Bbox` instance
clip_on: [True | False]
clip_path: [ (:class:`~matplotlib.path.Path`, :class:`~matplotlib.transforms.Transform`) | :class:`~matplotlib.patches.Patch` | None ]
color or c: any matplotlib color
contains: a callable function
dash_capstyle: ['butt' | 'round' | 'projecting']
dash_joinstyle: ['miter' | 'round' | 'bevel']
dashes: sequence of on/off ink in points
drawstyle: ['default' | 'steps' | 'steps-pre' | 'steps-mid' | 'steps-post']
figure: a :class:`matplotlib.figure.Figure` instance
fillstyle: ['full' | 'left' | 'right' | 'bottom' | 'top' | 'none']
gid: an id string
label: string or anything printable with '%s' conversion.
linestyle or ls: ['solid' | 'dashed', 'dashdot', 'dotted' | (offset, on-off-dash-seq) | ``'-'`` | ``'--'`` | ``'-.'`` | ``':'`` | ``'None'`` | ``' '`` | ``''``]
linewidth or lw: float value in points
marker: :mod:`A valid marker style <matplotlib.markers>`
markeredgecolor or mec: any matplotlib color
markeredgewidth or mew: float value in points
markerfacecolor or mfc: any matplotlib color
markerfacecoloralt or mfcalt: any matplotlib color
markersize or ms: float
markevery: [None | int | length-2 tuple of int | slice | list/array of int | float | length-2 tuple of float]
path_effects: unknown
picker: float distance in points or callable pick function ``fn(artist, event)``
pickradius: float distance in points
rasterized: [True | False | None]
sketch_params: unknown
snap: unknown
solid_capstyle: ['butt' | 'round' | 'projecting']
solid_joinstyle: ['miter' | 'round' | 'bevel']
transform: a :class:`matplotlib.transforms.Transform` instance
url: a url string
visible: [True | False]
xdata: 1D array
ydata: 1D array
zorder: any number
kwargs *scalex* and *scaley*, if defined, are passed on to
:meth:`~matplotlib.axes.Axes.autoscale_view` to determine
whether the *x* and *y* axes are autoscaled; the default is
*True*.
.. note::
In addition to the above described arguments, this function can take a
**data** keyword argument. If such a **data** argument is given, the
following arguments are replaced by **data[<arg>]**:
* All arguments with the following names: 'x', 'y'.
</code>
Let's add the 'bo' format string to the mix to get dots on the trace:_____no_output_____
<code>
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 2 * math.pi, 50)
y = np.sin(x)
plt.plot(x, y, 'bo')
plt.show()_____no_output_____
</code>
Let's add two traces; the second one is a cosine function:_____no_output_____
<code>
import math
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,2 * math.pi, 50)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1, 'bo', x, y2, 'r+')
plt.show()_____no_output_____
</code>
# Example of using optimize.fmin on the sine function_____no_output_____
<code>
import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2", which is pretty arbitrary. You will need to use a better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.fmin(func2, math.pi - 0.01)Optimization terminated successfully.
Current function value: -1.000000
Iterations: 17
Function evaluations: 34
</code>
Pretty much what we expected. There is a minimum of -1 for this sine wave function (amplitude of 1 here ... would have been different if we multiplied the sine wave by some other factor). We can call the func2 function to see the value at that point, which is pretty darn close to -1:_____no_output_____
<code>
func2(4.71237414)_____no_output_____math.pi * 2 * 0.75_____no_output_____
</code>
# Example of using optimize.root on the sine function_____no_output_____
<code>
help(optimize.root)Help on function root in module scipy.optimize._root:
root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None)
Find a root of a vector function.
Parameters
----------
fun : callable
A vector function to find a root of.
x0 : ndarray
Initial guess.
args : tuple, optional
Extra arguments passed to the objective function and its Jacobian.
method : str, optional
Type of solver. Should be one of
- 'hybr' :ref:`(see here) <optimize.root-hybr>`
- 'lm' :ref:`(see here) <optimize.root-lm>`
- 'broyden1' :ref:`(see here) <optimize.root-broyden1>`
- 'broyden2' :ref:`(see here) <optimize.root-broyden2>`
- 'anderson' :ref:`(see here) <optimize.root-anderson>`
- 'linearmixing' :ref:`(see here) <optimize.root-linearmixing>`
- 'diagbroyden' :ref:`(see here) <optimize.root-diagbroyden>`
- 'excitingmixing' :ref:`(see here) <optimize.root-excitingmixing>`
- 'krylov' :ref:`(see here) <optimize.root-krylov>`
- 'df-sane' :ref:`(see here) <optimize.root-dfsane>`
jac : bool or callable, optional
If `jac` is a Boolean and is True, `fun` is assumed to return the
value of Jacobian along with the objective function. If False, the
Jacobian will be estimated numerically.
`jac` can also be a callable returning the Jacobian of `fun`. In
this case, it must accept the same arguments as `fun`.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific
options.
callback : function, optional
Optional callback function. It is called on every iteration as
``callback(x, f)`` where `x` is the current solution and `f`
the corresponding residual. For all methods but 'hybr' and 'lm'.
options : dict, optional
A dictionary of solver options. E.g. `xtol` or `maxiter`, see
:obj:`show_options()` for details.
Returns
-------
sol : OptimizeResult
The solution represented as a ``OptimizeResult`` object.
Important attributes are: ``x`` the solution array, ``success`` a
Boolean flag indicating if the algorithm exited successfully and
``message`` which describes the cause of the termination. See
`OptimizeResult` for a description of other attributes.
See also
--------
show_options : Additional options accepted by the solvers
Notes
-----
This section describes the available solvers that can be selected by the
'method' parameter. The default method is *hybr*.
Method *hybr* uses a modification of the Powell hybrid method as
implemented in MINPACK [1]_.
Method *lm* solves the system of nonlinear equations in a least squares
sense using a modification of the Levenberg-Marquardt algorithm as
implemented in MINPACK [1]_.
Method *df-sane* is a derivative-free spectral method. [3]_
Methods *broyden1*, *broyden2*, *anderson*, *linearmixing*,
*diagbroyden*, *excitingmixing*, *krylov* are inexact Newton methods,
with backtracking or full line searches [2]_. Each method corresponds
to a particular Jacobian approximations. See `nonlin` for details.
- Method *broyden1* uses Broyden's first Jacobian approximation, it is
known as Broyden's good method.
- Method *broyden2* uses Broyden's second Jacobian approximation, it
is known as Broyden's bad method.
- Method *anderson* uses (extended) Anderson mixing.
- Method *Krylov* uses Krylov approximation for inverse Jacobian. It
is suitable for large-scale problem.
- Method *diagbroyden* uses diagonal Broyden Jacobian approximation.
- Method *linearmixing* uses a scalar Jacobian approximation.
- Method *excitingmixing* uses a tuned diagonal Jacobian
approximation.
.. warning::
The algorithms implemented for methods *diagbroyden*,
*linearmixing* and *excitingmixing* may be useful for specific
problems, but whether they will work may depend strongly on the
problem.
.. versionadded:: 0.11.0
References
----------
.. [1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom.
1980. User Guide for MINPACK-1.
.. [2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear
Equations. Society for Industrial and Applied Mathematics.
<http://www.siam.org/books/kelley/>
.. [3] W. La Cruz, J.M. Martinez, M. Raydan. Math. Comp. 75, 1429 (2006).
Examples
--------
The following functions define a system of nonlinear equations and its
jacobian.
>>> def fun(x):
... return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
... 0.5 * (x[1] - x[0])**3 + x[1]]
>>> def jac(x):
... return np.array([[1 + 1.5 * (x[0] - x[1])**2,
... -1.5 * (x[0] - x[1])**2],
... [-1.5 * (x[1] - x[0])**2,
... 1 + 1.5 * (x[1] - x[0])**2]])
A solution can be obtained as follows.
>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639, 0.1588361])
</code>
Let's evaluate the func2 function (which we know is a sine function) not quite at the point where it is zero (at pi):_____no_output_____
<code>
func2(math.pi * 0.75)_____no_output_____import math
import numpy as np
from scipy import linalg, optimize
# Here, we called this function "func2", which is pretty arbitrary. You will need to use a better name in practice, of course:
def func2(x):
return np.sin(x)
optimize.root(func2, math.pi * 0.75)_____no_output_____
</code>
So it found the root at pi. Notice the "x" value at the end of the output.
The verbose convergence printout seen earlier comes from `optimize.fmin` and can be suppressed with its `disp` keyword argument; `optimize.root` itself simply returns an `OptimizeResult`, whose `x` attribute holds the root._____no_output_____
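A minimal sketch of both variants, reusing the `func2` defined above (`disp` is a standard keyword of `optimize.fmin`):_____no_output_____
<code>
# fmin without the convergence printout
xmin = optimize.fmin(func2, math.pi - 0.01, disp=0)
print(xmin)
# root returns an OptimizeResult; the root itself is in sol.x
sol = optimize.root(func2, math.pi * 0.75)
print(sol.x)_____no_output_____
</code>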
| {
"repository": "bgoodr/how-to",
"path": "python/jupyter/basic_jupyter_scipy_tutorial.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 61785,
"hexsha": "cb05f179356f4eb9027b713bd958a94773f3b42c",
"max_line_length": 18250,
"avg_line_length": 88.5171919771,
"alphanum_fraction": 0.7714170106
} |
# Notebook from Nop287/game_db
Path: TinyDB.ipynb
<code>
import pandas as pd
import os
import json
import re
from tinydb import TinyDB, Query
import sqlalchemy as db_____no_output_____
</code>
# Building the Database
We use a database in the backend to serve the data over a REST API to our client. The database is being built with the data frame generated using the `build_game_db.ipynb` notebook. We stored it in the feather format as it is both efficient and compact. We start by reading the stored data frame._____no_output_____
<code>
df_tmp = pd.read_feather("spiele_db.ftr")
df_tmp_____no_output_____
</code>
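The introduction above mentions serving this data over a REST API. As a hedged sketch only (assuming Flask; the route and port are illustrative and not part of this repository), a minimal read-only endpoint over the TinyDB file built below (`spiele_tinydb.json`) could look like this:_____no_output_____
<code>
from flask import Flask, jsonify
from tinydb import TinyDB, Query
import re

app = Flask(__name__)
db = TinyDB('spiele_tinydb.json')

# illustrative endpoint: all games whose title contains the given pattern
@app.route('/games/<pattern>')
def games_by_title(pattern):
    q = Query()
    hits = db.search(q.game_title.matches('.*' + re.escape(pattern) + '.*', flags=re.IGNORECASE))
    return jsonify(hits)

# app.run(port=5000)  # uncomment to start the development server_____no_output_____
</code>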
We use TinyDB as our backend. Note that TinyDB is not fit for production use. For up to a few hundred or even a few thousand entries it will do. The advantage is that it is a document database, so it should be possible to port it with reasonable effort to a larger scale document database such as MongoDB or CouchDB. Note that this will make our architecture more complex, since we will need a separate database server.
We create a new TinyDB database which is basically a JSON file in a specific structure._____no_output_____
<code>
if os.path.exists('spiele_tinydb.json'):
os.remove('spiele_tinydb.json')
game_db = TinyDB('spiele_tinydb.json')_____no_output_____
</code>
Now we just insert our data frame row by row. Note that this is time consuming. As a workaround we could directly manipulate the JSON but this may not work with future TinyDB versions._____no_output_____
<code>
for i in df_tmp.index:
game_db.insert(json.loads(df_tmp.loc[i].to_json()))_____no_output_____
</code>
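A quicker alternative to the row-by-row loop above, assuming the current TinyDB API, is `insert_multiple`, which writes all documents in a single call:_____no_output_____
<code>
# one-shot bulk insert instead of the loop above
records = json.loads(df_tmp.to_json(orient='records'))
game_db.insert_multiple(records)_____no_output_____
</code>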
Once we have the TinyDB database we can query it. The syntax is a bit clumsy, but generally it works. In my configuration, case-insensitive search did not work for some unknown reason._____no_output_____
<code>
my_query = Query()
query_result = game_db.search(my_query.game_title.matches('.*Dark.*', flags=re.IGNORECASE))
for item in query_result:
print(item["game_title"], " / ", item["game_igdb_name"], ": ", round(item["game_rating"], 2), " / ", item["game_rating_count"])Darksiders II Deathinitive Edition / Darksiders II: Deathinitive Edition : 74.78 / 59
Dark Cloud 2 / Dark Cloud 2 : 87.56 / 33
Darksiders III / Darksiders III : 70.46 / 80
</code>
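As a hedged alternative to the `matches` call above (which, like `re.match`, anchors at the start of the string), TinyDB's `Query.search` method behaves like `re.search`, so plain substring patterns work without the `.*` wrapping:_____no_output_____
<code>
# substring match anywhere in the title, case-insensitive
query_result = game_db.search(my_query.game_title.search('dark', flags=re.IGNORECASE))
for item in query_result:
    print(item["game_title"])_____no_output_____
</code>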
Since we are dealing with JSON here, there will be entries where some values are not set. Unfortunately, searching for them directly gives an error, so we have to use a lambda function for searching._____no_output_____
<code>
def search_by_rating(min_rating = 0, max_rating = 100):
my_query = Query()
    test_func = lambda s: isinstance(s, float) and min_rating <= s <= max_rating
return game_db.search(my_query.game_rating.test(test_func))
query_result = search_by_rating(90)
print(print(json.dumps(query_result, indent=4, sort_keys=True)))[
{
"game_buy_date": " 8/4/2018",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Indie",
"game_igdb_id": 20919,
"game_igdb_name": "In Space We Brawl",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 Mac PlayStation 4 Xbox One",
"game_rating": 90.0,
"game_rating_count": 1,
"game_size": " 238.81 MB ",
"game_themes": "Action Science fiction",
"game_title": "In Space We Brawl",
"games_found": 1,
"hltb_main": 0.5333333333,
"igdb_json": "[\n {\n \"id\": 20919,\n \"genres\": [\n 5,\n 32\n ],\n \"name\": \"In Space We Brawl\",\n \"platforms\": [\n 6,\n 9,\n 14,\n 48,\n 49\n ],\n \"release_dates\": [\n 71091,\n 71092,\n 151108,\n 151109,\n 218672\n ],\n \"summary\": \"In Space We Brawl is a frantic couch twin-stick shooter. Get ready for the excitement of pure local multiplayer matches for up to 4 players, including teams. Choose from more than 150 combinations of weapons and ships and conquer maps full of obstacles such as asteroids and black holes!\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 90.0,\n \"total_rating_count\": 1,\n \"url\": \"https://www.igdb.com/games/in-space-we-brawl\"\n }\n]",
"igdb_release_date": 1455235200000,
"summary": "In Space We Brawl is a frantic couch twin-stick shooter. Get ready for the excitement of pure local multiplayer matches for up to 4 players, including teams. Choose from more than 150 combinations of weapons and ships and conquer maps full of obstacles such as asteroids and black holes!",
"web-scraper-order": "1003"
},
{
"game_buy_date": " 13/3/2018",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Role-playing (RPG)",
"game_igdb_id": 11156,
"game_igdb_name": "Horizon Zero Dawn",
"game_platforms": "PC (Microsoft Windows) PlayStation 4",
"game_rating": 90.1021994732,
"game_rating_count": 1049,
"game_size": " 43.04 GB ",
"game_themes": "Action Science fiction Open world",
"game_title": "Horizon Zero Dawn\u2122",
"games_found": 5,
"hltb_main": 22.0,
"igdb_json": "[\n {\n \"id\": 11156,\n \"artworks\": [\n 3263,\n 3264,\n 3265,\n 3266,\n 3267,\n 3268,\n 3269,\n 3270,\n 3271,\n 3272,\n 3273,\n 3274\n ],\n \"genres\": [\n 5,\n 12\n ],\n \"name\": \"Horizon Zero Dawn\",\n \"platforms\": [\n 6,\n 48\n ],\n \"release_dates\": [\n 65848,\n 103575,\n 103576,\n 203967\n ],\n \"summary\": \"Horizon Zero Dawn, an exhilarating new action role playing game exclusively for the PlayStation 4 system, developed by the award winning Guerrilla Games, creatos of PlayStation\\u0027s venerated Killzone franchise. As Horizon Zero Dawn\\u0027s main protagonist Aloy, a skilled hunter, explore a vibrant and lush world inhabited by mysterious mechanized creatures.\",\n \"themes\": [\n 1,\n 18,\n 38\n ],\n \"total_rating\": 90.10219947324345,\n \"total_rating_count\": 1049,\n \"url\": \"https://www.igdb.com/games/horizon-zero-dawn\"\n },\n {\n \"id\": 72870,\n \"genres\": [\n 5,\n 12\n ],\n \"name\": \"Horizon Zero Dawn Complete Edition\",\n \"platforms\": [\n 6,\n 48\n ],\n \"release_dates\": [\n 130573,\n 203015\n ],\n \"summary\": \"In an era where machines roam the land and mankind is no longer the dominant species, a young hunter named Aloy embarks on a journey to discover her destiny. Explore a vibrant and lush world inhabited by mysterious mechanized creatures. Embark on a compelling, emotional journey and unravel mysteries of tribal societies, ancient artefacts and enhanced technologies that will determine the fate of this planet and of life itself. \\n \\nHorizon: Zero Dawn Complete Edition includes Horizon Zero Dawn and the frozen wilds expansion.\",\n \"themes\": [\n 1,\n 18,\n 38\n ],\n \"total_rating\": 92.41131370124066,\n \"total_rating_count\": 66,\n \"url\": \"https://www.igdb.com/games/horizon-zero-dawn-complete-edition\"\n },\n {\n \"id\": 37083,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Horizon Zero Dawn - The Frozen Wilds\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 108620,\n 133762,\n 133763\n ],\n \"summary\": \"The Frozen Wilds contains additional content for Horizon Zero Dawn, including new storylines, characters and experiences in a beautiful but unforgiving new area.\",\n \"themes\": [\n 1,\n 18,\n 38\n ],\n \"total_rating\": 85.97088953187196,\n \"total_rating_count\": 72,\n \"url\": \"https://www.igdb.com/games/horizon-zero-dawn-the-frozen-wilds\"\n },\n {\n \"id\": 42920,\n \"genres\": [\n 5,\n 12\n ],\n \"name\": \"Horizon Zero Dawn Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 89802\n ],\n \"summary\": \"The plot revolves around Aloy, a hunter and archer living in a world overrun by robots. Having been cloistered her whole life, she sets out to discover the dangers that kept her sheltered. The character makes use of ranged, melee weapons and stealth tactics to combat the mechanised creatures, whose remains can also be looted for resources. A skill tree facilitates gameplay improvements. 
The game features an open world environment for Aloy to explore, divided into tribes that hold side quests to undertake, while the main story guides her throughout the whole world.\",\n \"themes\": [\n 1,\n 18,\n 38\n ],\n \"url\": \"https://www.igdb.com/games/horizon-zero-dawn-collectors-edition\"\n },\n {\n \"id\": 112874,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Horizon Forbidden West\",\n \"platforms\": [\n 48,\n 167\n ],\n \"release_dates\": [\n 217641,\n 217642\n ],\n \"summary\": \"Horizon Forbidden West continues Aloy\u2019s story as she moves west to a far-future America to brave a majestic, but dangerous frontier where she\u2019ll face awe-inspiring machines and mysterious new threats.\",\n \"themes\": [\n 1,\n 18,\n 38\n ],\n \"url\": \"https://www.igdb.com/games/horizon-forbidden-west\"\n }\n]",
"igdb_release_date": 1488326400000,
"summary": "Horizon Zero Dawn, an exhilarating new action role playing game exclusively for the PlayStation 4 system, developed by the award winning Guerrilla Games, creatos of PlayStation's venerated Killzone franchise. As Horizon Zero Dawn's main protagonist Aloy, a skilled hunter, explore a vibrant and lush world inhabited by mysterious mechanized creatures.",
"web-scraper-order": "1009"
},
{
"game_buy_date": " 6/3/2018",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Role-playing (RPG) Adventure",
"game_igdb_id": 7334,
"game_igdb_name": "Bloodborne",
"game_platforms": "PlayStation 4",
"game_rating": 91.3362859512,
"game_rating_count": 846,
"game_size": " 27.19 GB ",
"game_themes": "Action Fantasy Horror",
"game_title": "Bloodborne\u2122",
"games_found": 6,
"hltb_main": 34.0,
"igdb_json": "[\n {\n \"id\": 7334,\n \"artworks\": [\n 161,\n 162,\n 163,\n 164,\n 165,\n 166,\n 167,\n 168\n ],\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Bloodborne\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 23719,\n 101953,\n 101954\n ],\n \"summary\": \"An action RPG in which the player embodies a Hunter who, after being transfused with the mysterious blood local to the city of Yharnam, sets off into a \\\"night of the Hunt\\\", an extended night in which Hunters may phase in and out of dream and reality in order to thin the outbreak of abominable beasts that plague the land and, for the more resilient and insightful Hunters, uncover the answers to the Hunt\\u0027s many mysteries.\",\n \"themes\": [\n 1,\n 17,\n 19\n ],\n \"total_rating\": 91.33628595115175,\n \"total_rating_count\": 846,\n \"url\": \"https://www.igdb.com/games/bloodborne\"\n },\n {\n \"id\": 14647,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Bloodborne: The Old Hunters\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 40863\n ],\n \"summary\": \"The Old Hunters is the first Expansion for Bloodborne. It will feature all new Locations, Bosses, Weapons, and Armor.\\n\\nSet in a nightmare world where hunters from the past are trapped forever, explore brand new stages full of dangers, rewards and deadly beasts to overcome. You\u2019ll find multiple new outfits and weapons to add to your arsenal as well as additional magic to wield and add more variety to your combat strategy.\\n\\nWith new story details, learn the tale of hunters who once made Yharnam their hunting grounds, meet new NPCs, and discover another side of the history and world of Bloodborne.\",\n \"themes\": [\n 1,\n 17,\n 19\n ],\n \"total_rating\": 91.0160745910903,\n \"total_rating_count\": 113,\n \"url\": \"https://www.igdb.com/games/bloodborne-the-old-hunters--1\"\n },\n {\n \"id\": 44651,\n \"genres\": [\n 12\n ],\n \"name\": \"Bloodborne: Nightmare Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 99903\n ],\n \"summary\": \"Face your fears as you search for answers in the ancient city of Yharnam, now cursed with a strange endemic illness spreading through the streets like wildfire. Danger, death and madness lurk around every corner of this dark and horrific world, and you must discover its darkest secrets in order to survive. \\n \\nNightmare Edition comes with: \\nBloodborne PS4 game \\nSteelBook case \\nPremium art book with exclusive concept art \\nDigital soundtrack of the game \\nBloodborne gothic notebook \\nQuill \\u0026 red ink set \\n\u201cTop Hat\u201d Messenger skin \\nBell trinket\",\n \"themes\": [\n 1,\n 17,\n 19\n ],\n \"url\": \"https://www.igdb.com/games/bloodborne-nightmare-edition\"\n },\n {\n \"id\": 44542,\n \"genres\": [\n 12\n ],\n \"name\": \"Bloodborne: Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 99910\n ],\n \"summary\": \"Face your fears as you search for answers in the ancient city of Yharnam, now cursed with a strange endemic illness spreading through the streets like wildfire. Danger, death and madness lurk around every corner of this dark and horrific world, and you must discover its darkest secrets in order to survive. \\n \\nA Terrifying New World: Journey to a horror-filled gothic city where deranged mobs and nightmarish creatures lurk around every corner. 
\\n \\nStrategic Action Combat: Armed with a unique arsenal of weaponry, including guns and saw cleavers, you\\u0027ll need wits, strategy and reflexes to take down the agile and intelligent enemies that guard the city\\u0027s dark secrets. \\n \\nA New Generation of Action RPG: Stunningly detailed gothic environments, atmospheric lighting, and advanced new online experiences showcase the power and prowess of the PlayStation(R)4 system.\",\n \"themes\": [\n 1,\n 17,\n 19\n ],\n \"url\": \"https://www.igdb.com/games/bloodborne-collectors-edition\"\n },\n {\n \"id\": 136691,\n \"name\": \"Bloodborne: The Old Hunters Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 214240\n ],\n \"url\": \"https://www.igdb.com/games/bloodborne-the-old-hunters-edition\"\n },\n {\n \"id\": 42931,\n \"genres\": [\n 12\n ],\n \"name\": \"Bloodborne: Game Of The Year Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 168817\n ],\n \"summary\": \"With new story details, learn the tale of hunters who once made Yharnam their hunting grounds, meet new NPCs, and discover another side of the history and world of Bloodborne. Includes: \\nThe Original Bloodborne Experience / Bloodborne: The Old Hunters Expansion / Bloodborne: The Old Hunters\",\n \"themes\": [\n 1,\n 17,\n 19\n ],\n \"total_rating\": 96.3163529387505,\n \"total_rating_count\": 52,\n \"url\": \"https://www.igdb.com/games/bloodborne-game-of-the-year-edition\"\n }\n]",
"igdb_release_date": 1427414400000,
"summary": "An action RPG in which the player embodies a Hunter who, after being transfused with the mysterious blood local to the city of Yharnam, sets off into a \"night of the Hunt\", an extended night in which Hunters may phase in and out of dream and reality in order to thin the outbreak of abominable beasts that plague the land and, for the more resilient and insightful Hunters, uncover the answers to the Hunt's many mysteries.",
"web-scraper-order": "1015"
},
{
"game_buy_date": " 3/10/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Tactical Adventure",
"game_igdb_id": 1985,
"game_igdb_name": "Metal Gear Solid V: The Phantom Pain",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 Xbox 360 PlayStation 4 Xbox One",
"game_rating": 90.4961059572,
"game_rating_count": 665,
"game_size": " 26.97 GB ",
"game_themes": "Action Fantasy Historical Stealth Open world",
"game_title": "METAL GEAR SOLID V: THE PHANTOM PAIN",
"games_found": 2,
"hltb_main": 46.0,
"igdb_json": "[\n {\n \"id\": 1985,\n \"artworks\": [\n 326,\n 328,\n 329,\n 330,\n 331,\n 4593,\n 4594,\n 6630,\n 8233,\n 8234,\n 8235,\n 8236\n ],\n \"genres\": [\n 5,\n 24,\n 31\n ],\n \"name\": \"Metal Gear Solid V: The Phantom Pain\",\n \"platforms\": [\n 6,\n 9,\n 12,\n 48,\n 49\n ],\n \"release_dates\": [\n 28440,\n 28543,\n 28544,\n 28545,\n 28546,\n 108035,\n 108036,\n 122509,\n 133875,\n 133876,\n 136144\n ],\n \"summary\": \"The 5th installment of the Metal Gear Solid saga, Metal Gear Solid V: The Phantom Pain continues the story of Big Boss (aka Naked Snake, aka David), connecting the story lines from Metal Gear Solid: Peace Walker, Metal Gear Solid: Ground Zeroes, and the rest of the Metal Gear Universe.\",\n \"themes\": [\n 1,\n 17,\n 22,\n 23,\n 38\n ],\n \"total_rating\": 90.4961059572052,\n \"total_rating_count\": 665,\n \"url\": \"https://www.igdb.com/games/metal-gear-solid-v-the-phantom-pain\"\n },\n {\n \"id\": 42945,\n \"genres\": [\n 5,\n 24,\n 31\n ],\n \"name\": \"Metal Gear Solid V: The Phantom Pain Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 89755\n ],\n \"summary\": \"Metal Gear Solid V: The Phantom Pain Collector\\u0027s Edition Includes: Metal Gear Solid V: The Phantom Pain Game Collectible Steelbook Case Exclusive Packaging Map Half Scale Replica of Snake\\u0027s Bionic Arm Behind the Scenes Documentary and Trailers Blu-ray Additional Game Content: Customizable \\u0027Venom Snake\\u0027 Emblem Snake Costume X 4 Cardboard Box X 3 Weapon and Shield Set X 4 MGO Item X 4 XP Boost\",\n \"themes\": [\n 1,\n 17,\n 22,\n 23,\n 38\n ],\n \"url\": \"https://www.igdb.com/games/metal-gear-solid-v-the-phantom-pain-collectors-edition\"\n }\n]",
"igdb_release_date": 1441065600000,
"summary": "The 5th installment of the Metal Gear Solid saga, Metal Gear Solid V: The Phantom Pain continues the story of Big Boss (aka Naked Snake, aka David), connecting the story lines from Metal Gear Solid: Peace Walker, Metal Gear Solid: Ground Zeroes, and the rest of the Metal Gear Universe.",
"web-scraper-order": "1047"
},
{
"game_buy_date": " 7/4/2020",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Adventure",
"game_igdb_id": 7331,
"game_igdb_name": "Uncharted 4: A Thief's End",
"game_platforms": "PlayStation 4",
"game_rating": 92.7313918351,
"game_rating_count": 1306,
"game_size": " 48.69 GB ",
"game_themes": "Action Fantasy Historical",
"game_title": "Uncharted\u2122 4: A Thief\u2019s End",
"games_found": 3,
"hltb_main": 15.0,
"igdb_json": "[\n {\n \"id\": 7331,\n \"artworks\": [\n 6303,\n 6304,\n 6305,\n 6306,\n 6307\n ],\n \"genres\": [\n 5,\n 31\n ],\n \"name\": \"Uncharted 4: A Thief\\u0027s End\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 47481,\n 106655,\n 106656\n ],\n \"summary\": \"Several years after his last adventure, retired fortune hunter, Nathan Drake, is forced back into the world of thieves. With the stakes much more personal, Drake embarks on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. His greatest adventure will test his physical limits, his resolve, and ultimately what he\\u0027s willing to sacrifice to save the ones he loves.\",\n \"themes\": [\n 1,\n 17,\n 22\n ],\n \"total_rating\": 92.73139183508785,\n \"total_rating_count\": 1306,\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thief-s-end\"\n },\n {\n \"id\": 41874,\n \"name\": \"Uncharted 4: A Thief\\u0027s End Special Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 154696\n ],\n \"summary\": \"The Special Edition of Uncharted 4: A Thief\u2019s End includes: \\n \\n- Collector\u2019s SteelBook Case designed by Alexander \u201cThat Kid Who Draws\u201d Iaccarino \\n- 48 Page Hardcover Mini Art Book by Naughty Dog and Dark Horse \\n- Naughty Dog \\u0026 Pirate Sigil Sticker Sheet \\n- Multiplayer Bonuses: \\n- Multiplayer Outfit \u2013 Heist Drake \\n- Naughty Dog Points \u2013 Redeem the points to unlock new multiplayer content and character upgrades. \\n- Snow Weapon Skin\",\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thiefs-end-special-edition\"\n },\n {\n \"id\": 41879,\n \"genres\": [\n 31\n ],\n \"name\": \"Uncharted 4: A Thief\\u0027s End Libertalia Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 89763\n ],\n \"summary\": \"Uncharted 4: A Thief\\u0027s End is an upcoming American action-adventure video game published by Sony Computer Entertainment and developed by Naughty Dog exclusively for the PlayStation 4. Initially teased at Spike TV\\u0027s PS4 launch event on November 14, 2013, a full in-engine trailer confirmed the title during Sony\\u0027s E3 2014 press conference on June 9, 2014. The game is set to release in 2015. \\n \\nSeveral years after the events of Uncharted 3: Drake\\u0027s Deception, Nathan Drake, who retired as a fortune hunter, will embark on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. Naughty Dog outlined the game\\u0027s plot as \\\"his greatest adventure yet and will test his physical limits, his resolve, and ultimately what he\\u0027s willing to sacrifice to save the ones he loves\\\".\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thiefs-end-libertalia-collectors-edition\"\n }\n]",
"igdb_release_date": 1462838400000,
"summary": "Several years after his last adventure, retired fortune hunter, Nathan Drake, is forced back into the world of thieves. With the stakes much more personal, Drake embarks on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. His greatest adventure will test his physical limits, his resolve, and ultimately what he's willing to sacrifice to save the ones he loves.",
"web-scraper-order": "855"
},
{
"game_buy_date": " 1/10/2019",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Adventure",
"game_igdb_id": 6036,
"game_igdb_name": "The Last of Us Remastered",
"game_platforms": "PlayStation 4",
"game_rating": 95.9493647506,
"game_rating_count": 705,
"game_size": " 47.19 GB ",
"game_themes": "Action Horror Survival Stealth",
"game_title": "The Last of Us\u2122 Remastered",
"games_found": 2,
"hltb_main": 14.0,
"igdb_json": "[\n {\n \"id\": 6036,\n \"artworks\": [\n 6199,\n 6200,\n 6201\n ],\n \"genres\": [\n 5,\n 31\n ],\n \"name\": \"The Last of Us Remastered\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 106246,\n 106247,\n 150147,\n 150148\n ],\n \"summary\": \"Winner of over 200 game of the year awards, The Last of Us has been remastered for the PlayStation\u00ae4. Now featuring higher resolution character models, improved shadows and lighting, in addition to several other gameplay improvements. \\n \\nAbandoned cities reclaimed by nature. A population decimated by a modern plague. Survivors are killing each other for food, weapons whatever they can get their hands on. Joel, a brutal survivor, and Ellie, a brave young teenage girl who is wise beyond her years, must work together if they hope to survive their journey across the US. \\n \\nThe Last of Us: Remastered includes the Abandoned Territories Map Pack, Reclaimed Territories Map Pack, and the critically acclaimed The Last of Us: Left Behind Single Player campaign. \\n \\nPS4 PRO ENHANCED: \\n- PS4 Pro HD \\n- Dynamic 4K Gaming \\n- Greater Draw Distance \\n- Visual FX \\n- Vivid Textures \\n- Deep Shadows \\n- High Fidelity Assets \\n- HDR\",\n \"themes\": [\n 1,\n 19,\n 21,\n 23\n ],\n \"total_rating\": 95.94936475060166,\n \"total_rating_count\": 705,\n \"url\": \"https://www.igdb.com/games/the-last-of-us-remastered\"\n },\n {\n \"id\": 25984,\n \"name\": \"The Last Of Us Remastered: Agility Survival Skill\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 61048\n ],\n \"summary\": \"Climb, vault, and move while crouching faster than you could before. At level 3 you are nearly impossible to hear while moving. For one price, get this content for your PS4\u2122 and PS3\u2122 systems. Purchase it on one system and it will automatically appear for download on the other system at no additional cost.\",\n \"url\": \"https://www.igdb.com/games/the-last-of-us-remastered-agility-survival-skill\"\n }\n]",
"igdb_release_date": 1406851200000,
"summary": "Winner of over 200 game of the year awards, The Last of Us has been remastered for the PlayStation\u00ae4. Now featuring higher resolution character models, improved shadows and lighting, in addition to several other gameplay improvements. \n \nAbandoned cities reclaimed by nature. A population decimated by a modern plague. Survivors are killing each other for food, weapons whatever they can get their hands on. Joel, a brutal survivor, and Ellie, a brave young teenage girl who is wise beyond her years, must work together if they hope to survive their journey across the US. \n \nThe Last of Us: Remastered includes the Abandoned Territories Map Pack, Reclaimed Territories Map Pack, and the critically acclaimed The Last of Us: Left Behind Single Player campaign. \n \nPS4 PRO ENHANCED: \n- PS4 Pro HD \n- Dynamic 4K Gaming \n- Greater Draw Distance \n- Visual FX \n- Vivid Textures \n- Deep Shadows \n- High Fidelity Assets \n- HDR",
"web-scraper-order": "881"
},
{
"game_buy_date": " 1/5/2019",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Role-playing (RPG) Adventure",
"game_igdb_id": 9927,
"game_igdb_name": "Persona 5",
"game_platforms": "PlayStation 3 PlayStation 4",
"game_rating": 93.7039991179,
"game_rating_count": 535,
"game_size": " 20.2 GB ",
"game_themes": "Action Fantasy Stealth",
"game_title": "Persona 5",
"games_found": 7,
"hltb_main": 97.0,
"igdb_json": "[\n {\n \"id\": 9927,\n \"artworks\": [\n 3797,\n 3798,\n 3799,\n 3800,\n 3801,\n 6095,\n 6096,\n 6097\n ],\n \"genres\": [\n 8,\n 12,\n 31\n ],\n \"name\": \"Persona 5\",\n \"platforms\": [\n 9,\n 48\n ],\n \"release_dates\": [\n 50396,\n 60562,\n 60564,\n 104841,\n 104842,\n 104843\n ],\n \"summary\": \"Persona 5, a turn-based JRPG with visual novel elements, follows a high school student with a criminal record for a crime he didn\\u0027t commit. Soon he meets several characters who share similar fates to him, and discovers a metaphysical realm which allows him and his friends to channel their pent-up frustrations into becoming a group of vigilantes reveling in aesthetics and rebellion while fighting corruption.\",\n \"themes\": [\n 1,\n 17,\n 23\n ],\n \"total_rating\": 93.70399911794084,\n \"total_rating_count\": 535,\n \"url\": \"https://www.igdb.com/games/persona-5\"\n },\n {\n \"id\": 117731,\n \"genres\": [\n 12,\n 25\n ],\n \"name\": \"Persona 5 Scramble: The Phantom Strikers\",\n \"platforms\": [\n 6,\n 48,\n 130\n ],\n \"release_dates\": [\n 179497,\n 179498,\n 220695,\n 220696,\n 220697\n ],\n \"summary\": \"Persona 5 Scramble is crossover between Koei Tecmo\\u0027s hack and slash Dynasty Warriors series, and Atlus\\u0027s turn-based role-playing game Persona series. As a result, it features gameplay elements from both, such as the real-time action combat of the former with the turn-based Persona-battling aspect of the latter. The game is set six months after the events of Persona 5, and follows Joker and the rest of the Phantom Thieves of Hearts as they end up in a mysterious version of Tokyo filled with supernatural enemies.\",\n \"themes\": [\n 1,\n 17\n ],\n \"url\": \"https://www.igdb.com/games/persona-5-scramble-the-phantom-strikers\"\n },\n {\n \"id\": 114283,\n \"artworks\": [\n 7817,\n 7818,\n 7819\n ],\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Persona 5 Royal\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 168747,\n 180874,\n 180875,\n 180876\n ],\n \"summary\": \"An enhanced version of Persona 5 with some new characters and a third semester added to the game. Released Internationally in 2020.\",\n \"themes\": [\n 1,\n 17\n ],\n \"total_rating\": 96.8344481071584,\n \"total_rating_count\": 59,\n \"url\": \"https://www.igdb.com/games/persona-5-royal\"\n },\n {\n \"id\": 74879,\n \"genres\": [\n 8,\n 12,\n 31\n ],\n \"name\": \"Persona 5: Ultimate Edition\",\n \"platforms\": [\n 9,\n 48\n ],\n \"release_dates\": [\n 180836,\n 180837,\n 180838,\n 180839\n ],\n \"summary\": \"Persona 5, a turn-based JRPG with visual novel elements, follows a high school student with a criminal record for a crime he didn\\u0027t commit. Soon he meets several characters who share similar fates to him, and discovers a metaphysical realm which allows him and his friends to channel their pent-up frustrations into becoming a group of vigilantes reveling in aesthetics and rebellion while fighting corruption. 
\\n \\nThis special bundle contains: \\n* Persona 5 \\n* all additional Personas \\n* all special Costumes \\n* Healing Item Set \\n* Skill Card Set \\n* Japanese Audio Track \\n* New Difficulty Level (Merciless)\",\n \"themes\": [\n 1,\n 17,\n 23\n ],\n \"url\": \"https://www.igdb.com/games/persona-5-ultimate-edition\"\n },\n {\n \"id\": 54218,\n \"genres\": [\n 7\n ],\n \"name\": \"Persona 5: Dancing in Starlight\",\n \"platforms\": [\n 46,\n 48\n ],\n \"release_dates\": [\n 133973,\n 152181,\n 161617,\n 161618,\n 161619,\n 161620\n ],\n \"summary\": \"Dancing game featuring characters from Persona 5.\",\n \"total_rating\": 72.99719218018521,\n \"total_rating_count\": 20,\n \"url\": \"https://www.igdb.com/games/persona-5-dancing-in-starlight\"\n },\n {\n \"id\": 116559,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Persona 5 Royal Limited Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 168688,\n 168743,\n 168744,\n 168745\n ],\n \"summary\": \"A limited edition version Persona 5 Royal, called Persona 5 The Royal in Japan.\",\n \"themes\": [\n 1,\n 17,\n 31,\n 43\n ],\n \"total_rating\": 97.8981937602627,\n \"total_rating_count\": 6,\n \"url\": \"https://www.igdb.com/games/persona-5-royal-limited-edition\"\n },\n {\n \"id\": 41866,\n \"name\": \"Persona 5: Take Your Heart - Premium Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 185063,\n 185064\n ],\n \"summary\": \"Persona 5 \u201cTake Your Heart\u201d Premium Edition contains: Game, SteelBook Collectible Case, Soundtrack CD, 4\\\" Morgana Plush, 64-page Hardcover Art Book, School Bag \\u0026 Collectible Outer Box \\n \\nThe 5th numbered game in the series, created by renowned developers - \\n \\nA deep and engaging storyline - Each case will take you one step closer to the truth veiled in darkness...What awaits our heroes: glory or ruin? \\n \\nUnique and interesting dungeons with various tricks and traps await - Overcome various obstacles with graceful Phantom Thief action \\n \\nA stable of talented voice actors have provided English voice-overs while ATLUS\u2019 celebrated localization team offers an English script that provides a faithful and engaging play experience\",\n \"url\": \"https://www.igdb.com/games/persona-5-take-your-heart-premium-edition\"\n }\n]",
"igdb_release_date": 1491264000000,
"summary": "Persona 5, a turn-based JRPG with visual novel elements, follows a high school student with a criminal record for a crime he didn't commit. Soon he meets several characters who share similar fates to him, and discovers a metaphysical realm which allows him and his friends to channel their pent-up frustrations into becoming a group of vigilantes reveling in aesthetics and rebellion while fighting corruption.",
"web-scraper-order": "931"
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Shooter Role-playing (RPG) Adventure",
"game_igdb_id": 25076,
"game_igdb_name": "Red Dead Redemption 2",
"game_platforms": "PC (Microsoft Windows) PlayStation 4 Xbox One Google Stadia",
"game_rating": 93.1083249086,
"game_rating_count": 997,
"game_size": null,
"game_themes": "Action Open world",
"game_title": "Red Dead Redemption II",
"games_found": 1,
"hltb_main": 48.0,
"igdb_json": "[\n {\n \"id\": 25076,\n \"artworks\": [\n 5330,\n 5332,\n 5333,\n 5334,\n 5335,\n 5336,\n 5337,\n 5338,\n 5339,\n 5340,\n 5341,\n 5645\n ],\n \"genres\": [\n 5,\n 12,\n 31\n ],\n \"name\": \"Red Dead Redemption 2\",\n \"platforms\": [\n 6,\n 48,\n 49,\n 170\n ],\n \"release_dates\": [\n 137262,\n 137263,\n 137264,\n 137265,\n 137266,\n 147060,\n 177006,\n 179286\n ],\n \"summary\": \"Developed by the creators of Grand Theft Auto V and Red Dead Redemption, Red Dead Redemption 2 is an epic tale of life in America\u2019s unforgiving heartland. The game\\u0027s vast and atmospheric world will also provide the foundation for a brand new online multiplayer experience.\",\n \"themes\": [\n 1,\n 38\n ],\n \"total_rating\": 93.10832490856174,\n \"total_rating_count\": 997,\n \"url\": \"https://www.igdb.com/games/red-dead-redemption-2\"\n }\n]",
"igdb_release_date": 1540512000000,
"summary": "Developed by the creators of Grand Theft Auto V and Red Dead Redemption, Red Dead Redemption 2 is an epic tale of life in America\u2019s unforgiving heartland. The game's vast and atmospheric world will also provide the foundation for a brand new online multiplayer experience.",
"web-scraper-order": null
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Role-playing (RPG) Adventure",
"game_igdb_id": 36926,
"game_igdb_name": "Monster Hunter: World",
"game_platforms": "PC (Microsoft Windows) PlayStation 4 Xbox One",
"game_rating": 90.0037707715,
"game_rating_count": 279,
"game_size": null,
"game_themes": "Action",
"game_title": "Monster Hunter World",
"games_found": 6,
"hltb_main": 49.0,
"igdb_json": "[\n {\n \"id\": 36926,\n \"artworks\": [\n 1304,\n 1305,\n 1306,\n 1307\n ],\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Monster Hunter: World\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 113479,\n 147904,\n 155500\n ],\n \"summary\": \"Monster Hunter: World sees players take on the role of a hunter that completes various quests to hunt and slay monsters within a lively living and breathing eco-system full of predators\u2026. and prey. In the video you can see some of the creatures you can expect to come across within the New World, the newly discovered continent where Monster Hunter: World is set, including the Great Jagras which has the ability to swallow its prey whole and one of the Monster Hunter series favourites, Rathalos. \\n \\nPlayers are able to utilise survival tools such as the slinger and Scoutfly to aid them in their hunt. By using these skills to their advantage hunters can lure monsters into traps and even pit them against each other in an epic fierce battle. Can our hunter successfully survive the fight and slay the Anjanath? He\u2019ll need to select his weapon choice carefully from 14 different weapon classes and think strategically about how to take the giant foe down. Don\u2019t forget to pack the camouflaging ghillie suit!\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 90.00377077149675,\n \"total_rating_count\": 279,\n \"url\": \"https://www.igdb.com/games/monster-hunter-world\"\n },\n {\n \"id\": 113344,\n \"artworks\": [\n 7618,\n 7619\n ],\n \"genres\": [\n 31\n ],\n \"name\": \"Monster Hunter: World - Iceborne\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 169398,\n 187392,\n 187393\n ],\n \"summary\": \"Let your hunting instinct take you further than ever! \\\"Iceborne\\\" is a massive expansion that picks up after the ending of Monster Hunter: World and opens up the new \\\"master rank!\\\" New quests, monsters, weapons, armor, and story await to take your hunting to the next level.\",\n \"themes\": [\n 17\n ],\n \"total_rating\": 93.63011625172065,\n \"total_rating_count\": 31,\n \"url\": \"https://www.igdb.com/games/monster-hunter-world-iceborne\"\n },\n {\n \"id\": 81355,\n \"name\": \"Monster Hunter: World - Steelbook Edition\",\n \"platforms\": [\n 48,\n 49\n ],\n \"release_dates\": [\n 135332,\n 135333\n ],\n \"url\": \"https://www.igdb.com/games/monster-hunter-world-steelbook-edition\"\n },\n {\n \"id\": 118273,\n \"genres\": [\n 12,\n 25,\n 31\n ],\n \"name\": \"Monster Hunter World: Iceborne Master Edition\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 169393,\n 169394,\n 169395\n ],\n \"summary\": \"Discover breathtaking environments and battle titanic creatures in this collection that includes Monster Hunter: World and its massive expansion, Iceborne. As a hunter, you\\u0027ll take on quests to track \\u0026 slay monsters in a variety of living, breathing habitats. Take down these monsters and use their materials to forge even more powerful weapons and armor, then prepare for even tougher foes as the story unfolds. Each discovery adds new areas, creatures, and drama to the adventure! The Iceborne expansion adds new monsters, gear and abilities to Monster Hunter: World. Picking up right where the base game left off, the Research Commission and your hunter set off to a new polar region, kicking off a brand-new story set in Hoarfrost Reach. 
Conquer these missions alone, or play online with other hunters for epic multiplayer quests.\",\n \"themes\": [\n 1,\n 17\n ],\n \"url\": \"https://www.igdb.com/games/monster-hunter-world-iceborne-master-edition\"\n },\n {\n \"id\": 81289,\n \"name\": \"Monster Hunter: World - Collector\\u0027s Edition\",\n \"platforms\": [\n 48,\n 49\n ],\n \"release_dates\": [\n 135145,\n 135146\n ],\n \"summary\": \"Monster Hunter: World - Collector\\u0027s Edition contains: \\n \\nDeluxe Kit product download code. \\nNergigante figure. \\nMonster Hunter: World special soundtrack. \\nMonster Hunter: World artbook \\\"Monster Designs\\\".\",\n \"url\": \"https://www.igdb.com/games/monster-hunter-world-collectors-edition\"\n },\n {\n \"id\": 81354,\n \"name\": \"Monster Hunter: World - Digital Deluxe Edition\",\n \"platforms\": [\n 48,\n 49\n ],\n \"release_dates\": [\n 135330,\n 135331\n ],\n \"summary\": \"Monster Hunter: World - Digital Deluxe Edition contains: \\n \\nDeluxe Kit contents: \\nSamurai set. \\n3 Additional Gestures: \\\"Zen\\\", \\\"Ninja Star\\\", \\\"Sumo Slap\\\". \\n2 Additional Sticker Sets: \\\"MH All-Stars Set\\\", \\\"Sir Loin Set\\\". \\n1 Additional Face Paint: \\\"Wyvern\\\". \\n1 Additional Hairstyle: \\\"Topknot\\\".\",\n \"url\": \"https://www.igdb.com/games/monster-hunter-world-digital-deluxe-edition\"\n }\n]",
"igdb_release_date": 1516924800000,
"summary": "Monster Hunter: World sees players take on the role of a hunter that completes various quests to hunt and slay monsters within a lively living and breathing eco-system full of predators\u2026. and prey. In the video you can see some of the creatures you can expect to come across within the New World, the newly discovered continent where Monster Hunter: World is set, including the Great Jagras which has the ability to swallow its prey whole and one of the Monster Hunter series favourites, Rathalos. \n \nPlayers are able to utilise survival tools such as the slinger and Scoutfly to aid them in their hunt. By using these skills to their advantage hunters can lure monsters into traps and even pit them against each other in an epic fierce battle. Can our hunter successfully survive the fight and slay the Anjanath? He\u2019ll need to select his weapon choice carefully from 14 different weapon classes and think strategically about how to take the giant foe down. Don\u2019t forget to pack the camouflaging ghillie suit!",
"web-scraper-order": null
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Shooter Adventure",
"game_igdb_id": 6036,
"game_igdb_name": "The Last of Us Remastered",
"game_platforms": "PlayStation 4",
"game_rating": 95.9493647506,
"game_rating_count": 705,
"game_size": null,
"game_themes": "Action Horror Survival Stealth",
"game_title": "The Last of Us Remastered",
"games_found": 2,
"hltb_main": 14.0,
"igdb_json": "[\n {\n \"id\": 6036,\n \"artworks\": [\n 6199,\n 6200,\n 6201\n ],\n \"genres\": [\n 5,\n 31\n ],\n \"name\": \"The Last of Us Remastered\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 106246,\n 106247,\n 150147,\n 150148\n ],\n \"summary\": \"Winner of over 200 game of the year awards, The Last of Us has been remastered for the PlayStation\u00ae4. Now featuring higher resolution character models, improved shadows and lighting, in addition to several other gameplay improvements. \\n \\nAbandoned cities reclaimed by nature. A population decimated by a modern plague. Survivors are killing each other for food, weapons whatever they can get their hands on. Joel, a brutal survivor, and Ellie, a brave young teenage girl who is wise beyond her years, must work together if they hope to survive their journey across the US. \\n \\nThe Last of Us: Remastered includes the Abandoned Territories Map Pack, Reclaimed Territories Map Pack, and the critically acclaimed The Last of Us: Left Behind Single Player campaign. \\n \\nPS4 PRO ENHANCED: \\n- PS4 Pro HD \\n- Dynamic 4K Gaming \\n- Greater Draw Distance \\n- Visual FX \\n- Vivid Textures \\n- Deep Shadows \\n- High Fidelity Assets \\n- HDR\",\n \"themes\": [\n 1,\n 19,\n 21,\n 23\n ],\n \"total_rating\": 95.94936475060166,\n \"total_rating_count\": 705,\n \"url\": \"https://www.igdb.com/games/the-last-of-us-remastered\"\n },\n {\n \"id\": 25984,\n \"name\": \"The Last Of Us Remastered: Agility Survival Skill\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 61048\n ],\n \"summary\": \"Climb, vault, and move while crouching faster than you could before. At level 3 you are nearly impossible to hear while moving. For one price, get this content for your PS4\u2122 and PS3\u2122 systems. Purchase it on one system and it will automatically appear for download on the other system at no additional cost.\",\n \"url\": \"https://www.igdb.com/games/the-last-of-us-remastered-agility-survival-skill\"\n }\n]",
"igdb_release_date": 1406851200000,
"summary": "Winner of over 200 game of the year awards, The Last of Us has been remastered for the PlayStation\u00ae4. Now featuring higher resolution character models, improved shadows and lighting, in addition to several other gameplay improvements. \n \nAbandoned cities reclaimed by nature. A population decimated by a modern plague. Survivors are killing each other for food, weapons whatever they can get their hands on. Joel, a brutal survivor, and Ellie, a brave young teenage girl who is wise beyond her years, must work together if they hope to survive their journey across the US. \n \nThe Last of Us: Remastered includes the Abandoned Territories Map Pack, Reclaimed Territories Map Pack, and the critically acclaimed The Last of Us: Left Behind Single Player campaign. \n \nPS4 PRO ENHANCED: \n- PS4 Pro HD \n- Dynamic 4K Gaming \n- Greater Draw Distance \n- Visual FX \n- Vivid Textures \n- Deep Shadows \n- High Fidelity Assets \n- HDR",
"web-scraper-order": null
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Shooter Adventure",
"game_igdb_id": 7331,
"game_igdb_name": "Uncharted 4: A Thief's End",
"game_platforms": "PlayStation 4",
"game_rating": 92.7313918351,
"game_rating_count": 1306,
"game_size": null,
"game_themes": "Action Fantasy Historical",
"game_title": "Uncharted 4 - A Thief\u2019s End",
"games_found": 3,
"hltb_main": 15.0,
"igdb_json": "[\n {\n \"id\": 7331,\n \"artworks\": [\n 6303,\n 6304,\n 6305,\n 6306,\n 6307\n ],\n \"genres\": [\n 5,\n 31\n ],\n \"name\": \"Uncharted 4: A Thief\\u0027s End\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 47481,\n 106655,\n 106656\n ],\n \"summary\": \"Several years after his last adventure, retired fortune hunter, Nathan Drake, is forced back into the world of thieves. With the stakes much more personal, Drake embarks on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. His greatest adventure will test his physical limits, his resolve, and ultimately what he\\u0027s willing to sacrifice to save the ones he loves.\",\n \"themes\": [\n 1,\n 17,\n 22\n ],\n \"total_rating\": 92.73139183508785,\n \"total_rating_count\": 1306,\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thief-s-end\"\n },\n {\n \"id\": 41874,\n \"name\": \"Uncharted 4: A Thief\\u0027s End Special Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 154696\n ],\n \"summary\": \"The Special Edition of Uncharted 4: A Thief\u2019s End includes: \\n \\n- Collector\u2019s SteelBook Case designed by Alexander \u201cThat Kid Who Draws\u201d Iaccarino \\n- 48 Page Hardcover Mini Art Book by Naughty Dog and Dark Horse \\n- Naughty Dog \\u0026 Pirate Sigil Sticker Sheet \\n- Multiplayer Bonuses: \\n- Multiplayer Outfit \u2013 Heist Drake \\n- Naughty Dog Points \u2013 Redeem the points to unlock new multiplayer content and character upgrades. \\n- Snow Weapon Skin\",\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thiefs-end-special-edition\"\n },\n {\n \"id\": 41879,\n \"genres\": [\n 31\n ],\n \"name\": \"Uncharted 4: A Thief\\u0027s End Libertalia Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 89763\n ],\n \"summary\": \"Uncharted 4: A Thief\\u0027s End is an upcoming American action-adventure video game published by Sony Computer Entertainment and developed by Naughty Dog exclusively for the PlayStation 4. Initially teased at Spike TV\\u0027s PS4 launch event on November 14, 2013, a full in-engine trailer confirmed the title during Sony\\u0027s E3 2014 press conference on June 9, 2014. The game is set to release in 2015. \\n \\nSeveral years after the events of Uncharted 3: Drake\\u0027s Deception, Nathan Drake, who retired as a fortune hunter, will embark on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. Naughty Dog outlined the game\\u0027s plot as \\\"his greatest adventure yet and will test his physical limits, his resolve, and ultimately what he\\u0027s willing to sacrifice to save the ones he loves\\\".\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/uncharted-4-a-thiefs-end-libertalia-collectors-edition\"\n }\n]",
"igdb_release_date": 1462838400000,
"summary": "Several years after his last adventure, retired fortune hunter, Nathan Drake, is forced back into the world of thieves. With the stakes much more personal, Drake embarks on a globe-trotting journey in pursuit of a historical conspiracy behind a fabled pirate treasure. His greatest adventure will test his physical limits, his resolve, and ultimately what he's willing to sacrifice to save the ones he loves.",
"web-scraper-order": null
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Shooter Racing Sport Adventure",
"game_igdb_id": 1020,
"game_igdb_name": "Grand Theft Auto V",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 Xbox 360 PlayStation 4 Xbox One",
"game_rating": 93.3929790512,
"game_rating_count": 2896,
"game_size": null,
"game_themes": "Action Comedy Sandbox Open world",
"game_title": "Grand Theft Auto V",
"games_found": 2,
"hltb_main": 31.0,
"igdb_json": "[\n {\n \"id\": 1020,\n \"artworks\": [\n 2628,\n 2629,\n 2630,\n 2631,\n 2632,\n 2633,\n 2634,\n 2635,\n 2636,\n 5650,\n 5848,\n 5849\n ],\n \"genres\": [\n 5,\n 10,\n 14,\n 31\n ],\n \"name\": \"Grand Theft Auto V\",\n \"platforms\": [\n 6,\n 9,\n 12,\n 48,\n 49\n ],\n \"release_dates\": [\n 20290,\n 20291,\n 20293,\n 20294,\n 27436,\n 103344,\n 103345,\n 133723,\n 133724,\n 136048\n ],\n \"summary\": \"The biggest, most dynamic and most diverse open world ever created, Grand Theft Auto V blends storytelling and gameplay in new ways as players repeatedly jump in and out of the lives of the game\u2019s three lead characters, playing all sides of the game\u2019s interwoven story.\",\n \"themes\": [\n 1,\n 27,\n 33,\n 38\n ],\n \"total_rating\": 93.39297905115895,\n \"total_rating_count\": 2896,\n \"url\": \"https://www.igdb.com/games/grand-theft-auto-v\"\n },\n {\n \"id\": 98077,\n \"genres\": [\n 5,\n 10,\n 14,\n 31\n ],\n \"name\": \"Grand Theft Auto V: Premium Online Edition\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 189410,\n 189411,\n 189412\n ],\n \"summary\": \"The Grand Theft Auto V: Premium Online Edition includes the complete Grand Theft Auto V story experience, free access to the ever evolving Grand Theft Auto Online and all existing gameplay upgrades and content including The Doomsday Heist, Gunrunning, Smuggler\u2019s Run, Bikers and much more. You\u2019ll also get the Criminal Enterprise Starter Pack, the fastest way to jumpstart your criminal empire in Grand Theft Auto Online.\",\n \"themes\": [\n 1,\n 27,\n 38\n ],\n \"url\": \"https://www.igdb.com/games/grand-theft-auto-v-premium-online-edition\"\n }\n]",
"igdb_release_date": 1416268800000,
"summary": "The biggest, most dynamic and most diverse open world ever created, Grand Theft Auto V blends storytelling and gameplay in new ways as players repeatedly jump in and out of the lives of the game\u2019s three lead characters, playing all sides of the game\u2019s interwoven story.",
"web-scraper-order": null
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Role-playing (RPG) Adventure",
"game_igdb_id": 1942,
"game_igdb_name": "The Witcher 3: Wild Hunt",
"game_platforms": "PC (Microsoft Windows) PlayStation 4 Xbox One",
"game_rating": 93.5855808015,
"game_rating_count": 2554,
"game_size": null,
"game_themes": "Action Fantasy Open world",
"game_title": "The Witcher 3: Wild Hunt",
"games_found": 5,
"hltb_main": 51.0,
"igdb_json": "[\n {\n \"id\": 1942,\n \"artworks\": [\n 142,\n 143,\n 144,\n 145,\n 146,\n 147,\n 148,\n 149,\n 150,\n 901,\n 1146,\n 1147,\n 1148\n ],\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"The Witcher 3: Wild Hunt\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 23743,\n 23744,\n 23745\n ],\n \"summary\": \"RPG and sequel to The Witcher 2 (2011), The Witcher 3 follows witcher Geralt of Rivia as he seeks out his former lover and his young subject while intermingling with the political workings of the wartorn Northern Kingdoms. Geralt has to fight monsters and deal with people of all sorts in order to solve complex problems and settle contentious disputes, each ranging from the personal to the world-changing.\",\n \"themes\": [\n 1,\n 17,\n 38\n ],\n \"total_rating\": 93.58558080146204,\n \"total_rating_count\": 2554,\n \"url\": \"https://www.igdb.com/games/the-witcher-3-wild-hunt\"\n },\n {\n \"id\": 22439,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"The Witcher 3: Wild Hunt - Game of the Year Edition\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 55024,\n 55025,\n 107493\n ],\n \"summary\": \"Become a professional monster slayer and embark on an adventure of epic proportions! Upon its release, The Witcher 3: Wild Hunt became an instant classic, claiming over 250 Game of the Year awards. Now you can enjoy this huge, over 100-hour long, open-world adventure along with both its story-driven expansions worth an extra 50 hours of gameplay. This edition includes all additional content - new weapons, armor, companion outfits, new game mode and side quests.\",\n \"themes\": [\n 1,\n 17,\n 22,\n 38\n ],\n \"total_rating\": 99.06680039444299,\n \"total_rating_count\": 305,\n \"url\": \"https://www.igdb.com/games/the-witcher-3-wild-hunt-game-of-the-year-edition\"\n },\n {\n \"id\": 13166,\n \"artworks\": [\n 6234\n ],\n \"genres\": [\n 12\n ],\n \"name\": \"The Witcher 3: Wild Hunt - Blood and Wine\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 50537,\n 50538,\n 50539\n ],\n \"summary\": \"Blood and Wine is the second major expansion to The Witcher 3: Wild Hunt. \\n \\nThe players will accompany Geralt of Rivia to the region of Toussaint, one of the few places in the Witcher universe which has remained largely untouched by the war. The calm, laid-back atmosphere holds a dark and bloody secret that only the White Wolf can uncover; the expansion features brand new items for us to collect on our adventure and also introduces previously unknown characters.\",\n \"themes\": [\n 17,\n 38\n ],\n \"total_rating\": 94.31647388607415,\n \"total_rating_count\": 276,\n \"url\": \"https://www.igdb.com/games/the-witcher-3-wild-hunt-blood-and-wine\"\n },\n {\n \"id\": 12503,\n \"artworks\": [\n 1971\n ],\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"The Witcher 3: Wild Hunt - Hearts of Stone\",\n \"platforms\": [\n 6,\n 45,\n 48,\n 49\n ],\n \"release_dates\": [\n 38585,\n 38586,\n 38587,\n 106356,\n 106357\n ],\n \"summary\": \"Hired by the Merchant of Mirrors, Geralt is tasked with overcoming Olgierd von Everec -- a ruthless bandit captain enchanted with the power of immortality. \\n \\n \\n \\nStep again into the shoes of Geralt of Rivia, a professional monster slayer, this time hired to defeat a ruthless bandit captain, Olgierd von Everec, a man who possesses the power of immortality. 
This expansion to \u201cThe Witcher 3: Wild Hunt\u201d packs over 10 hours of new adventures, introducing new characters, powerful monsters, unique romance and a brand new storyline shaped by your choices.\",\n \"themes\": [\n 1,\n 17,\n 38\n ],\n \"total_rating\": 92.45542708537221,\n \"total_rating_count\": 358,\n \"url\": \"https://www.igdb.com/games/the-witcher-3-wild-hunt-hearts-of-stone\"\n },\n {\n \"id\": 44549,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"The Witcher 3: Wild Hunt Collector\\u0027s Edition\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 99902\n ],\n \"summary\": \"Contents of the Collector\\u0027s Edition: the game, an exclusive CD with the official soundtrack, the official developer-created Witcher Universe: The Compendium, a beautiful detailed map of the in-game world, a set of unique The Witcher 3: Wild Hunt stickers, a 33 x 24 x 25 cm hand-painted Polystone figure of Geralt of Rivia battling a Griffin, an exclusive collector-grade Witcher medallion, a unique SteelBook, a 200-page artbook containing breathtaking art from the game, and huge outer and inner collector\\u0027s boxes in which you can store your Witcher merchandise\",\n \"themes\": [\n 1,\n 17,\n 22,\n 38\n ],\n \"url\": \"https://www.igdb.com/games/the-witcher-3-wild-hunt-collectors-edition\"\n }\n]",
"igdb_release_date": 1431993600000,
"summary": "RPG and sequel to The Witcher 2 (2011), The Witcher 3 follows witcher Geralt of Rivia as he seeks out his former lover and his young subject while intermingling with the political workings of the wartorn Northern Kingdoms. Geralt has to fight monsters and deal with people of all sorts in order to solve complex problems and settle contentious disputes, each ranging from the personal to the world-changing.",
"web-scraper-order": null
}
]
</code>
## The REST API
To get a list of games into the frontend, the client needs to be able to filter by the following attributes:
- Minimum Rating: games above a certain rating as documented in the IGDB.
- Maximum Rating: games below a certain rating as documented in the IGDB, mainly to restrict the number of results.
- Minimum Rating Count: less popular games often have only a few ratings behind their overall rating, which is not representative, so we select games with a minimum number of ratings.
- Maximum Playing Time: we may only want games that don't require excessive playing time (the HowLongToBeat main-story estimate in `hltb_main`).
Note that because the data quality is imperfect, searching on these attributes also excludes games where, for whatever reason, one of these attributes is not set._____no_output_____
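Since the goal is a REST API but the cell below only implements the underlying database search, here is a minimal sketch of how `search_api` (defined in the next cell) could be exposed over HTTP. Flask, the `/games` route, and the query-parameter names are illustrative assumptions, not part of the original application._____no_output_____
<code>
# A hypothetical REST wrapper around search_api (defined in the next cell).
# Flask and the /games route are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/games")
def games():
    args = request.args
    result = search_api(
        min_rating=float(args.get("min_rating", 0)),
        max_rating=float(args.get("max_rating", 100)),
        min_rating_count=int(args.get("min_rating_count", 0)),
        max_playing_time=float(args.get("max_playing_time", 1000)),
    )
    # TinyDB returns a list of dict-like documents, so it serializes directly.
    return jsonify(result)_____no_output_____
</code>
With the server running (`app.run()`), the query in the next cell would correspond to e.g. `GET /games?min_rating=75&min_rating_count=20&max_playing_time=5`.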
<code>
def search_api(min_rating=0, max_rating=100, min_rating_count=0, max_playing_time=1000):
    # Query, game_db and json were set up in earlier cells (TinyDB).
    my_query = Query()
    # Custom predicates via TinyDB's .test(): the isinstance check also
    # excludes records where the attribute is missing (None).
    test_rating = lambda s: isinstance(s, float) and min_rating <= s <= max_rating
    test_time = lambda s: isinstance(s, float) and s <= max_playing_time
    return game_db.search((my_query.game_rating.test(test_rating)) &
                          (my_query.game_rating_count >= min_rating_count) &
                          (my_query.hltb_main.test(test_time)))

query_result = search_api(min_rating=75, min_rating_count=20, max_playing_time=5)
print("Number of results: ", len(query_result))
print(json.dumps(query_result, indent=4, sort_keys=True))Number of results:  23
[
{
"game_buy_date": " 25/6/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Adventure Indie",
"game_igdb_id": 9730,
"game_igdb_name": "Firewatch",
"game_platforms": "Linux PC (Microsoft Windows) Mac PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 79.6556401824,
"game_rating_count": 618,
"game_size": " 3.16 GB ",
"game_themes": "Action Drama Open world Mystery",
"game_title": "Firewatch",
"games_found": 1,
"hltb_main": 4.0,
"igdb_json": "[\n {\n \"id\": 9730,\n \"artworks\": [\n 8852\n ],\n \"genres\": [\n 31,\n 32\n ],\n \"name\": \"Firewatch\",\n \"platforms\": [\n 3,\n 6,\n 14,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 38731,\n 38733,\n 38734,\n 136016,\n 152656,\n 152657,\n 152658,\n 161671\n ],\n \"summary\": \"Firewatch is a mystery set in the woods of Wyoming, where your only emotional lifeline is the person on the other end of a handheld radio. You play as a man named Henry who has retreated from his messy life to work as a fire lookout in the wilderness. Perched high atop a mountain, it\u2019s your job to look for smoke and keep the wilderness safe. An especially hot and dry summer has everyone on edge. Your supervisor, a woman named Delilah, is available to you at all times over a small, handheld radio -- and is your only contact with the world you\u2019ve left behind. But when something strange draws you out of your lookout tower and into the world, you\u2019ll explore a wild and unknown environment, facing questions and making interpersonal choices that can build or destroy the only meaningful relationship you have.\",\n \"themes\": [\n 1,\n 31,\n 38,\n 43\n ],\n \"total_rating\": 79.6556401824191,\n \"total_rating_count\": 618,\n \"url\": \"https://www.igdb.com/games/firewatch\"\n }\n]",
"igdb_release_date": 1454976000000,
"summary": "Firewatch is a mystery set in the woods of Wyoming, where your only emotional lifeline is the person on the other end of a handheld radio. You play as a man named Henry who has retreated from his messy life to work as a fire lookout in the wilderness. Perched high atop a mountain, it\u2019s your job to look for smoke and keep the wilderness safe. An especially hot and dry summer has everyone on edge. Your supervisor, a woman named Delilah, is available to you at all times over a small, handheld radio -- and is your only contact with the world you\u2019ve left behind. But when something strange draws you out of your lookout tower and into the world, you\u2019ll explore a wild and unknown environment, facing questions and making interpersonal choices that can build or destroy the only meaningful relationship you have.",
"web-scraper-order": "1069"
},
{
"game_buy_date": " 6/4/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Simulator Adventure Indie Arcade",
"game_igdb_id": 12520,
"game_igdb_name": "Lovers in a Dangerous Spacetime",
"game_platforms": "Linux PC (Microsoft Windows) Mac PlayStation Network PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 78.827974555,
"game_rating_count": 48,
"game_size": " 900.01 MB ",
"game_themes": "Action Science fiction",
"game_title": "Lovers in a Dangerous Spacetime",
"games_found": 1,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 12520,\n \"genres\": [\n 13,\n 31,\n 32,\n 33\n ],\n \"name\": \"Lovers in a Dangerous Spacetime\",\n \"platforms\": [\n 3,\n 6,\n 14,\n 45,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 35919,\n 35920,\n 35921,\n 35922,\n 44935,\n 107854,\n 107855,\n 113567,\n 133848,\n 136127\n ],\n \"summary\": \"Lovers in a Dangerous Spacetime is a frantic 1- or 2-player couch co-op action space shooter set in a massive neon battleship. Only through teamwork can you triumph over the evil forces of Anti-Love, rescue kidnapped space-bunnies, and avoid a vacuumy demise. Deep space is a dangerous place, but you don\u2019t have to face it alone!\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 78.82797455503655,\n \"total_rating_count\": 48,\n \"url\": \"https://www.igdb.com/games/lovers-in-a-dangerous-spacetime\"\n }\n]",
"igdb_release_date": 1454976000000,
"summary": "Lovers in a Dangerous Spacetime is a frantic 1- or 2-player couch co-op action space shooter set in a massive neon battleship. Only through teamwork can you triumph over the evil forces of Anti-Love, rescue kidnapped space-bunnies, and avoid a vacuumy demise. Deep space is a dangerous place, but you don\u2019t have to face it alone!",
"web-scraper-order": "1112"
},
{
"game_buy_date": " 12/3/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle Adventure Indie",
"game_igdb_id": 7342,
"game_igdb_name": "INSIDE",
"game_platforms": "PC (Microsoft Windows) Mac iOS PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 89.7611687332,
"game_rating_count": 866,
"game_size": " 2.08 GB ",
"game_themes": "Action Science fiction Horror Drama",
"game_title": "INSIDE",
"games_found": 2,
"hltb_main": 3.0,
"igdb_json": "[\n {\n \"id\": 7342,\n \"genres\": [\n 8,\n 9,\n 31,\n 32\n ],\n \"name\": \"INSIDE\",\n \"platforms\": [\n 6,\n 14,\n 39,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 113871,\n 113872,\n 113873,\n 147713,\n 154372,\n 154373,\n 218742\n ],\n \"summary\": \"An atmospheric 2D side-scroller in which, hunted and alone, a boy finds himself drawn into the center of a dark project and struggles to preserve his identity.\",\n \"themes\": [\n 1,\n 18,\n 19,\n 31\n ],\n \"total_rating\": 89.76116873316846,\n \"total_rating_count\": 866,\n \"url\": \"https://www.igdb.com/games/inside\"\n },\n {\n \"id\": 9687,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Little Devil Inside\",\n \"platforms\": [\n 6,\n 48,\n 49,\n 130,\n 167\n ],\n \"release_dates\": [\n 215937,\n 215938,\n 215939,\n 215940,\n 215941\n ],\n \"summary\": \"Little Devil Inside is a truly engaging 3D action adventure RPG game where you are thrown into a surreal but somewhat familiar setting with men, creatures and monsters to interact with, learn and hunt \u2013 survive and discover the world that exists beyond.\",\n \"themes\": [\n 1,\n 17\n ],\n \"url\": \"https://www.igdb.com/games/little-devil-inside\"\n }\n]",
"igdb_release_date": 1471910400000,
"summary": "An atmospheric 2D side-scroller in which, hunted and alone, a boy finds himself drawn into the center of a dark project and struggles to preserve his identity.",
"web-scraper-order": "1118"
},
{
"game_buy_date": " 12/3/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle Adventure Indie",
"game_igdb_id": 1331,
"game_igdb_name": "Limbo",
"game_platforms": "Linux PC (Microsoft Windows) PlayStation 3 Xbox 360 Mac Android iOS PlayStation Vita PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 82.8255222052,
"game_rating_count": 879,
"game_size": " 168.82 MB ",
"game_themes": "Action Horror",
"game_title": "LIMBO",
"games_found": 2,
"hltb_main": 3.0,
"igdb_json": "[\n {\n \"id\": 1331,\n \"genres\": [\n 8,\n 9,\n 31,\n 32\n ],\n \"name\": \"Limbo\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 12,\n 14,\n 34,\n 39,\n 46,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 107789,\n 107790,\n 107791,\n 146865,\n 147822,\n 154368,\n 154370,\n 154371,\n 164518,\n 164519,\n 186469,\n 186470,\n 186471,\n 186472,\n 186473,\n 186474,\n 186490\n ],\n \"summary\": \"Limbo is a black and white puzzle-platforming adventure. Play the role of a young boy traveling through an eerie and treacherous world in an attempt to discover the fate of his sister. Limbo\\u0027s design is an example of gaming as an art form. Short and sweet, doesn\\u0027t overstay its welcome. Puzzles are challenging and fun, not illogical and frustrating.\",\n \"themes\": [\n 1,\n 19\n ],\n \"total_rating\": 82.82552220515444,\n \"total_rating_count\": 879,\n \"url\": \"https://www.igdb.com/games/limbo\"\n },\n {\n \"id\": 27159,\n \"genres\": [\n 4\n ],\n \"name\": \"Touhou Shinpiroku\uff5eUrban Legend in Limbo\",\n \"platforms\": [\n 6,\n 48\n ],\n \"release_dates\": [\n 117470,\n 123457\n ],\n \"summary\": \"The 5th fighting game in the Touhou Project and 14.5th game overall.\",\n \"url\": \"https://www.igdb.com/games/touhou-shinpiroku-urban-legend-in-limbo\"\n }\n]",
"igdb_release_date": 1424822400000,
"summary": "Limbo is a black and white puzzle-platforming adventure. Play the role of a young boy traveling through an eerie and treacherous world in an attempt to discover the fate of his sister. Limbo's design is an example of gaming as an art form. Short and sweet, doesn't overstay its welcome. Puzzles are challenging and fun, not illogical and frustrating.",
"web-scraper-order": "1119"
},
{
"game_buy_date": " 8/2/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Adventure",
"game_igdb_id": 7330,
"game_igdb_name": "LittleBigPlanet 3",
"game_platforms": "PlayStation 3 PlayStation 4",
"game_rating": 76.6274818742,
"game_rating_count": 76,
"game_size": " 15.68 GB ",
"game_themes": "Sandbox",
"game_title": "LittleBigPlanet 3",
"games_found": 1,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 7330,\n \"genres\": [\n 8,\n 31\n ],\n \"name\": \"LittleBigPlanet 3\",\n \"platforms\": [\n 9,\n 48\n ],\n \"release_dates\": [\n 20736,\n 20738,\n 20739,\n 107807,\n 107808,\n 133844\n ],\n \"summary\": \"In LittleBigPlanet 3, explore a world filled with creativity as you explore all corners of the Imagisphere, meet the inhabitants of the mysterious planet Bunkum and face the nefarious Newton. Discover endless surprises that the LittleBigPlanet Community have created and shared for you to enjoy, with new levels and games to play every day. Then if you\\u0027re feeling inspired, flex your creative muscles with the powerful and intuitive customization tools, to bring your own imagination to life in LittleBigPlanet 3.\",\n \"themes\": [\n 33\n ],\n \"total_rating\": 76.62748187417435,\n \"total_rating_count\": 76,\n \"url\": \"https://www.igdb.com/games/littlebigplanet-3\"\n }\n]",
"igdb_release_date": 1417132800000,
"summary": "In LittleBigPlanet 3, explore a world filled with creativity as you explore all corners of the Imagisphere, meet the inhabitants of the mysterious planet Bunkum and face the nefarious Newton. Discover endless surprises that the LittleBigPlanet Community have created and shared for you to enjoy, with new levels and games to play every day. Then if you're feeling inspired, flex your creative muscles with the powerful and intuitive customization tools, to bring your own imagination to life in LittleBigPlanet 3.",
"web-scraper-order": "1129"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Point-and-click Adventure",
"game_igdb_id": 15862,
"game_igdb_name": "Day of the Tentacle Remastered",
"game_platforms": "PC (Microsoft Windows) Mac iOS PlayStation Network PlayStation 4 Xbox One",
"game_rating": 81.0952501079,
"game_rating_count": 75,
"game_size": " 1.79 GB ",
"game_themes": "Fantasy Science fiction Historical Comedy",
"game_title": "Day of the Tentacle Remastered",
"games_found": 1,
"hltb_main": 4.0,
"igdb_json": "[\n {\n \"id\": 15862,\n \"artworks\": [\n 8887\n ],\n \"genres\": [\n 2,\n 31\n ],\n \"name\": \"Day of the Tentacle Remastered\",\n \"platforms\": [\n 6,\n 14,\n 39,\n 45,\n 48,\n 49\n ],\n \"release_dates\": [\n 49179,\n 49180,\n 102383,\n 134543,\n 150413,\n 166069,\n 214601\n ],\n \"summary\": \"Originally released by LucasArts in 1993 as a sequel to Ron Gilbert\u2019s ground \\nbreaking Maniac Mansion, Day of the Tentacle is a mind-bending, time \\ntravel, cartoon puzzle adventure game in which three unlikely friends work \\ntogether to prevent an evil mutated purple tentacle from taking over the \\nworld!\\n\\nNow, over twenty years later, Day of the Tentacle is back in a remastered \\nedition that features all new hand-drawn, high resolution artwork, with \\nremastered audio, music and sound effects (which the original 90s marketing \\nblurb described as \u2018zany!\u2019)\\n\\nPlayers are able to switch back and forth between classic and remastered \\nmodes, and mix and match audio, graphics and user interface to their heart\\u0027s \\ndesire. We\u2019ve also included a concept art browser, and recorded a \\ncommentary track with the game\u2019s original creators Tim Schafer, Dave \\nGrossman, Larry Ahern, Peter Chan, Peter McConnell and Clint Bajakian.\\n \\nDay of the Tentacle was Tim Schafer\u2019s \ufb01rst game as co-project lead, and a \\nmuch beloved cult classic! This special edition has been lovingly restored \\nand remade with the care and attention that can only come from involving \\nthe game\\u0027s original creators. It\u2019s coming to Windows, OSX, PlayStation 4 \\nand PlayStation Vita early next year!\",\n \"themes\": [\n 17,\n 18,\n 22,\n 27\n ],\n \"total_rating\": 81.09525010789625,\n \"total_rating_count\": 75,\n \"url\": \"https://www.igdb.com/games/day-of-the-tentacle-remastered\"\n }\n]",
"igdb_release_date": 1458604800000,
"summary": "Originally released by LucasArts in 1993 as a sequel to Ron Gilbert\u2019s ground \nbreaking Maniac Mansion, Day of the Tentacle is a mind-bending, time \ntravel, cartoon puzzle adventure game in which three unlikely friends work \ntogether to prevent an evil mutated purple tentacle from taking over the \nworld!\n\nNow, over twenty years later, Day of the Tentacle is back in a remastered \nedition that features all new hand-drawn, high resolution artwork, with \nremastered audio, music and sound effects (which the original 90s marketing \nblurb described as \u2018zany!\u2019)\n\nPlayers are able to switch back and forth between classic and remastered \nmodes, and mix and match audio, graphics and user interface to their heart's \ndesire. We\u2019ve also included a concept art browser, and recorded a \ncommentary track with the game\u2019s original creators Tim Schafer, Dave \nGrossman, Larry Ahern, Peter Chan, Peter McConnell and Clint Bajakian.\n \nDay of the Tentacle was Tim Schafer\u2019s \ufb01rst game as co-project lead, and a \nmuch beloved cult classic! This special edition has been lovingly restored \nand remade with the care and attention that can only come from involving \nthe game's original creators. It\u2019s coming to Windows, OSX, PlayStation 4 \nand PlayStation Vita early next year!",
"web-scraper-order": "1152"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Adventure Indie",
"game_igdb_id": 14740,
"game_igdb_name": "The Deadly Tower of Monsters",
"game_platforms": "PC (Microsoft Windows) PlayStation Network PlayStation 4",
"game_rating": 75.8169185565,
"game_rating_count": 39,
"game_size": " 1.92 GB ",
"game_themes": "Action Science fiction Comedy",
"game_title": "The Deadly Tower of Monsters",
"games_found": 1,
"hltb_main": 4.0,
"igdb_json": "[\n {\n \"id\": 14740,\n \"genres\": [\n 5,\n 31,\n 32\n ],\n \"name\": \"The Deadly Tower of Monsters\",\n \"platforms\": [\n 6,\n 45,\n 48\n ],\n \"release_dates\": [\n 51297,\n 51298,\n 106149,\n 106150,\n 118604\n ],\n \"summary\": \"Scarlet Nova - the space diva with a checkered past - and introducing...the robot! Amazing adventures! Strange drama and more await in the deadly tower of monsters. The action-filled adventure game for personal computers and the Playstation 4 entertainment system.\\n\\nMarooned on the desolate planet Gravoria, the trio of adventurers find their only method of escape awaits at the top of a tower - a tower renowned for its deadliness and its propensity to be filled with monsters! Lizard-men! UFO\u2019s! Nuclear ants! Dinosaurs! Robot monkeys! All of these horrors await our intrepid adventurers as they ascend the deadly tower. Will they avoid their own doom? Find out this fall, only in theaters on PC and PS4!\",\n \"themes\": [\n 1,\n 18,\n 27\n ],\n \"total_rating\": 75.8169185564509,\n \"total_rating_count\": 39,\n \"url\": \"https://www.igdb.com/games/the-deadly-tower-of-monsters\"\n }\n]",
"igdb_release_date": 1453161600000,
"summary": "Scarlet Nova - the space diva with a checkered past - and introducing...the robot! Amazing adventures! Strange drama and more await in the deadly tower of monsters. The action-filled adventure game for personal computers and the Playstation 4 entertainment system.\n\nMarooned on the desolate planet Gravoria, the trio of adventurers find their only method of escape awaits at the top of a tower - a tower renowned for its deadliness and its propensity to be filled with monsters! Lizard-men! UFO\u2019s! Nuclear ants! Dinosaurs! Robot monkeys! All of these horrors await our intrepid adventurers as they ascend the deadly tower. Will they avoid their own doom? Find out this fall, only in theaters on PC and PS4!",
"web-scraper-order": "1163"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Adventure Indie",
"game_igdb_id": 7405,
"game_igdb_name": "Everybody's Gone to the Rapture",
"game_platforms": "PC (Microsoft Windows) PlayStation Network PlayStation 4",
"game_rating": 75.7491029859,
"game_rating_count": 133,
"game_size": " 6.04 GB ",
"game_themes": "Science fiction Drama",
"game_title": "Everybody's Gone To The Rapture\u2122",
"games_found": 1,
"hltb_main": 4.0,
"igdb_json": "[\n {\n \"id\": 7405,\n \"genres\": [\n 31,\n 32\n ],\n \"name\": \"Everybody\\u0027s Gone to the Rapture\",\n \"platforms\": [\n 6,\n 45,\n 48\n ],\n \"release_dates\": [\n 32210,\n 48857,\n 102903,\n 102904\n ],\n \"summary\": \"For the last twelve months, we\u2019ve had our heads down working hard on Everybody\u2019s Gone to the Rapture and it\u2019s really exciting to be able to share some more information with you as well as a new trailer.\\n\\nIf you already know The Chinese Room, you\u2019ll know that we make story-driven games. Creating a rich, deep world with strong drama and exceptional production values is key to what we\u2019re all about. Rapture is set in a remote valley in June 1984 and is a story about people and how they live with each other. But it\u2019s also about the end of the world. \\n\\nRapture is inspired by the fiction of John Wyndham, J. G. Ballard, John Christopher and other authors who deal with ordinary people in extraordinary circumstances. There\u2019s a very particular English feel that we wanted to capture in the game, a combination of the epic and the intimate. Rapture also came from our obsession with post-apocalyptic gaming, and the simple idea that whilst we normally play as the hero, in reality, most of us would be the piles of ash and bone littering the game world. That\u2019s an interesting place to start telling a story.\\n\\nOur approach is to create a game that you can utterly immerse yourself in. Yaughton Valley, where Rapture takes place, is a living, breathing world. The world of Rapture is not just a backdrop; it\u2019s a character in its own right. It\u2019s great working with PS4 as its processing power makes a game like this possible for a team our size.\\n\\nThe game is all about discovery. It\u2019s open-world so you have the freedom to explore wherever you like, visiting areas in an order you define, and the story is written to allow this whilst making sure every player has a strong dramatic experience. It\u2019s a type of storytelling that is completely unique to games. The choices you make as a player have a direct impact on how you understand the story \u2013 the more you explore and interact, the deeper you are drawn into Rapture\u2019s world.\",\n \"themes\": [\n 18,\n 31\n ],\n \"total_rating\": 75.7491029858802,\n \"total_rating_count\": 133,\n \"url\": \"https://www.igdb.com/games/everybody-s-gone-to-the-rapture\"\n }\n]",
"igdb_release_date": 1439251200000,
"summary": "For the last twelve months, we\u2019ve had our heads down working hard on Everybody\u2019s Gone to the Rapture and it\u2019s really exciting to be able to share some more information with you as well as a new trailer.\n\nIf you already know The Chinese Room, you\u2019ll know that we make story-driven games. Creating a rich, deep world with strong drama and exceptional production values is key to what we\u2019re all about. Rapture is set in a remote valley in June 1984 and is a story about people and how they live with each other. But it\u2019s also about the end of the world. \n\nRapture is inspired by the fiction of John Wyndham, J. G. Ballard, John Christopher and other authors who deal with ordinary people in extraordinary circumstances. There\u2019s a very particular English feel that we wanted to capture in the game, a combination of the epic and the intimate. Rapture also came from our obsession with post-apocalyptic gaming, and the simple idea that whilst we normally play as the hero, in reality, most of us would be the piles of ash and bone littering the game world. That\u2019s an interesting place to start telling a story.\n\nOur approach is to create a game that you can utterly immerse yourself in. Yaughton Valley, where Rapture takes place, is a living, breathing world. The world of Rapture is not just a backdrop; it\u2019s a character in its own right. It\u2019s great working with PS4 as its processing power makes a game like this possible for a team our size.\n\nThe game is all about discovery. It\u2019s open-world so you have the freedom to explore wherever you like, visiting areas in an order you define, and the story is written to allow this whilst making sure every player has a strong dramatic experience. It\u2019s a type of storytelling that is completely unique to games. The choices you make as a player have a direct impact on how you understand the story \u2013 the more you explore and interact, the deeper you are drawn into Rapture\u2019s world.",
"web-scraper-order": "1181"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Point-and-click Adventure Indie",
"game_igdb_id": 1906,
"game_igdb_name": "Gone Home",
"game_platforms": "Linux PC (Microsoft Windows) Mac iOS PlayStation Network PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 79.6114644805,
"game_rating_count": 362,
"game_size": " 3.27 GB ",
"game_themes": "Drama Mystery",
"game_title": "Gone Home",
"games_found": 1,
"hltb_main": 2.0,
"igdb_json": "[\n {\n \"id\": 1906,\n \"artworks\": [\n 8870\n ],\n \"genres\": [\n 2,\n 31,\n 32\n ],\n \"name\": \"Gone Home\",\n \"platforms\": [\n 3,\n 6,\n 14,\n 39,\n 45,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 20755,\n 20757,\n 43952,\n 43953,\n 53808,\n 103317,\n 103318,\n 147117,\n 157553,\n 157554,\n 161898\n ],\n \"summary\": \"Gone Home is a conceptual simulation game somewhat themed after classic adventure titles where how you interact with space around your characters determines how far you progress in the game. This title is all about exploring a modern, residential locale, and discovering the story of what happened there by investigating a deeply interactive gameworld. The development team aims to push for true simulation,both in the sense of the physics system but also in allowing the player to open any door or drawer they\\u0027d logically be able to and examine what\\u0027s inside, down to small details.\",\n \"themes\": [\n 31,\n 43\n ],\n \"total_rating\": 79.61146448054691,\n \"total_rating_count\": 362,\n \"url\": \"https://www.igdb.com/games/gone-home\"\n }\n]",
"igdb_release_date": 1455235200000,
"summary": "Gone Home is a conceptual simulation game somewhat themed after classic adventure titles where how you interact with space around your characters determines how far you progress in the game. This title is all about exploring a modern, residential locale, and discovering the story of what happened there by investigating a deeply interactive gameworld. The development team aims to push for true simulation,both in the sense of the physics system but also in allowing the player to open any door or drawer they'd logically be able to and examine what's inside, down to small details.",
"web-scraper-order": "1182"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Puzzle Adventure",
"game_igdb_id": 8352,
"game_igdb_name": "The Unfinished Swan",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 Mac iOS PlayStation Vita PlayStation 4",
"game_rating": 78.0818722535,
"game_rating_count": 62,
"game_size": " 1.4 GB ",
"game_themes": null,
"game_title": "The Unfinished Swan",
"games_found": 1,
"hltb_main": 2.0,
"igdb_json": "[\n {\n \"id\": 8352,\n \"genres\": [\n 9,\n 31\n ],\n \"name\": \"The Unfinished Swan\",\n \"platforms\": [\n 6,\n 9,\n 14,\n 39,\n 46,\n 48\n ],\n \"release_dates\": [\n 134251,\n 134252,\n 209914,\n 209915,\n 209916,\n 209917,\n 209918\n ],\n \"summary\": \"The Unfinished Swan is a game about exploring the unknown. \\n \\nThe player is a young boy chasing after a swan who has wandered off into a surreal, unfinished kingdom. The game begins in a completely white space where players can throw paint to splatter their surroundings and reveal the world around them.\",\n \"total_rating\": 78.08187225350545,\n \"total_rating_count\": 62,\n \"url\": \"https://www.igdb.com/games/the-unfinished-swan\"\n }\n]",
"igdb_release_date": 1414454400000,
"summary": "The Unfinished Swan is a game about exploring the unknown. \n \nThe player is a young boy chasing after a swan who has wandered off into a surreal, unfinished kingdom. The game begins in a completely white space where players can throw paint to splatter their surroundings and reveal the world around them.",
"web-scraper-order": "1187"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Music Platform Puzzle Strategy",
"game_igdb_id": 7729,
"game_igdb_name": "Sound Shapes",
"game_platforms": "PlayStation 3 PlayStation Network PlayStation Vita PlayStation 4",
"game_rating": 80.509865238,
"game_rating_count": 40,
"game_size": " 1.29 GB ",
"game_themes": "Action Survival",
"game_title": "Sound Shapes",
"games_found": 1,
"hltb_main": 3.0,
"igdb_json": "[\n {\n \"id\": 7729,\n \"genres\": [\n 7,\n 8,\n 9,\n 15\n ],\n \"name\": \"Sound Shapes\",\n \"platforms\": [\n 9,\n 45,\n 46,\n 48\n ],\n \"release_dates\": [\n 20264,\n 20265,\n 20266,\n 105717,\n 105718,\n 134127,\n 134128,\n 134763,\n 134764\n ],\n \"summary\": \"Play, Compose and Share in a unique take on the classic side-scrolling platformer where your actions make the music. \\n \\nEqual parts instrument and game, Sound Shapes\u2122 gives everyone the ability to make music. Play through a unique campaign that fuses music and artwork into a classic 2D platformer, featuring artwork by Pixeljam, Capy, Superbrothers and more, with music by I Am Robot and Proud, Jim Guthrie and Deadmau5. Create your own unique musical levels with all of the campaign content and share with the world. Sound Shapes creates an ever-changing musical community for everyone to enjoy at home or on the go.\",\n \"themes\": [\n 1,\n 21\n ],\n \"total_rating\": 80.50986523797675,\n \"total_rating_count\": 40,\n \"url\": \"https://www.igdb.com/games/sound-shapes\"\n }\n]",
"igdb_release_date": 1385683200000,
"summary": "Play, Compose and Share in a unique take on the classic side-scrolling platformer where your actions make the music. \n \nEqual parts instrument and game, Sound Shapes\u2122 gives everyone the ability to make music. Play through a unique campaign that fuses music and artwork into a classic 2D platformer, featuring artwork by Pixeljam, Capy, Superbrothers and more, with music by I Am Robot and Proud, Jim Guthrie and Deadmau5. Create your own unique musical levels with all of the campaign content and share with the world. Sound Shapes creates an ever-changing musical community for everyone to enjoy at home or on the go.",
"web-scraper-order": "1189"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle Adventure Indie",
"game_igdb_id": 5892,
"game_igdb_name": "The Swapper",
"game_platforms": "Linux PC (Microsoft Windows) PlayStation 3 Mac Wii U PlayStation Network PlayStation Vita PlayStation 4 Xbox One",
"game_rating": 83.5059281324,
"game_rating_count": 133,
"game_size": " 410.52 MB ",
"game_themes": "Action Science fiction",
"game_title": "The Swapper",
"games_found": 1,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 5892,\n \"genres\": [\n 8,\n 9,\n 31,\n 32\n ],\n \"name\": \"The Swapper\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 41,\n 45,\n 46,\n 48,\n 49\n ],\n \"release_dates\": [\n 14423,\n 32933,\n 32934,\n 32935,\n 32936,\n 32937,\n 32938,\n 32939,\n 32940,\n 32941,\n 32942,\n 32943,\n 106313,\n 106314,\n 136381,\n 144325\n ],\n \"summary\": \"The Swapper is a short puzzle platformer where you must complete every puzzle and collect 124 orbs, in groups of 3 and 9 later on, to complete the game.\\n\\nThe game has a tool which lets you create up to 4 clones and switch between them as long as you have a clear line of sight.\\n\\nThe main obstacles for the puzzles are 3 kinds of lights that interfere with the tool in different ways to make the puzzles harder.\\n\\nAchievements/Trophies are tied to hidden consoles instead of story progress so a guide will most likely be needed to find all 10.\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 83.50592813244465,\n \"total_rating_count\": 133,\n \"url\": \"https://www.igdb.com/games/the-swapper\"\n }\n]",
"igdb_release_date": 1407283200000,
"summary": "The Swapper is a short puzzle platformer where you must complete every puzzle and collect 124 orbs, in groups of 3 and 9 later on, to complete the game.\n\nThe game has a tool which lets you create up to 4 clones and switch between them as long as you have a clear line of sight.\n\nThe main obstacles for the puzzles are 3 kinds of lights that interfere with the tool in different ways to make the puzzles harder.\n\nAchievements/Trophies are tied to hidden consoles instead of story progress so a guide will most likely be needed to find all 10.",
"web-scraper-order": "1194"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Platform Adventure Indie",
"game_igdb_id": 8879,
"game_igdb_name": "Super Time Force Ultra",
"game_platforms": "PC (Microsoft Windows) Xbox 360 PlayStation Network PlayStation Vita PlayStation 4 Xbox One",
"game_rating": 85.2461347893,
"game_rating_count": 23,
"game_size": " 640.55 MB ",
"game_themes": "Action",
"game_title": "Super Time Force Ultra",
"games_found": 1,
"hltb_main": 4.0,
"igdb_json": "[\n {\n \"id\": 8879,\n \"genres\": [\n 5,\n 8,\n 31,\n 32\n ],\n \"name\": \"Super Time Force Ultra\",\n \"platforms\": [\n 6,\n 12,\n 45,\n 46,\n 48,\n 49\n ],\n \"release_dates\": [\n 29152,\n 29153,\n 29154,\n 43003,\n 43004,\n 105958,\n 105959\n ],\n \"summary\": \"Super Time Force Ultra is an action-packed platformer with a time-travelling twist! You\u2019re in control of time itself, bending and stretching it to your advantage on the battlefield. Rewind time and choose when to jump back into the action, teaming-up with your past selves in a unique single-player co-op experience! Take control of up to 16 unique characters, and battle across 6 different time periods, from the long-ago past to the far-away future.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 85.24613478932031,\n \"total_rating_count\": 23,\n \"url\": \"https://www.igdb.com/games/super-time-force-ultra\"\n }\n]",
"igdb_release_date": 1441152000000,
"summary": "Super Time Force Ultra is an action-packed platformer with a time-travelling twist! You\u2019re in control of time itself, bending and stretching it to your advantage on the battlefield. Rewind time and choose when to jump back into the action, teaming-up with your past selves in a unique single-player co-op experience! Take control of up to 16 unique characters, and battle across 6 different time periods, from the long-ago past to the far-away future.",
"web-scraper-order": "1197"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Sport Indie",
"game_igdb_id": 9523,
"game_igdb_name": "OlliOlli2: Welcome to Olliwood",
"game_platforms": "Linux PC (Microsoft Windows) PlayStation 3 Mac PlayStation Vita PlayStation 4 Xbox One",
"game_rating": 78.3917248127,
"game_rating_count": 33,
"game_size": " 139 MB ",
"game_themes": "Action",
"game_title": "OlliOlli2: Welcome to Olliwood",
"games_found": 1,
"hltb_main": 3.0,
"igdb_json": "[\n {\n \"id\": 9523,\n \"genres\": [\n 8,\n 14,\n 32\n ],\n \"name\": \"OlliOlli2: Welcome to Olliwood\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 46,\n 48,\n 49\n ],\n \"release_dates\": [\n 28265,\n 28266,\n 28267,\n 130297,\n 183942,\n 183943,\n 183944\n ],\n \"summary\": \"Drop in to Olliwood and prepare for finger-flippin\u2019 mayhem in this follow up to cult skateboarding smashOlliOlli. The iconic skater is going all green-screen with a stunning new look, plucking you from the street and dropping you squarely in the middle of the big screen\u2019s most bodacious cinematic locations.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 78.39172481274615,\n \"total_rating_count\": 33,\n \"url\": \"https://www.igdb.com/games/olliolli2-welcome-to-olliwood\"\n }\n]",
"igdb_release_date": 1425427200000,
"summary": "Drop in to Olliwood and prepare for finger-flippin\u2019 mayhem in this follow up to cult skateboarding smashOlliOlli. The iconic skater is going all green-screen with a stunning new look, plucking you from the street and dropping you squarely in the middle of the big screen\u2019s most bodacious cinematic locations.",
"web-scraper-order": "1206"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Hack and slash/Beat 'em up Adventure Indie",
"game_igdb_id": 17026,
"game_igdb_name": "Furi",
"game_platforms": "PC (Microsoft Windows) PlayStation Network PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 79.0436333285,
"game_rating_count": 142,
"game_size": " 3.65 GB ",
"game_themes": "Action Science fiction",
"game_title": "Furi",
"games_found": 10,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 17026,\n \"genres\": [\n 5,\n 25,\n 31,\n 32\n ],\n \"name\": \"Furi\",\n \"platforms\": [\n 6,\n 45,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 53089,\n 53090,\n 60514,\n 103218,\n 103219,\n 116252,\n 131745,\n 136030\n ],\n \"summary\": \"Fight your way free in our frenzied all-boss fighter, and discover what\u2019s waiting behind the last gate. Furi is all about the tension of one-on-one fights against deadly adversaries. It\u2019s an intense, ultra-responsive hack-and-slash with a unique mix of fast-paced sword fighting and dual-stick shooting. Each of the formidable guardians \u2014designed by Afro Samurai creator Takashi Okazaki\u2014 has a unique and surprising combat style that requires focus and skill to defeat. The high-energy action gets a boost from an explosive soundtrack composed by electro musicians including Carpenter Brut, who created the trailer\u2019s theme.\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 79.0436333285202,\n \"total_rating_count\": 142,\n \"url\": \"https://www.igdb.com/games/furi\"\n },\n {\n \"id\": 54844,\n \"artworks\": [\n 4979,\n 4980,\n 7955,\n 7956,\n 7957\n ],\n \"genres\": [\n 5,\n 32\n ],\n \"name\": \"Ion Fury\",\n \"platforms\": [\n 3,\n 6,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 172628,\n 172629,\n 195813,\n 195814,\n 195815\n ],\n \"summary\": \"Shelly \u201cBombshell\u201d Harrison earned her nickname as a bomb disposal expert for the Global Defense Force. When transhumanist cult mastermind Dr. Jadus Heskel unleashes a cybernetic army on Neo DC, Shelly decides it\u2019s time to start chucking bombs rather than diffusing them. \\n \\nHer journey will leave trails of blood and gore in huge, multi-path levels filled with those famous colorful keycards and plenty of secrets and Easter Eggs to discover behind every corner. There\u2019s also no regenerating health here, so stop taking cover and start running and gunning. Honestly, Ion Maiden should probably come out on three hundred floppy disks. \\n \\nShelly\u2019s quest to take down Dr. Heskel\u2019s army will see her use an arsenal of weapons, all with alternate fire modes or different ammo types. Her signature revolver, Loverboy, brings enemies pain and players pleasure with single shots, or Shelly can fan the hammer Old West style. Shotguns are fun, but tossing grenades down their barrels and firing explosive rounds is even better. Bowling Bombs are just as violent and over-the-top as one would hope. \\n \\nIon Maiden laughs at the idea of constant checkpoints and straight paths through shooting galleries. But just because this is a true old-school first-person shooter doesn\u2019t mean there won\u2019t be all the good new stuff the last two decades have brought. Headshots? Hell yeah. More physics and interactivity? You betcha. Widescreen, controller support, and Auto Saves? 3D Realms and Voidpoint took the best of both worlds and cooked it all into a bloody stew.\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 81.11534457391255,\n \"total_rating_count\": 22,\n \"url\": \"https://www.igdb.com/games/ion-fury\"\n },\n {\n \"id\": 112185,\n \"artworks\": [\n 9297\n ],\n \"genres\": [\n 5,\n 8,\n 12,\n 31,\n 32\n ],\n \"name\": \"Fury Unleashed\",\n \"platforms\": [\n 6,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 189312,\n 192469,\n 196089,\n 202756\n ],\n \"summary\": \"Fury Unleashed is an action platformer where you shoot your way through the pages of an ever-changing comic book. 
Take advantage of unique combo system to unleash your fury and become a whirling tornado of destruction. Discover why your creator doubted you and prove him wrong. \\n \\nYour guns are your best friends, and ink is your most valuable resource \u2013 collect it from defeated enemies to permanently upgrade your hero. The more dynamic and brave your play style, the greater the rewards! But remember to stay focused at all times, or you will die sooner than you think and lose everything you looted on your way.\",\n \"themes\": [\n 1,\n 17,\n 18\n ],\n \"total_rating\": 78.3333333333333,\n \"total_rating_count\": 6,\n \"url\": \"https://www.igdb.com/games/fury-unleashed\"\n },\n {\n \"id\": 28507,\n \"genres\": [\n 4\n ],\n \"name\": \"ACA NEOGEO FATAL FURY\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 175996,\n 175997,\n 176350\n ],\n \"summary\": \"FATAL FURY is a fighting game released by SNK in 1991. Players take part in brutal street fights in a variety of locations, with the goal of toppling the infamous crime lord Geese Howard.\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/aca-neogeo-fatal-fury\"\n },\n {\n \"id\": 11052,\n \"genres\": [\n 33\n ],\n \"name\": \"Kung Fury: Street Rage\",\n \"platforms\": [\n 6,\n 34,\n 39,\n 45,\n 48\n ],\n \"release_dates\": [\n 30499,\n 30500,\n 30501,\n 32240,\n 103883,\n 103884,\n 120654\n ],\n \"summary\": \"Get blown into another dimension as you experience the gutbusting fun with the Kung Fury game. Beat the Nazis to stop Kung F\u00fchrer and uphold the law! Become the action!\\n\\nBe. Kung. Fury!!!!\",\n \"themes\": [\n 1,\n 27\n ],\n \"total_rating\": 50.2683931085663,\n \"total_rating_count\": 25,\n \"url\": \"https://www.igdb.com/games/kung-fury-street-rage\"\n },\n {\n \"id\": 99484,\n \"genres\": [\n 4\n ],\n \"name\": \"ACA NEOGEO FATAL FURY 3\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 175948,\n 175949,\n 176364\n ],\n \"summary\": \"FATAL FURY 3 is a fighting game released by SNK in 1995. A new \u201cStory\u201d is about to begin in the city of South Town. Five new characters join for a total of 10 hungry wolves ready for battle.\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/aca-neogeo-fatal-fury-3\"\n },\n {\n \"id\": 99510,\n \"genres\": [\n 4\n ],\n \"name\": \"ACA NEOGEO FATAL FURY 2\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 175965,\n 175966,\n 176353\n ],\n \"summary\": \"FATAL FURY 2 is a fighting game released by SNK in 1992. 
Terry, Andy, Joe return from the previous installment along with five new fighters in order to decide who is the strongest one.\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/aca-neogeo-fatal-fury-2\"\n },\n {\n \"id\": 28138,\n \"genres\": [\n 4\n ],\n \"name\": \"Fatal Fury: Wild Ambition\",\n \"platforms\": [\n 7,\n 45,\n 135\n ],\n \"release_dates\": [\n 68186,\n 68187,\n 68188,\n 68189,\n 68190,\n 126844\n ],\n \"summary\": \"The only Hyper Neo Geo 64 game to be ported to another system.\",\n \"themes\": [\n 17\n ],\n \"total_rating\": 66.6708023478881,\n \"total_rating_count\": 6,\n \"url\": \"https://www.igdb.com/games/fatal-fury-wild-ambition\"\n },\n {\n \"id\": 99742,\n \"artworks\": [\n 8956\n ],\n \"genres\": [\n 4\n ],\n \"name\": \"ACA NEOGEO FATAL FURY SPECIAL\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 176321,\n 176322,\n 176323\n ],\n \"summary\": \"FATAL FURY SPECIAL is a powered-up version of \\u0027FATAL FURY 2\\u0027 which brings a faster game speed, introduces combo attacks for the first time in the Series, and welcomes more returning characters for a total of 15 fighters.\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/aca-neogeo-fatal-fury-special\"\n },\n {\n \"id\": 90199,\n \"genres\": [\n 4,\n 33\n ],\n \"name\": \"ACA NEOGEO REAL BOUT FATAL FURY\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 176309,\n 176310,\n 176365\n ],\n \"summary\": \"REAL BOUT FATAL FURY is a fighting game released by SNK in 1995. Using the previous system as a base, isolating the Sweep Button, the inclusion of Combination Attacks, and other elements such as ring outs make for an even speedier and tempo-based play style. Sixteen fighters battle it out to see who is the strongest.\",\n \"themes\": [\n 1\n ],\n \"url\": \"https://www.igdb.com/games/aca-neogeo-real-bout-fatal-fury\"\n }\n]",
"igdb_release_date": 1467676800000,
"summary": "Fight your way free in our frenzied all-boss fighter, and discover what\u2019s waiting behind the last gate. Furi is all about the tension of one-on-one fights against deadly adversaries. It\u2019s an intense, ultra-responsive hack-and-slash with a unique mix of fast-paced sword fighting and dual-stick shooting. Each of the formidable guardians \u2014designed by Afro Samurai creator Takashi Okazaki\u2014 has a unique and surprising combat style that requires focus and skill to defeat. The high-energy action gets a boost from an explosive soundtrack composed by electro musicians including Carpenter Brut, who created the trailer\u2019s theme.",
"web-scraper-order": "1210"
},
{
"game_buy_date": " 22/1/2017",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Strategy Adventure",
"game_igdb_id": 5328,
"game_igdb_name": "Metal Gear Solid V: Ground Zeroes",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 Xbox 360 PlayStation 4 Xbox One",
"game_rating": 77.4949480898,
"game_rating_count": 336,
"game_size": " 3.9 GB ",
"game_themes": "Action Science fiction Stealth Open world",
"game_title": "METAL GEAR SOLID V: GROUND ZEROES",
"games_found": 1,
"hltb_main": 2.0,
"igdb_json": "[\n {\n \"id\": 5328,\n \"genres\": [\n 5,\n 15,\n 31\n ],\n \"name\": \"Metal Gear Solid V: Ground Zeroes\",\n \"platforms\": [\n 6,\n 9,\n 12,\n 48,\n 49\n ],\n \"release_dates\": [\n 12601,\n 12602,\n 12603,\n 12604,\n 108034,\n 133873,\n 133874,\n 223274\n ],\n \"summary\": \"World-renowned Kojima Productions showcases another masterpiece in the Metal Gear Solid franchise with Metal Gear Solid V: Ground Zeroes. Metal Gear Solid V: Ground Zeroes is the first segment of the \u2018Metal Gear Solid V Experience\u2019 and prologue to the larger second segment, Metal Gear Solid V: The Phantom Pain launching thereafter. \\n \\n \\nMGSV: GZ gives core fans the opportunity to get a taste of the world-class production\u2019s unparalleled visual presentation and gameplay before the release of the main game. It also provides an opportunity for gamers who have never played a Kojima Productions game, and veterans alike, to gain familiarity with the radical new game design and unparalleled style of presentation. \\n \\n \\nThe critically acclaimed Metal Gear Solid franchise has entertained fans for decades and revolutionized the gaming industry. Kojima Productions once again raises the bar with the FOX Engine offering incredible graphic fidelity and the introduction of open world game design in the Metal Gear Solid universe. This is the experience that core gamers have been waiting for.\",\n \"themes\": [\n 1,\n 18,\n 23,\n 38\n ],\n \"total_rating\": 77.49494808981589,\n \"total_rating_count\": 336,\n \"url\": \"https://www.igdb.com/games/metal-gear-solid-v-ground-zeroes\"\n }\n]",
"igdb_release_date": 1395360000000,
"summary": "World-renowned Kojima Productions showcases another masterpiece in the Metal Gear Solid franchise with Metal Gear Solid V: Ground Zeroes. Metal Gear Solid V: Ground Zeroes is the first segment of the \u2018Metal Gear Solid V Experience\u2019 and prologue to the larger second segment, Metal Gear Solid V: The Phantom Pain launching thereafter. \n \n \nMGSV: GZ gives core fans the opportunity to get a taste of the world-class production\u2019s unparalleled visual presentation and gameplay before the release of the main game. It also provides an opportunity for gamers who have never played a Kojima Productions game, and veterans alike, to gain familiarity with the radical new game design and unparalleled style of presentation. \n \n \nThe critically acclaimed Metal Gear Solid franchise has entertained fans for decades and revolutionized the gaming industry. Kojima Productions once again raises the bar with the FOX Engine offering incredible graphic fidelity and the introduction of open world game design in the Metal Gear Solid universe. This is the experience that core gamers have been waiting for.",
"web-scraper-order": "1213"
},
{
"game_buy_date": " 27/11/2015",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle Indie",
"game_igdb_id": 7641,
"game_igdb_name": "Teslagrad",
"game_platforms": "Linux PC (Microsoft Windows) PlayStation 3 Mac Wii U PlayStation Network PlayStation Vita PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 76.6677350731,
"game_rating_count": 41,
"game_size": " 843.91 MB ",
"game_themes": "Action Science fiction",
"game_title": "Teslagrad",
"games_found": 1,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 7641,\n \"genres\": [\n 8,\n 9,\n 32\n ],\n \"name\": \"Teslagrad\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 41,\n 45,\n 46,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 19963,\n 20726,\n 20727,\n 20728,\n 49710,\n 49711,\n 49712,\n 49713,\n 106084,\n 122361,\n 127742,\n 128854,\n 134205,\n 136348,\n 144065,\n 146574,\n 146575\n ],\n \"summary\": \"Teslagrad is a 2D puzzle platformer with action elements, where magnetism and other electromagnetic powers are the key to go throughout the game, and thereby discover the secrets kept in the long abandoned Tesla Tower. Gain new abilities to explore a non-linear world with more than 100 beautiful hand-drawn environments, in a steampunk-inspired vision of old Europe. Jump into an outstanding adventure told through voiceless storytelling, writing your own part. Armed with ancient Teslamancer technology and your own ingenuity and creativity, your path lies through the decrepit Tesla Tower and beyond.\",\n \"themes\": [\n 1,\n 18\n ],\n \"total_rating\": 76.66773507312445,\n \"total_rating_count\": 41,\n \"url\": \"https://www.igdb.com/games/teslagrad\"\n }\n]",
"igdb_release_date": 1417564800000,
"summary": "Teslagrad is a 2D puzzle platformer with action elements, where magnetism and other electromagnetic powers are the key to go throughout the game, and thereby discover the secrets kept in the long abandoned Tesla Tower. Gain new abilities to explore a non-linear world with more than 100 beautiful hand-drawn environments, in a steampunk-inspired vision of old Europe. Jump into an outstanding adventure told through voiceless storytelling, writing your own part. Armed with ancient Teslamancer technology and your own ingenuity and creativity, your path lies through the decrepit Tesla Tower and beyond.",
"web-scraper-order": "1244"
},
{
"game_buy_date": " 12/3/2015",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Shooter Indie",
"game_igdb_id": 1384,
"game_igdb_name": "Hotline Miami",
"game_platforms": "Linux PC (Microsoft Windows) PlayStation 3 Mac Android PlayStation Network PlayStation Vita PlayStation 4",
"game_rating": 85.2013970117,
"game_rating_count": 788,
"game_size": " 828.7 MB ",
"game_themes": "Action",
"game_title": "Hotline Miami",
"games_found": 2,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 1384,\n \"artworks\": [\n 8863\n ],\n \"genres\": [\n 5,\n 32\n ],\n \"name\": \"Hotline Miami\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 34,\n 45,\n 46,\n 48\n ],\n \"release_dates\": [\n 39930,\n 39931,\n 39932,\n 39933,\n 39934,\n 39935,\n 39936,\n 39937,\n 39938,\n 39939,\n 135461,\n 135462\n ],\n \"summary\": \"A top-down slasher/shooter with unlockable gameplay-altering masks and weapons, featuring a neon-flavoured electronic aesthetic, in which a hitman receives anonymous calls ordering him to travel to certain residences and crime dens and massacre those within, as he stumbles through unreal visions and inconsistencies without any answers to how, why or who.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 85.20139701174546,\n \"total_rating_count\": 788,\n \"url\": \"https://www.igdb.com/games/hotline-miami\"\n },\n {\n \"id\": 2126,\n \"artworks\": [\n 8994\n ],\n \"genres\": [\n 5,\n 32\n ],\n \"name\": \"Hotline Miami 2: Wrong Number\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 45,\n 46,\n 48\n ],\n \"release_dates\": [\n 27592,\n 27593,\n 27594,\n 29067,\n 29068,\n 29069,\n 103581,\n 103582,\n 122484\n ],\n \"summary\": \"A sequel, sidequel and prequel to Hotline Miami (2012) with similar unlockables, violent top-down gameplay and \\u002780s Miami/modern electronic aesthetics, Hotline Miami 2 follows multiple factions related to the events of the original game as they commit increasingly bloody and surreal acts. A greater emphasis is put on storytelling, and the boundaries between real and fictional violence.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 75.32491666820175,\n \"total_rating_count\": 252,\n \"url\": \"https://www.igdb.com/games/hotline-miami-2-wrong-number\"\n }\n]",
"igdb_release_date": 1372204800000,
"summary": "A top-down slasher/shooter with unlockable gameplay-altering masks and weapons, featuring a neon-flavoured electronic aesthetic, in which a hitman receives anonymous calls ordering him to travel to certain residences and crime dens and massacre those within, as he stumbles through unreal visions and inconsistencies without any answers to how, why or who.",
"web-scraper-order": "1248"
},
{
"game_buy_date": " 6/3/2020",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle",
"game_igdb_id": 19241,
"game_igdb_name": "Unravel Two",
"game_platforms": "PC (Microsoft Windows) PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 80.6010214846,
"game_rating_count": 67,
"game_size": " 3.05 GB ",
"game_themes": "Kids",
"game_title": "Unravel",
"games_found": 3,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 19241,\n \"artworks\": [\n 5074\n ],\n \"genres\": [\n 8,\n 9\n ],\n \"name\": \"Unravel Two\",\n \"platforms\": [\n 6,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 153680,\n 153681,\n 153682,\n 165084\n ],\n \"summary\": \"Unravel two is the sequel to the 2015 puzzle platforming game Unravel. It was announced during E3 2018, that the game was actually already finished and available instantly! In the game there are two Yarny\\u0027s (made out of yarn) which can be controlled by one player, though the game can also be played in co-op. Together the Yarny\\u0027s explore area\\u0027s and solve the puzzles within them.\",\n \"themes\": [\n 35\n ],\n \"total_rating\": 80.60102148462995,\n \"total_rating_count\": 67,\n \"url\": \"https://www.igdb.com/games/unravel-two\"\n },\n {\n \"id\": 11170,\n \"artworks\": [\n 6313\n ],\n \"genres\": [\n 8,\n 9,\n 31\n ],\n \"name\": \"Unravel\",\n \"platforms\": [\n 6,\n 48,\n 49\n ],\n \"release_dates\": [\n 42383,\n 42384,\n 106687,\n 106688,\n 134324,\n 136426\n ],\n \"summary\": \"Unravel is a game about Yarny, a tiny character born from a single thread. Yarny embarks on a big adventure into the nature, inspired by the beauty of Northern Scandinavia. Without any spoken words, the character will have to solve puzzles and use tools to overcome tough challenges. All this, in order to find memories of his lost family. \\nOnly Yarny can be the bond that ties everything together in the end.\",\n \"total_rating\": 81.31358716746085,\n \"total_rating_count\": 151,\n \"url\": \"https://www.igdb.com/games/unravel\"\n },\n {\n \"id\": 115025,\n \"genres\": [\n 8,\n 9,\n 31\n ],\n \"name\": \"Unravel: Yarny Bundle\",\n \"platforms\": [\n 48\n ],\n \"release_dates\": [\n 164669,\n 164670,\n 164671\n ],\n \"summary\": \"Discover the delightful adventures of Unravel and Unravel Two, with the Unravel Yarny Bundle.\",\n \"url\": \"https://www.igdb.com/games/unravel-yarny-bundle\"\n }\n]",
"igdb_release_date": 1528502400000,
"summary": "Unravel two is the sequel to the 2015 puzzle platforming game Unravel. It was announced during E3 2018, that the game was actually already finished and available instantly! In the game there are two Yarny's (made out of yarn) which can be controlled by one player, though the game can also be played in co-op. Together the Yarny's explore area's and solve the puzzles within them.",
"web-scraper-order": "863"
},
{
"game_buy_date": " 5/6/2019",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Adventure",
"game_igdb_id": 21062,
"game_igdb_name": "Sonic Mania",
"game_platforms": "PC (Microsoft Windows) PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 84.6740957806,
"game_rating_count": 194,
"game_size": " 198.57 MB ",
"game_themes": "Action",
"game_title": "Sonic Mania",
"games_found": 2,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 21062,\n \"artworks\": [\n 1406\n ],\n \"genres\": [\n 8,\n 31\n ],\n \"name\": \"Sonic Mania\",\n \"platforms\": [\n 6,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 90787,\n 90788,\n 90791,\n 90792,\n 90793,\n 108909,\n 134114,\n 134115,\n 136302,\n 143324\n ],\n \"summary\": \"It\\u0027s the ultimate Sonic celebration! Sonic returns in a new 2D platforming high speed adventure, and he\\u0027s not alone! \\n \\nDeveloped in collaboration between SEGA, Christian Whitehead, Headcannon, and PagodaWest Games, experience new zones and remixed classic levels with Sonic, Tails, and Knuckles!\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 84.6740957805855,\n \"total_rating_count\": 194,\n \"url\": \"https://www.igdb.com/games/sonic-mania\"\n },\n {\n \"id\": 94873,\n \"genres\": [\n 8\n ],\n \"name\": \"Sonic Mania Plus\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 155713,\n 155714,\n 155715\n ],\n \"summary\": \"Sonic Mania Plus is an expanded version of Sonic Mania. \\n \\nIt\\u0027s the ultimate Sonic celebration! Sonic returns in a new 2D platforming high speed adventure, and he\\u0027s not alone! \\n \\nDeveloped in collaboration between SEGA, Christian Whitehead, Headcannon, and PagodaWest Games, experience new zones and remixed classic levels with Sonic, Tails, and Knuckles!\",\n \"total_rating\": 91.0679826849325,\n \"total_rating_count\": 30,\n \"url\": \"https://www.igdb.com/games/sonic-mania-plus\"\n }\n]",
"igdb_release_date": 1502755200000,
"summary": "It's the ultimate Sonic celebration! Sonic returns in a new 2D platforming high speed adventure, and he's not alone! \n \nDeveloped in collaboration between SEGA, Christian Whitehead, Headcannon, and PagodaWest Games, experience new zones and remixed classic levels with Sonic, Tails, and Knuckles!",
"web-scraper-order": "896"
},
{
"game_buy_date": " 8/5/2019",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Puzzle Adventure Indie",
"game_igdb_id": 11233,
"game_igdb_name": "What Remains of Edith Finch",
"game_platforms": "PC (Microsoft Windows) PlayStation Network PlayStation 4 Xbox One Nintendo Switch",
"game_rating": 85.9428890749,
"game_rating_count": 444,
"game_size": " 2.78 GB ",
"game_themes": "Drama",
"game_title": "What Remains of Edith Finch",
"games_found": 1,
"hltb_main": 2.0,
"igdb_json": "[\n {\n \"id\": 11233,\n \"genres\": [\n 9,\n 31,\n 32\n ],\n \"name\": \"What Remains of Edith Finch\",\n \"platforms\": [\n 6,\n 45,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 68356,\n 68357,\n 93183,\n 106850,\n 106851,\n 171957\n ],\n \"summary\": \"What Remains of Edith Finch is a collection of short stories about a cursed family in Washington State. \\n \\nEach story offers a chance to experience the life of a different family member with stories ranging from the early 1900s to the present day. The gameplay and tone of the stories are as varied as the family members themselves. The only constants are that each is played from a first-person perspective and that each story ends with that family member\\u0027s death. It\\u0027s a game about what it feels like to be humbled and astonished by the vast and unknowable world around us. \\n \\nYou\\u0027ll follow Edith Finch as she explores the history of her family and tries to figure out why she\\u0027s the last Finch left alive.\",\n \"themes\": [\n 31\n ],\n \"total_rating\": 85.94288907494915,\n \"total_rating_count\": 444,\n \"url\": \"https://www.igdb.com/games/what-remains-of-edith-finch\"\n }\n]",
"igdb_release_date": 1493078400000,
"summary": "What Remains of Edith Finch is a collection of short stories about a cursed family in Washington State. \n \nEach story offers a chance to experience the life of a different family member with stories ranging from the early 1900s to the present day. The gameplay and tone of the stories are as varied as the family members themselves. The only constants are that each is played from a first-person perspective and that each story ends with that family member's death. It's a game about what it feels like to be humbled and astonished by the vast and unknowable world around us. \n \nYou'll follow Edith Finch as she explores the history of her family and tries to figure out why she's the last Finch left alive.",
"web-scraper-order": "901"
},
{
"game_buy_date": " 5/9/2018",
"game_category": "Spiel ",
"game_flag": null,
"game_genres": "Platform Puzzle Adventure",
"game_igdb_id": 4348,
"game_igdb_name": "Another World",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 PC DOS Amiga Super Nintendo Entertainment System (SNES) Sega Mega Drive/Genesis Android Nintendo 3DS iOS Wii U PlayStation Network PlayStation Vita PlayStation 4 Xbox One 3DO Interactive Multiplayer Mobile Super Famicom Atari ST/STE Windows Phone Apple IIGS",
"game_rating": 87.05956629,
"game_rating_count": 114,
"game_size": " 170.66 MB ",
"game_themes": "Action Science fiction Survival",
"game_title": "Another World",
"games_found": 6,
"hltb_main": 2.0,
"igdb_json": "[\n {\n \"id\": 4348,\n \"artworks\": [\n 5763\n ],\n \"genres\": [\n 8,\n 9,\n 31\n ],\n \"name\": \"Another World\",\n \"platforms\": [\n 6,\n 9,\n 13,\n 16,\n 19,\n 29,\n 34,\n 37,\n 39,\n 41,\n 45,\n 46,\n 48,\n 49,\n 50,\n 55,\n 58,\n 63,\n 74,\n 115\n ],\n \"release_dates\": [\n 10074,\n 25757,\n 25760,\n 25761,\n 25762,\n 25763,\n 25764,\n 25765,\n 25766,\n 25767,\n 25768,\n 25769,\n 25770,\n 25771,\n 25772,\n 34982,\n 39843,\n 101595,\n 101596,\n 218931,\n 218932,\n 218933\n ],\n \"summary\": \"Another World chronicles the story of a man hurtled through space and time by a nuclear experiment gone wrong. You assume the role of Lester Knight Chaykin, a young physicist. You\u2019ll need to dodge, outwit, and overcome a host of alien monsters and deadly earthquakes that plague the alien landscape you now call home. Only a perfect blend of logic and skill will get you past the deadly obstacles that lie in waiting.\",\n \"themes\": [\n 1,\n 18,\n 21\n ],\n \"total_rating\": 87.0595662900337,\n \"total_rating_count\": 114,\n \"url\": \"https://www.igdb.com/games/another-world\"\n },\n {\n \"id\": 16493,\n \"genres\": [\n 31\n ],\n \"name\": \"Another World: 20th Anniversary Edition\",\n \"platforms\": [\n 3,\n 6,\n 9,\n 14,\n 37,\n 41,\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 44468,\n 44469,\n 44470,\n 135842,\n 137418,\n 137419,\n 146082,\n 146083,\n 154505,\n 154506,\n 159057,\n 218678\n ],\n \"summary\": \"Also known as Out Of This World, Another World is a pioneer action/platformer that released across more than a dozen platforms since its debut in 1991. Along the years, Another World has attained cult status among critics and sophisticated gamers alike.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 68.95924710476291,\n \"total_rating_count\": 12,\n \"url\": \"https://www.igdb.com/games/another-world-20th-anniversary-edition\"\n },\n {\n \"id\": 63656,\n \"genres\": [\n 12,\n 31\n ],\n \"name\": \"Oz no Mahoutsukai - Another World - RungRung\",\n \"platforms\": [\n 7,\n 45\n ],\n \"release_dates\": [\n 211204,\n 211205,\n 211206\n ],\n \"summary\": \"A heartwarming adventure RPG based on \\\"The Wonderful Wizard of Oz\\\". In the game, Dorothy Gale and Toto must collect magical items in order to get back to Kansas.\",\n \"themes\": [\n 17\n ],\n \"url\": \"https://www.igdb.com/games/oz-no-mahoutsukai-another-world-rungrung\"\n },\n {\n \"id\": 26668,\n \"genres\": [\n 31\n ],\n \"name\": \"Re:Zero -Starting Life in Another World- Death or Kiss\",\n \"platforms\": [\n 46,\n 48\n ],\n \"release_dates\": [\n 101254,\n 101255\n ],\n \"summary\": \"Based on a light novel by Nagatsuki Tappei. It follows an original story that differs from the original light novel series and the anime. \\n \\nIn its original story, the Ruler Election has spun off a beauty contest, giving the winner a treasure that promises the recipient great luck and a powerful advantage to achieving the throne. \\nExcept, the prize isn\\u0027t what it seems, and, taking on its \\\"Death Curse\\\", Subaru must use his power to return from the dead to solve the mystery, build the trusts with the girls, receive the \\\"kiss of a beautiful girl\\\" and break the curse to escape death. 
\\n \\nCandidates for a relationship with Subaru are Emilia, Rem, Ram, Felt, Beatrice, Crusch, Priscilla and Anastasia.\",\n \"themes\": [\n 17\n ],\n \"url\": \"https://www.igdb.com/games/re-zero-starting-life-in-another-world-death-or-kiss\"\n },\n {\n \"id\": 134556,\n \"genres\": [\n 24,\n 31\n ],\n \"name\": \"Re:ZERO -Starting Life in Another World- The Prophecy of the Throne\",\n \"platforms\": [\n 6,\n 48,\n 130\n ],\n \"release_dates\": [\n 221467,\n 221468,\n 221469,\n 221470,\n 221471,\n 221472,\n 221473\n ],\n \"summary\": \"The popular light novel/anime series, Re:ZERO - Starting Life in Another World - is coming as a Tactical Adventure game!\",\n \"themes\": [\n 17\n ],\n \"url\": \"https://www.igdb.com/games/re-zero-starting-life-in-another-world-the-prophecy-of-the-throne\"\n },\n {\n \"id\": 130804,\n \"name\": \"Another World/Flashback\",\n \"platforms\": [\n 48,\n 49,\n 130\n ],\n \"release_dates\": [\n 222284,\n 222285,\n 222286\n ],\n \"summary\": \"Rediscover the 90\u2019s cornerstones of adventure videogames, gathered together for the first time! \\nAlso known as Out Of This World\u2122, Another World is a pioneer action/platformer that released across more than a dozen platforms since its debut in 1991. Along the years, Another World\u2122 has attained cult status among critics and sophisticated gamers alike. \\n \\nAnother World\u2122 chronicles the story of Lester Knight Chaykin a young scientist hurtled through space and time by a nuclear experiment that goes wrong. In an alien and inhospitable world, you will have to dodge, outwit, and overcome the host of alien monsters, while surviving an environment as deadly as your enemies. Only a perfect blend of logic and skill will get you past the deadly obstacles that lie in wait. \\n \\nKey Features: \\n \\nRemastered presentation: High Definition graphics faithful to the original design. \\n \\n3 difficulty modes: Normal (easier than original game), Difficult (Equal to original game) and Hardcore (more difficult than original game) \\n \\nA new immersive experience: rediscover a cult adventure with 100% remastered sounds and FX \\n \\nFlashback: 2142. After fleeing from a spaceship but stripped of all memory, the young scientist Conrad B. Hart awakens on Titan, a colonized moon of the planet Saturn. His enemies and kidnappers are snapping at his heels. He must find a way back to Earth, defending himself against the dangers he encounters and unravelling an insidious extra-terrestrial plot that threatens the planet\u2026 \\n \\nRediscover this classic, consistently ranked among the best 100 games of all time! It was one of the first games to use motion capture technology for more realistic animations, with backgrounds that were entirely hand-drawn and a gripping science-fiction storyline. \\n \\nChoose to play with the original graphics and sounds from the 90\u2019s and face an unforgiving difficulty. \\n \\nOr go with the Modern mode. You can also tune your experience by turning Modern mode options independently and on the fly. \\n \\nModern mode: \\n- Post-FX graphic filters \\n- Completely remastered sound and music \\n- A brand new \\\"Rewind\\\" function, variable according to the level of difficulty \\n- Tutorials for those who need a boost!\",\n \"url\": \"https://www.igdb.com/games/another-world-slash-flashback\"\n }\n]",
"igdb_release_date": 1403654400000,
"summary": "Another World chronicles the story of a man hurtled through space and time by a nuclear experiment gone wrong. You assume the role of Lester Knight Chaykin, a young physicist. You\u2019ll need to dodge, outwit, and overcome a host of alien monsters and deadly earthquakes that plague the alien landscape you now call home. Only a perfect blend of logic and skill will get you past the deadly obstacles that lie in waiting.",
"web-scraper-order": "982"
},
{
"game_buy_date": null,
"game_category": null,
"game_flag": null,
"game_genres": "Fighting",
"game_igdb_id": 23354,
"game_igdb_name": "Injustice: Gods Among Us - Ultimate Edition",
"game_platforms": "PC (Microsoft Windows) PlayStation 3 PlayStation Vita PlayStation 4",
"game_rating": 75.9948753817,
"game_rating_count": 58,
"game_size": null,
"game_themes": "Action",
"game_title": "Injustice Ultimate Edition",
"games_found": 1,
"hltb_main": 5.0,
"igdb_json": "[\n {\n \"id\": 23354,\n \"genres\": [\n 4\n ],\n \"name\": \"Injustice: Gods Among Us - Ultimate Edition\",\n \"platforms\": [\n 6,\n 9,\n 46,\n 48\n ],\n \"release_dates\": [\n 61458,\n 103677,\n 103678,\n 133777,\n 133778,\n 134998,\n 134999,\n 179452,\n 179458\n ],\n \"summary\": \"Injustice: Gods Among Us Ultimate Edition enhances the bold new franchise to the fighting game genre from NetherRealm Studios. Featuring six new playable characters, over 30 new skins, and 60 new S.T.A.R. Labs missions, this edition packs a punch. In addition to DC Comics icons such as Batman, The Joker, Green Lantern, The Flash, Superman and Wonder Woman, the latest title from the award-winning studio presents a deep original story. Heroes and villains will engage in epic battles on a massive scale in a world where the line between good and evil has been blurred.\",\n \"themes\": [\n 1\n ],\n \"total_rating\": 75.99487538172156,\n \"total_rating_count\": 58,\n \"url\": \"https://www.igdb.com/games/injustice-gods-among-us-ultimate-edition\"\n }\n]",
"igdb_release_date": 1385683200000,
"summary": "Injustice: Gods Among Us Ultimate Edition enhances the bold new franchise to the fighting game genre from NetherRealm Studios. Featuring six new playable characters, over 30 new skins, and 60 new S.T.A.R. Labs missions, this edition packs a punch. In addition to DC Comics icons such as Batman, The Joker, Green Lantern, The Flash, Superman and Wonder Woman, the latest title from the award-winning studio presents a deep original story. Heroes and villains will engage in epic battles on a massive scale in a world where the line between good and evil has been blurred.",
"web-scraper-order": null
}
]
None
</code>
Summarising the result gives a better overview. We are working with lists of dictionaries, the return values of the search functions in TinyDB._____no_output_____
<code>
result_sorted = sorted(query_result, key=lambda x: x["game_rating"], reverse=True)
for item in result_sorted:
print(item["game_title"], " / ", item["game_igdb_name"], ": ", round(item["game_rating"], 2), " / ", item["game_rating_count"])INSIDE / INSIDE : 89.76 / 866
Another World / Another World : 87.06 / 114
What Remains of Edith Finch / What Remains of Edith Finch : 85.94 / 444
Super Time Force Ultra / Super Time Force Ultra : 85.25 / 23
Hotline Miami / Hotline Miami : 85.2 / 788
Sonic Mania / Sonic Mania : 84.67 / 194
The Swapper / The Swapper : 83.51 / 133
LIMBO / Limbo : 82.83 / 879
Day of the Tentacle Remastered / Day of the Tentacle Remastered : 81.1 / 75
Unravel / Unravel Two : 80.6 / 67
Sound Shapes / Sound Shapes : 80.51 / 40
Firewatch / Firewatch : 79.66 / 618
Gone Home / Gone Home : 79.61 / 362
Furi / Furi : 79.04 / 142
Lovers in a Dangerous Spacetime / Lovers in a Dangerous Spacetime : 78.83 / 48
OlliOlli2: Welcome to Olliwood / OlliOlli2: Welcome to Olliwood : 78.39 / 33
The Unfinished Swan / The Unfinished Swan : 78.08 / 62
METAL GEAR SOLID V: GROUND ZEROES / Metal Gear Solid V: Ground Zeroes : 77.49 / 336
Teslagrad / Teslagrad : 76.67 / 41
LittleBigPlanet 3 / LittleBigPlanet 3 : 76.63 / 76
Injustice Ultimate Edition / Injustice: Gods Among Us - Ultimate Edition : 75.99 / 58
The Deadly Tower of Monsters / The Deadly Tower of Monsters : 75.82 / 39
Everybody's Gone To The Rapture™ / Everybody's Gone to the Rapture : 75.75 / 133
</code>
## Alternative Databases
I briefly tried playing around with SQLAlchemy and an SQLite database. As the base structures returned by IGDB are all JSON, substantial cleaning up would be necessary to create a proper SQL database. I stuck with TinyDB as the database in the backend.
Another option would be a plain JSON object queried with e.g. JMESPath; a minimal sketch follows below. Like TinyDB this would not be production ready, but the querying capabilities would potentially be more powerful. It would also involve porting the code to a real database eventually, as JMESPath to my knowledge is not directly supported by any document database.
_____no_output_____
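For illustration, here is a minimal sketch of how a JMESPath query could look. It assumes the `jmespath` package and uses made-up records shaped like the query results above; it is not code from this notebook.
<code>
import jmespath  # pip install jmespath

# illustrative records shaped like the query results above
games = [
    {"game_title": "INSIDE", "game_rating": 89.76},
    {"game_title": "Another World", "game_rating": 87.06},
]

# titles of all games rated above 88 (backticks mark JSON literals in JMESPath)
expression = jmespath.compile("[?game_rating > `88`].game_title")
print(expression.search(games))  # ['INSIDE']
</code>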
<code>
import sqlalchemy as db  # the 'db' alias used below was never imported

engine = db.create_engine('sqlite://', echo=False)_____no_output_____df_tmp.to_sql('games', con=engine)
engine.execute("SELECT * FROM games").fetchall()
_____no_output_____metadata = db.MetaData()
games_table = db.Table('games', metadata, autoload=True, autoload_with=engine)
print(metadata.tables)
print(games_table.columns)
print(games_table.metadata)
immutabledict({'games': Table('games', MetaData(bind=None), Column('index', BIGINT(), table=<games>), Column('web-scraper-order', TEXT(), table=<games>), Column('game_title', TEXT(), table=<games>), Column('game_platforms', TEXT(), table=<games>), Column('game_flag', FLOAT(), table=<games>), Column('game_category', TEXT(), table=<games>), Column('game_size', TEXT(), table=<games>), Column('game_buy_date', TEXT(), table=<games>), Column('games_found', BIGINT(), table=<games>), Column('igdb_json', TEXT(), table=<games>), Column('game_rating', FLOAT(), table=<games>), Column('game_rating_count', BIGINT(), table=<games>), Column('game_genres', TEXT(), table=<games>), Column('game_themes', TEXT(), table=<games>), Column('game_igdb_id', BIGINT(), table=<games>), Column('game_igdb_name', TEXT(), table=<games>), Column('summary', TEXT(), table=<games>), Column('igdb_cover_url', TEXT(), table=<games>), Column('igdb_release_date', DATETIME(), table=<games>), Column('hltb_main', FLOAT(), table=<games>), schema=None)})
['games.index', 'games.web-scraper-order', 'games.game_title', 'games.game_platforms', 'games.game_flag', 'games.game_category', 'games.game_size', 'games.game_buy_date', 'games.games_found', 'games.igdb_json', 'games.game_rating', 'games.game_rating_count', 'games.game_genres', 'games.game_themes', 'games.game_igdb_id', 'games.game_igdb_name', 'games.summary', 'games.igdb_cover_url', 'games.igdb_release_date', 'games.hltb_main']
MetaData(bind=None)
</code>
| {
"repository": "Nop287/game_db",
"path": "TinyDB.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 472471,
"hexsha": "cb05fc6272aedbe809a0ec486eaed65e83bfb1cc",
"max_line_length": 9839,
"avg_line_length": 269.5213918996,
"alphanum_fraction": 0.5985912363
} |
# Notebook from matteoferla/pyrosetta_scripts
Path: colabs/colabs-pyrosetta.ipynb
<code>
#@title blank template
#@markdown This notebook from [github.com/matteoferla/pyrosetta_help](https://github.com/matteoferla/pyrosetta_help).
#@markdown It can be opened in Colabs via [https://colab.research.google.com/github/matteoferla/pyrosetta_help/blob/main/colabs/colabs-pyrosetta.ipynb](https://colab.research.google.com/github/matteoferla/pyrosetta_help/blob/main/colabs/colabs-pyrosetta.ipynb)
#@markdown It is just for loading up PyRosetta_____no_output_____#@title Installation
#@markdown Installing PyRosetta with optional backup to your drive (way quicker next time!).
#@markdown Note that PyRosetta occupies some 10 GB, so you'll need to be on the 100 GB plan of Google Drive (it's one pound a month).
#@markdown The following is not the real password. However, the format is similar.
username = 'boltzmann' #@param {type:"string"}
password = 'constant' #@param {type:"string"}
#@markdown Release to install:
_release = 'release-295' #@param {type:"string"}
#@markdown Use Google Drive for PyRosetta (way faster next time, but takes up space)
#@markdown (NB. You may be prompted to follow a link and possibly authenticate and then copy a code into a box.)
use_drive = True #@param {type:"boolean"}
#@markdown Installing `rdkit` and `rdkit_to_params` allows the creation of custom topologies (params) for new ligands
install_rdkit = True #@param {type:"boolean"}
import sys
import platform
import os
# platform.dist() was removed in Python 3.8; check the OS release file instead
assert 'Ubuntu' in open('/etc/os-release').read()
py_version = str(sys.version_info.major) + str(sys.version_info.minor)
if use_drive:
from google.colab import drive
drive.mount('/content/drive')
_path = '/content/drive/MyDrive'
os.chdir(_path)
else:
_path = '/content'
if not any(['PyRosetta4.Release' in filename for filename in os.listdir()]):
assert not os.system(f'curl -u {username}:{password} https://graylab.jhu.edu/download/PyRosetta4/archive/release/PyRosetta4.Release.python{py_version}.ubuntu/PyRosetta4.Release.python{py_version}.ubuntu.{_release}.tar.bz2 -o /content/a.tar.bz2')
assert not os.system('tar -xf /content/a.tar.bz2')
assert not os.system(f'pip3 install -e {_path}/PyRosetta4.Release.python{py_version}.ubuntu.{_release}/setup/')
assert not os.system(f'pip3 install pyrosetta-help biopython')
if install_rdkit:
assert not os.system(f'pip3 install rdkit-pypi rdkit-to-params')
import site
site.main()_____no_output_____#@title Start PyRosetta
import pyrosetta
import pyrosetta_help as ph
no_optH = False #@param {type:"boolean"}
ignore_unrecognized_res=False #@param {type:"boolean"}
load_PDB_components=False #@param {type:"boolean"}
ignore_waters=False #@param {type:"boolean"}
extra_options= ph.make_option_string(no_optH=no_optH,
ex1=None,
ex2=None,
mute='all',
ignore_unrecognized_res=ignore_unrecognized_res,
load_PDB_components=load_PDB_components,
ignore_waters=ignore_waters)
# capture to log
logger = ph.configure_logger()
pyrosetta.init(extra_options=extra_options)_____no_output_____## Usual stuff
pose = ph.pose_from_alphafold2('P02144')
scorefxn = pyrosetta.get_fa_scorefxn()
relax = pyrosetta.rosetta.protocols.relax.FastRelax(scorefxn, 3)
movemap = pyrosetta.MoveMap()
movemap.set_bb(False)
movemap.set_chi(True)
relax.set_movemap(movemap)  # attach the movemap, otherwise the settings above are never used
relax.apply(pose)_____no_output_____# Note that nglview does not work with Colabs but py3Dmol does.
# install py3Dmol
os.system(f'pip3 install py3Dmol')
import site
site.main()
# run
import py3Dmol
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js',)
view.addModel(ph.get_pdbstr(pose),'pdb')
view.zoomTo()
view_____no_output_____# Also note that RDKit Chem.Mol instances are not displayed as representations by default._____no_output_____#@title Upload to Michelanglo (optional)
#@markdown [Michelanglo](https://michelanglo.sgc.ox.ac.uk/) is a website that
#@markdown allows the creation, annotation and sharing of a webpage with an interactive protein viewport.
#@markdown ([examples](https://michelanglo.sgc.ox.ac.uk/gallery)).
#@markdown The created pages are private: they have a 1 in a quintillion chance of being guessed within 5 tries.
#@markdown Registered users (optional) can add interactive annotations to pages.
#@markdown A page created by a guest is editable by registered users with the URL to it
#@markdown (this can be altered in the page settings).
#@markdown Leave blank for guest (it will not add an interactive description):
username = '' #@param {type:"string"}
password = '' #@param {type:"string"}
import os
assert not os.system(f'pip3 install michelanglo-api')
import site
site.main()
from michelanglo_api import MikeAPI, Prolink
if not username:
mike = MikeAPI.guest_login()
else:
mike = MikeAPI(username, password)
page = mike.convert_pdb(pdbblock=ph.get_pdbstr(pose),
data_selection='ligand',
data_focus='residue',
)
if username:
page.retrieve()
page.description = '## Description\n\n'
page.description += 'autogen bla bla'
page.commit()
page.show_link()_____no_output_____
</code>
| {
"repository": "matteoferla/pyrosetta_scripts",
"path": "colabs/colabs-pyrosetta.ipynb",
"matched_keywords": [
"BioPython"
],
"stars": 1,
"size": 8063,
"hexsha": "cb062c74b4ad13f7c21f1b81feb1585fa6e8c7a3",
"max_line_length": 269,
"avg_line_length": 32.7764227642,
"alphanum_fraction": 0.5753441647
} |
# Notebook from ithakker/CISC367-Projects
Path: Day16/.ipynb_checkpoints/regex_exercises-checkpoint.ipynb
# Regular Expression Exercises
* Debugger: When debugging regular expressions, the best tool is [Regex101](https://regex101.com/). This is an interactive tool that lets you visualize a regular expression in action.
* Tutorial: I tend to like RealPython's tutorials; here is theirs on [Regular Expressions](https://realpython.com/regex-python/).
* Tutorial: The [Official Python tutorial on Regular Expressions](https://docs.python.org/3/howto/regex.html) is not a bad introduction.
* Cheat Sheet: People often make use of [Cheat Sheets](https://www.debuggex.com/cheatsheet/regex/python) when they have to do a lot of Regular Expressions.
* Documentation: If you need it, the official [Python documentation on the `re` module](https://docs.python.org/3/library/re.html) can also be a resource._____no_output_____PLEASE FILL IN THE FOLLOWING:
* Your name:
* Link to the Github repo with this file:_____no_output_____
<code>
import re_____no_output_____
</code>
## 1. Introduction
**a)** Check whether the given strings contain `0xB0`. Display a boolean result as shown below._____no_output_____
<code>
line1 = 'start address: 0xA0, func1 address: 0xC0'
line2 = 'end address: 0xFF, func2 address: 0xB0'
assert bool(re.search(r'', line1)) == False
assert bool(re.search(r'', line2)) == True_____no_output_____
</code>
**b)** Replace all occurrences of `5` with `five` for the given string._____no_output_____
<code>
ip = 'They ate 5 apples and 5 oranges'
assert re.sub() == 'They ate five apples and five oranges'_____no_output_____
</code>
**c)** Replace first occurrence of `5` with `five` for the given string._____no_output_____
<code>
ip = 'They ate 5 apples and 5 oranges'
assert re.sub() == 'They ate five apples and 5 oranges'_____no_output_____
</code>
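One possible solution sketch for the two substitutions above (my own take, not the official answer): `re.sub` replaces every match by default, and the `count` argument limits how many are replaced.
<code>
ip = 'They ate 5 apples and 5 oranges'
# all occurrences are replaced by default
assert re.sub(r'5', 'five', ip) == 'They ate five apples and five oranges'
# count=1 stops after the first replacement
assert re.sub(r'5', 'five', ip, count=1) == 'They ate five apples and 5 oranges'
</code>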
**d)** For the given list, filter all elements that do *not* contain `e`._____no_output_____
<code>
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
assert [w for w in items if not re.search()] == ['goal', 'sit']_____no_output_____
</code>
**e)** Replace all occurrences of `note` irrespective of case with `X`._____no_output_____
<code>
ip = 'This note should not be NoTeD'
assert re.sub() == 'This X should not be XD'_____no_output_____
</code>
**f)** Check if `at` is present in the given byte input data._____no_output_____
<code>
ip = 'tiger imp goat'
assert bool(re.search()) == True_____no_output_____
</code>
**g)** For the given input string, display all lines not containing `start` irrespective of case._____no_output_____
<code>
para = '''good start
Start working on that
project you always wanted
stars are shining brightly
hi there
start and try to
finish the book
bye'''
pat = re.compile() ##### add your solution here
for line in para.split('\n'):
if not pat.search(line):
print(line)
"""project you always wanted
stars are shining brightly
hi there
finish the book
bye"""_____no_output_____
</code>
**h)** For the given list, filter all elements that contains either `a` or `w`._____no_output_____
<code>
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
##### add your solution here
assert [w for w in items if re.search() or re.search()] == ['goal', 'new', 'eat']_____no_output_____
</code>
**i)** For the given list, filter all elements that contains both `e` and `n`._____no_output_____
<code>
items = ['goal', 'new', 'user', 'sit', 'eat', 'dinner']
##### add your solution here
assert [w for w in items if re.search() and re.search()] == ['new', 'dinner']_____no_output_____
</code>
**j)** For the given string, replace `0xA0` with `0x7F` and `0xC0` with `0x1F`._____no_output_____
<code>
ip = 'start address: 0xA0, func1 address: 0xC0'
##### add your solution here
assert ___ == 'start address: 0x7F, func1 address: 0x1F'_____no_output_____
</code>
<br>
# 2. Anchors
**a)** Check if the given strings start with `be`._____no_output_____
<code>
line1 = 'be nice'
line2 = '"best!"'
line3 = 'better?'
line4 = 'oh no\nbear spotted'
pat = re.compile() ##### add your solution here
assert bool(pat.search(line1)) == True
assert bool(pat.search(line2)) == False
assert bool(pat.search(line3)) == True
assert bool(pat.search(line4)) == False_____no_output_____
</code>
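A possible solution for exercise **a)** (my sketch, not the official answer): `\A` anchors the match to the start of the whole string, unlike `^`, which matches at every line start under `re.M`.
<code>
pat = re.compile(r'\Abe')
assert bool(pat.search('be nice')) == True
assert bool(pat.search('"best!"')) == False
# \A does not match after the newline, unlike ^ with re.M
assert bool(pat.search('oh no\nbear spotted')) == False
</code>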
**b)** For the given input string, change only whole word `red` to `brown`_____no_output_____
<code>
words = 'bred red spread credible'
assert re.sub() == 'bred brown spread credible'_____no_output_____
</code>
**c)** For the given input list, filter all elements that contains `42` surrounded by word characters._____no_output_____
<code>
words = ['hi42bye', 'nice1423', 'bad42', 'cool_42a', 'fake4b']
assert [w for w in words if re.search()] == ['hi42bye', 'nice1423', 'cool_42a']_____no_output_____
</code>
**d)** For the given input list, filter all elements that start with `den` or end with `ly`._____no_output_____
<code>
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\n', 'dent']
assert [e for e in items if ] == ['lovely', '2 lonely', 'dent']_____no_output_____
</code>
**e)** For the given input string, change whole word `mall` to `1234` only if it is at the start of a line._____no_output_____
<code>
para = '''
ball fall wall tall
mall call ball pall
wall mall ball fall
mallet wallet malls'''
assert re.sub() == """ball fall wall tall
1234 call ball pall
wall mall ball fall
mallet wallet malls"""_____no_output_____
</code>
**f)** For the given list, filter all elements having a line starting with `den` or ending with `ly`._____no_output_____
<code>
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\nfar', 'dent']
##### add your solution here
assert ___ == ['lovely', '1\ndentist', '2 lonely', 'fly\nfar', 'dent']_____no_output_____
</code>
**g)** For the given input list, filter all whole elements `12\nthree` irrespective of case._____no_output_____
<code>
items = ['12\nthree\n', '12\nThree', '12\nthree\n4', '12\nthree']
##### add your solution here
assert ___ == ['12\nThree', '12\nthree']_____no_output_____
</code>
**h)** For the given input list, replace `hand` with `X` for all elements that start with `hand` followed by at least one word character._____no_output_____
<code>
items = ['handed', 'hand', 'handy', 'unhanded', 'handle', 'hand-2']
##### add your solution here
assert ___ == ['Xed', 'hand', 'Xy', 'unhanded', 'Xle', 'hand-2']_____no_output_____
</code>
**i)** For the given input list, filter all elements starting with `h`. Additionally, replace `e` with `X` for these filtered elements._____no_output_____
<code>
items = ['handed', 'hand', 'handy', 'unhanded', 'handle', 'hand-2']
##### add your solution here
assert ___ == ['handXd', 'hand', 'handy', 'handlX', 'hand-2']_____no_output_____
</code>
<br>
# 3. Alternation and Grouping
**a)** For the given input list, filter all elements that start with `den` or end with `ly`_____no_output_____
<code>
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\n', 'dent']
##### add your solution here
assert ___ == ['lovely', '2 lonely', 'dent']_____no_output_____
</code>
**b)** For the given list, filter all elements having a line starting with `den` or ending with `ly`._____no_output_____
<code>
items = ['lovely', '1\ndentist', '2 lonely', 'eden', 'fly\nfar', 'dent']
##### add your solution here
assert ___ == ['lovely', '1\ndentist', '2 lonely', 'fly\nfar', 'dent']_____no_output_____
</code>
**c)** For the given input strings, replace all occurrences of `removed` or `reed` or `received` or `refused` with `X`._____no_output_____
<code>
s1 = 'creed refuse removed read'
s2 = 'refused reed redo received'
pat = re.compile() ##### add your solution here
assert pat.sub('X', s1) == 'cX refuse X read'
assert pat.sub('X', s2) == 'X X redo X'_____no_output_____
</code>
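One way to solve exercise **c)** (a sketch, not necessarily the intended answer): group the common `re...ed` frame and alternate the middles, with an empty alternative covering `reed`.
<code>
pat = re.compile(r're(mov|ceiv|fus|)ed')
assert pat.sub('X', 'creed refuse removed read') == 'cX refuse X read'
assert pat.sub('X', 'refused reed redo received') == 'X X redo X'
</code>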
**d)** For the given input strings, replace all matches from the list `words` with `A`._____no_output_____
<code>
s1 = 'plate full of slate'
s2 = "slated for later, don't be late"
words = ['late', 'later', 'slated']
pat = re.compile() ##### add your solution here
assert pat.sub('A', s1) == 'pA full of sA'
assert pat.sub('A', s2) == "A for A, don't be A"_____no_output_____
</code>
**e)** Filter all whole elements from the input list `items` based on elements listed in `words`._____no_output_____
<code>
items = ['slate', 'later', 'plate', 'late', 'slates', 'slated ']
words = ['late', 'later', 'slated']
pat = re.compile() ##### add your solution here
##### add your solution here
assert ___ == ['later', 'late']_____no_output_____
</code>
<br>
# 4. Escaping metacharacters
**a)** Transform the given input strings to the expected output using same logic on both strings._____no_output_____
<code>
str1 = '(9-2)*5+qty/3'
str2 = '(qty+4)/2-(9-2)*5+pq/4'
##### add your solution here for str1
assert ___ == '35+qty/3'
##### add your solution here for str2
assert ___ == '(qty+4)/2-35+pq/4'_____no_output_____
</code>
**b)** Replace `(4)\|` with `2` only at the start or end of given input strings._____no_output_____
<code>
s1 = r'2.3/(4)\|6 foo 5.3-(4)\|'
s2 = r'(4)\|42 - (4)\|3'
s3 = 'two - (4)\\|\n'
pat = re.compile() ##### add your solution here
assert pat.sub('2', s1) == '2.3/(4)\\|6 foo 5.3-2'
assert pat.sub('2', s2) == '242 - (4)\\|3'
assert pat.sub('2', s3) == 'two - (4)\\|\n'_____no_output_____
</code>
**c)** Replace any matching element from the list `items` with `X` for the given input strings. Match the elements from `items` literally. Assume no two elements of `items` will result in any matching conflict._____no_output_____
<code>
items = ['a.b', '3+n', r'x\y\z', 'qty||price', '{n}']
pat = re.compile() ##### add your solution here
assert pat.sub('X', '0a.bcd') == '0Xcd'
assert pat.sub('X', 'E{n}AMPLE') == 'EXAMPLE'
assert pat.sub('X', r'43+n2 ax\y\ze') == '4X2 aXe'_____no_output_____
</code>
**d)** Replace backspace character `\b` with a single space character for the given input string._____no_output_____
<code>
ip = '123\b456'
ip_____no_output_____assert re.sub() == '123 456'_____no_output_____
</code>
**e)** Replace all occurrences of `\e` with `e`._____no_output_____
<code>
ip = r'th\er\e ar\e common asp\ects among th\e alt\ernations'
assert re.sub() == 'there are common aspects among the alternations'_____no_output_____
</code>
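A possible solution for exercise **e)** (my sketch): the backslash is a metacharacter, so it has to be escaped to match it literally.
<code>
ip = r'th\er\e ar\e common asp\ects among th\e alt\ernations'
assert re.sub(r'\\e', 'e', ip) == 'there are common aspects among the alternations'
</code>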
**f)** Replace any matching item from the list `eqns` with `X` for the given string `ip`. Match the items from `eqns` literally._____no_output_____
<code>
ip = '3-(a^b)+2*(a^b)-(a/b)+3'
eqns = ['(a^b)', '(a/b)', '(a^b)+2']
##### add your solution here
assert pat.sub('X', ip) == '3-X*X-X+3'_____no_output_____
</code>
<br>
# 5. Dot metacharacter and Quantifiers
Since the `.` metacharacter doesn't match the newline character by default, assume that the input strings in the following exercises will not contain newline characters.
**a)** Replace `42//5` or `42/5` with `8` for the given input._____no_output_____
<code>
ip = 'a+42//5-c pressure*3+42/5-14256'
assert re.sub() == 'a+8-c pressure*3+8-14256'_____no_output_____
</code>
**b)** For the list `items`, filter all elements starting with `hand` and ending with at most one more character or `le`._____no_output_____
<code>
items = ['handed', 'hand', 'handled', 'handy', 'unhand', 'hands', 'handle']
##### add your solution here
assert ___ == ['hand', 'handy', 'hands', 'handle']_____no_output_____
</code>
**c)** Use `re.split` to get the output as shown for the given input strings._____no_output_____
<code>
eqn1 = 'a+42//5-c'
eqn2 = 'pressure*3+42/5-14256'
eqn3 = 'r*42-5/3+42///5-42/53+a'
##### add your solution here for eqn1
assert ___ == ['a+', '-c']
##### add your solution here for eqn2
assert ___ == ['pressure*3+', '-14256']
##### add your solution here for eqn3
assert ___ == ['r*42-5/3+42///5-', '3+a']_____no_output_____
</code>
**d)** For the given input strings, remove everything from the first occurrence of `i` till end of the string._____no_output_____
<code>
s1 = 'remove the special meaning of such constructs'
s2 = 'characters while constructing'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'remove the spec'
assert pat.sub('', s2) == 'characters wh'_____no_output_____
</code>
**e)** For the given strings, construct a RE to get output as shown._____no_output_____
<code>
str1 = 'a+b(addition)'
str2 = 'a/b(division) + c%d(#modulo)'
str3 = 'Hi there(greeting). Nice day(a(b)'
remove_parentheses = re.compile() ##### add your solution here
assert remove_parentheses.sub('', str1) == 'a+b'
assert remove_parentheses.sub('', str2) == 'a/b + c%d'
assert remove_parentheses.sub('', str3) == 'Hi there. Nice day'_____no_output_____
</code>
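One way to solve exercise **e)** (a sketch, not the official answer): match a literal `(`, then any run of non-`)` characters, then the closing `)`; the negated set keeps the match from crossing a closing parenthesis.
<code>
remove_parentheses = re.compile(r'\([^)]*\)')
assert remove_parentheses.sub('', 'a+b(addition)') == 'a+b'
assert remove_parentheses.sub('', 'a/b(division) + c%d(#modulo)') == 'a/b + c%d'
assert remove_parentheses.sub('', 'Hi there(greeting). Nice day(a(b)') == 'Hi there. Nice day'
</code>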
**f)** Correct the given RE to get the expected output._____no_output_____
<code>
words = 'plink incoming tint winter in caution sentient'
change = re.compile(r'int|in|ion|ing|inco|inter|ink')
# wrong output
assert change.sub('X', words) == 'plXk XcomXg tX wXer X cautX sentient'
# expected output
change = re.compile() ##### add your solution here
assert change.sub('X', words) == 'plX XmX tX wX X cautX sentient'_____no_output_____
</code>
**g)** For the given greedy quantifiers, what would be the equivalent form using `{m,n}` representation?
* `?` is same as
* `*` is same as
* `+` is same as
**h)** `(a*|b*)` is the same as `(a|b)*`. True or False?
**i)** For the given input strings, remove everything from the first occurrence of `test` (irrespective of case) till end of the string, provided `test` isn't at the end of the string._____no_output_____
<code>
s1 = 'this is a Test'
s2 = 'always test your RE for corner cases'
s3 = 'a TEST of skill tests?'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'this is a Test'
assert pat.sub('', s2) == 'always '
assert pat.sub('', s3) == 'a '_____no_output_____
</code>
**j)** For the input list `words`, filter all elements starting with `s` and containing `e` and `t` in any order._____no_output_____
<code>
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['subtle', 'sets', 'site']_____no_output_____
</code>
**k)** For the input list `words`, remove all elements having less than `6` characters._____no_output_____
<code>
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['sequoia', 'subtle', 'exhibit']_____no_output_____
</code>
**l)** For the input list `words`, filter all elements starting with `s` or `t` and having a maximum of `6` characters._____no_output_____
<code>
words = ['sequoia', 'subtle', 'exhibit', 'asset', 'sets', 'tests', 'site']
##### add your solution here
assert ___ == ['subtle', 'sets', 'tests', 'site']_____no_output_____
</code>
**m)** Can you reason out why this code results in the output shown? The aim was to remove all `<characters>` patterns but not the `<>` ones. The expected result was `'a 1<> b 2<> c'`._____no_output_____
<code>
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
assert re.sub(r'<.+?>', '', ip) == 'a 1 2'_____no_output_____
</code>
**n)** Use `re.split` to get the output as shown below for given input strings._____no_output_____
<code>
s1 = 'go there // "this // that"'
s2 = 'a//b // c//d e//f // 4//5'
s3 = '42// hi//bye//see // carefully'
pat = re.compile() ##### add your solution here
assert pat.split(s1) == ['go there', '"this // that"']
assert pat.split(s2) == ['a//b', 'c//d e//f // 4//5']
assert pat.split(s3) == ['42// hi//bye//see', 'carefully']_____no_output_____
</code>
<br>
# 6. Working with matched portions
**a)** For the given strings, extract the matching portion from first `is` to last `t`._____no_output_____
<code>
str1 = 'This the biggest fruit you have seen?'
str2 = 'Your mission is to read and practice consistently'
pat = re.compile() ##### add your solution here
##### add your solution here for str1
assert ___ == 'is the biggest fruit'
##### add your solution here for str2
assert ___ == 'ission is to read and practice consistent'_____no_output_____
</code>
**b)** Find the starting index of first occurrence of `is` or `the` or `was` or `to` for the given input strings._____no_output_____
<code>
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
s3 = 'this is good bye then'
s4 = 'who was there to see?'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 12
##### add your solution here for s2
assert ___ == 4
##### add your solution here for s3
assert ___ == 2
##### add your solution here for s4
assert ___ == 4_____no_output_____
</code>
**c)** Find the starting index of last occurrence of `is` or `the` or `was` or `to` for the given input strings._____no_output_____
<code>
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
s3 = 'this is good bye then'
s4 = 'who was there to see?'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 12
##### add your solution here for s2
assert ___ == 18
##### add your solution here for s3
assert ___ == 17
##### add your solution here for s4
assert ___ == 14_____no_output_____
</code>
**d)** The given input string contains `:` exactly once. Extract all characters after the `:` as output._____no_output_____
<code>
ip = 'fruits:apple, mango, guava, blueberry'
##### add your solution here
assert ___ == 'apple, mango, guava, blueberry'_____no_output_____
</code>
**e)** The given input strings contains some text followed by `-` followed by a number. Replace that number with its `log` value using `math.log()`._____no_output_____
<code>
s1 = 'first-3.14'
s2 = 'next-123'
pat = re.compile() ##### add your solution here
import math
assert pat.sub() == 'first-1.144222799920162'
assert pat.sub() == 'next-4.812184355372417'_____no_output_____
</code>
**f)** Replace all occurrences of `par` with `spar`, `spare` with `extra` and `park` with `garden` for the given input strings._____no_output_____
<code>
str1 = 'apartment has a park'
str2 = 'do you have a spare cable'
str3 = 'write a parser'
##### add your solution here
assert pat.sub() == 'aspartment has a garden'
assert pat.sub() == 'do you have a extra cable'
assert pat.sub() == 'write a sparser'_____no_output_____
</code>
**g)** Extract all words between `(` and `)` from the given input string as a list. Assume that the input will not contain any broken parentheses._____no_output_____
<code>
ip = 'another (way) to reuse (portion) matched (by) capture groups'
assert re.findall() == ['way', 'portion', 'by']_____no_output_____
</code>
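A possible solution for exercise **g)** (my sketch): capture the word characters between literal parentheses; `re.findall` then returns only the captured groups.
<code>
ip = 'another (way) to reuse (portion) matched (by) capture groups'
assert re.findall(r'\((\w+)\)', ip) == ['way', 'portion', 'by']
</code>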
**h)** Extract all occurrences of `<` up to next occurrence of `>`, provided there is at least one character in between `<` and `>`._____no_output_____
<code>
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
assert re.findall() == ['<apple>', '<> b<bye>', '<> c<cat>']_____no_output_____
</code>
**i)** Use `re.findall` to get the output as shown below for the given input strings. Note the characters used in the input strings carefully._____no_output_____
<code>
row1 = '-2,5 4,+3 +42,-53 4356246,-357532354 '
row2 = '1.32,-3.14 634,5.63 63.3e3,9907809345343.235 '
pat = re.compile() ##### add your solution here
assert pat.findall(row1) == [('-2', '5'), ('4', '+3'), ('+42', '-53'), ('4356246', '-357532354')]
pat.findall(row2) == [('1.32', '-3.14'), ('634', '5.63'), ('63.3e3', '9907809345343.235')]_____no_output_____
</code>
**j)** This is an extension to previous question.
* For `row1`, find the sum of integers of each tuple element. For example, sum of `-2` and `5` is `3`.
* For `row2`, find the sum of floating-point numbers of each tuple element. For example, sum of `1.32` and `-3.14` is `-1.82`._____no_output_____
<code>
row1 = '-2,5 4,+3 +42,-53 4356246,-357532354 '
row2 = '1.32,-3.14 634,5.63 63.3e3,9907809345343.235 '
# should be same as previous question
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == [3, 7, -11, -353176108]
##### add your solution here for row2
assert ___ == [-1.82, 639.63, 9907809408643.234]_____no_output_____
</code>
**k)** Use `re.split` to get the output as shown below._____no_output_____
<code>
ip = '42:no-output;1000:car-truck;SQEX49801'
assert re.split() == ['42', 'output', '1000', 'truck', 'SQEX49801']_____no_output_____
</code>
**l)** For the given list of strings, change the elements into a tuple of original element and number of times `t` occurs in that element._____no_output_____
<code>
words = ['sequoia', 'attest', 'tattletale', 'asset']
##### add your solution here
assert ___ == [('sequoia', 0), ('attest', 3), ('tattletale', 4), ('asset', 1)]_____no_output_____
</code>
**m)** The given input string has fields separated by `:`. Each field contains four uppercase alphabets followed optionally by two digits. Ignore the last field, which is empty. See [docs.python: Match.groups](https://docs.python.org/3/library/re.html#re.Match.groups) and use `re.finditer` to get the output as shown below. If the optional digits aren't present, show `'NA'` instead of `None`._____no_output_____
<code>
ip = 'TWXA42:JWPA:NTED01:'
##### add your solution here
assert ___ == [('TWXA', '42'), ('JWPA', 'NA'), ('NTED', '01')]_____no_output_____
</code>
> Note that this is different from `re.findall`, which will just give an empty string instead of `None` when a capture group doesn't participate.
**n)** Convert the comma separated strings to corresponding `dict` objects as shown below._____no_output_____
<code>
row1 = 'name:rohan,maths:75,phy:89,'
row2 = 'name:rose,maths:88,phy:92,'
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == {'name': 'rohan', 'maths': '75', 'phy': '89'}
##### add your solution here for row2
assert ___ == {'name': 'rose', 'maths': '88', 'phy': '92'}_____no_output_____
</code>
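One possible solution for exercise **n)** (a sketch): capture key and value around the `:` and feed the list of tuples from `findall` straight into `dict`.
<code>
pat = re.compile(r'(\w+):(\w+),')
assert dict(pat.findall('name:rohan,maths:75,phy:89,')) == {'name': 'rohan', 'maths': '75', 'phy': '89'}
assert dict(pat.findall('name:rose,maths:88,phy:92,')) == {'name': 'rose', 'maths': '88', 'phy': '92'}
</code>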
<br>
# 7. Character class
**a)** For the list `items`, filter all elements starting with `hand` and ending with `s` or `y` or `le`._____no_output_____
<code>
items = ['-handy', 'hand', 'handy', 'unhand', 'hands', 'handle']
##### add your solution here
assert ___ == ['handy', 'hands', 'handle']_____no_output_____
</code>
**b)** Replace all whole words `reed` or `read` or `red` with `X`._____no_output_____
<code>
ip = 'redo red credible :read: rod reed'
##### add your solution here
assert ___ == 'redo X credible :X: rod X'_____no_output_____
</code>
**c)** For the list `words`, filter all elements containing `e` or `i` followed by `l` or `n`. Note that the order mentioned should be followed._____no_output_____
<code>
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'unicorn', 'eel']_____no_output_____
</code>
**d)** For the list `words`, filter all elements containing `e` or `i` and `l` or `n` in any order._____no_output_____
<code>
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'unicorn', 'newer', 'eel']_____no_output_____
</code>
**e)** Extract all hex character sequences, with `0x` optional prefix. Match the characters case insensitively, and the sequences shouldn't be surrounded by other word characters._____no_output_____
<code>
str1 = '128A foo 0xfe32 34 0xbar'
str2 = '0XDEADBEEF place 0x0ff1ce bad'
hex_seq = re.compile() ##### add your solution here
##### add your solution here for str1
assert ___ == ['128A', '0xfe32', '34']
##### add your solution here for str2
assert ___ == ['0XDEADBEEF', '0x0ff1ce', 'bad']_____no_output_____
</code>
**f)** Delete from `(` to the next occurrence of `)` unless they contain parentheses characters in between._____no_output_____
<code>
str1 = 'def factorial()'
str2 = 'a/b(division) + c%d(#modulo) - (e+(j/k-3)*4)'
str3 = 'Hi there(greeting). Nice day(a(b)'
remove_parentheses = re.compile() ##### add your solution here
assert remove_parentheses.sub('', str1) == 'def factorial'
assert remove_parentheses.sub('', str2) == 'a/b + c%d - (e+*4)'
assert remove_parentheses.sub('', str3) == 'Hi there. Nice day(a'_____no_output_____
</code>
**g)** For the list `words`, filter all elements not starting with `e` or `p` or `u`._____no_output_____
<code>
words = ['surrender', 'unicorn', 'newer', 'door', 'empty', 'eel', 'pest']
##### add your solution here
assert ___ == ['surrender', 'newer', 'door']_____no_output_____
</code>
**h)** For the list `words`, filter all elements not containing `u` or `w` or `ee` or `-`._____no_output_____
<code>
words = ['p-t', 'you', 'tea', 'heel', 'owe', 'new', 'reed', 'ear']
##### add your solution here
assert ___ == ['tea', 'ear']_____no_output_____
</code>
**i)** The given input strings contain fields separated by `,` and fields can be empty too. Replace last three fields with `WHTSZ323`._____no_output_____
<code>
row1 = '(2),kite,12,,D,C,,'
row2 = 'hi,bye,sun,moon'
pat = re.compile() ##### add your solution here
assert pat.sub() == '(2),kite,12,,D,WHTSZ323'
assert pat.sub() == 'hi,WHTSZ323'_____no_output_____
</code>
**j)** Split the given strings based on consecutive sequence of digit or whitespace characters._____no_output_____
<code>
str1 = 'lion \t Ink32onion Nice'
str2 = '**1\f2\n3star\t7 77\r**'
pat = re.compile() ##### add your solution here
assert pat.split(str1) == ['lion', 'Ink', 'onion', 'Nice']
assert pat.split(str2) == ['**', 'star', '**']_____no_output_____
</code>
**k)** Delete all occurrences of the sequence `<characters>` where `characters` is one or more non `>` characters and cannot be empty._____no_output_____
<code>
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
##### add your solution here
assert ___ == 'a 1<> b 2<> c'_____no_output_____
</code>
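A possible solution for exercise **k)** (my sketch): `[^>]+` requires at least one non-`>` character, so empty `<>` pairs survive.
<code>
ip = 'a<apple> 1<> b<bye> 2<> c<cat>'
assert re.sub(r'<[^>]+>', '', ip) == 'a 1<> b 2<> c'
</code>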
**l)** `\b[a-z](on|no)[a-z]\b` is the same as `\b[a-z][on]{2}[a-z]\b`. True or False? Sample input lines shown below might help to understand the differences, if any._____no_output_____
<code>
print('known\nmood\nknow\npony\ninns')
known
mood
know
pony
inns_____no_output_____
</code>
**m)** For the given list, filter all elements containing any number sequence greater than `624`._____no_output_____
<code>
items = ['hi0000432abcd', 'car00625', '42_624 0512', '3.14 96 2 foo1234baz']
##### add your solution here
assert ___ == ['car00625', '3.14 96 2 foo1234baz']_____no_output_____
</code>
**n)** Count the maximum depth of nested braces for the given strings. Unbalanced or wrongly ordered braces should return `-1`. Note that this will require a mix of regular expressions and Python code._____no_output_____
<code>
def max_nested_braces(ip):
##### add your solution here
assert max_nested_braces('a*b') == 0
assert max_nested_braces('}a+b{') == -1
assert max_nested_braces('a*b+{}') == 1
assert max_nested_braces('{{a+2}*{b+c}+e}') == 2
assert max_nested_braces('{{a+2}*{b+{c*d}}+e}') == 3
assert max_nested_braces('{{a+2}*{\n{b+{c*d}}+e*d}}') == 4
assert max_nested_braces('a*{b+c*{e*3.14}}}') == -1_____no_output_____
</code>
**o)** By default, `str.split` method will split on whitespace and remove empty strings from the result. Which `re` module function would you use to replicate this functionality?_____no_output_____
<code>
ip = ' \t\r so pole\t\t\t\n\nlit in to \r\n\v\f '
assert ip.split() == ['so', 'pole', 'lit', 'in', 'to']
##### add your solution here
assert ___ == ['so', 'pole', 'lit', 'in', 'to']_____no_output_____
</code>
**p)** Convert the given input string to two different lists as shown below._____no_output_____
<code>
ip = 'price_42 roast^\t\n^-ice==cat\neast'
##### add your solution here
assert ___ == ['price_42', 'roast', 'ice', 'cat', 'east']
##### add your solution here
assert ___ == ['price_42', ' ', 'roast', '^\t\n^-', 'ice', '==', 'cat', '\n', 'east']_____no_output_____
</code>
**q)** Filter all elements whose first non-whitespace character is not a `#` character. Any element made up of only whitespace characters should be ignored as well._____no_output_____
<code>
items = [' #comment', '\t\napple #42', '#oops', 'sure', 'no#1', '\t\r\f']
##### add your solution here
assert ___ == ['\t\napple #42', 'sure', 'no#1']_____no_output_____
</code>
<br>
# 8. Groupings and backreferences
**a)** Replace the space character that occurs after a word ending with `a` or `r` with a newline character._____no_output_____
<code>
ip = 'area not a _a2_ roar took 22'
assert re.sub() == """area
not a
_a2_ roar
took 22"""_____no_output_____
</code>
**b)** Add `[]` around words starting with `s` and containing `e` and `t` in any order._____no_output_____
<code>
ip = 'sequoia subtle exhibit asset sets tests site'
##### add your solution here
assert ___ == 'sequoia [subtle] exhibit asset [sets] tests [site]'_____no_output_____
</code>
**c)** Replace all whole words with `X` that start and end with the same word character. Single character word should get replaced with `X` too, as it satisfies the stated condition._____no_output_____
<code>
ip = 'oreo not a _a2_ roar took 22'
##### add your solution here
assert ___ == 'X not X X X took X'_____no_output_____
</code>
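One way to solve exercise **c)** (a sketch, not the official answer): capture the first word character and demand the word end with the same character via a backreference; the middle group is optional so that single-character words qualify too.
<code>
ip = 'oreo not a _a2_ roar took 22'
assert re.sub(r'\b(\w)(\w*\1)?\b', 'X', ip) == 'X not X X X took X'
</code>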
**d)** Convert the given **markdown** headers to corresponding **anchor** tag. Consider the input to start with one or more `#` characters followed by space and word characters. The `name` attribute is constructed by converting the header to lowercase and replacing spaces with hyphens. Can you do it without using a capture group?_____no_output_____
<code>
header1 = '# Regular Expressions'
header2 = '## Compiling regular expressions'
##### add your solution here for header1
assert ___ == '# <a name="regular-expressions"></a>Regular Expressions'
##### add your solution here for header2
assert ___ == '## <a name="compiling-regular-expressions"></a>Compiling regular expressions'_____no_output_____
</code>
**e)** Convert the given **markdown** anchors to corresponding **hyperlinks**._____no_output_____
<code>
anchor1 = '# <a name="regular-expressions"></a>Regular Expressions'
anchor2 = '## <a name="subexpression-calls"></a>Subexpression calls'
##### add your solution here for anchor1
assert ___ == '[Regular Expressions](#regular-expressions)'
##### add your solution here for anchor2
assert ___ == '[Subexpression calls](#subexpression-calls)'_____no_output_____
</code>
**f)** Count the number of whole words that have at least two occurrences of consecutive repeated alphabets. For example, words like `stillness` and `Committee` should be counted but not words like `root` or `readable` or `rotational`._____no_output_____
<code>
ip = '''oppressed abandon accommodation bloodless
carelessness committed apparition innkeeper
occasionally afforded embarrassment foolishness
depended successfully succeeded
possession cleanliness suppress'''
##### add your solution here
assert ___ == 13_____no_output_____
</code>
**g)** For the given input string, replace all occurrences of digit sequences with only the unique non-repeating sequence. For example, `232323` should be changed to `23` and `897897` should be changed to `897`. If there are no repeats (for example `1234`) or if the repeats end prematurely (for example `12121`), it should not be changed._____no_output_____
<code>
ip = '1234 2323 453545354535 9339 11 60260260'
##### add your solution here
assert ___ == '1234 23 4535 9339 1 60260260'_____no_output_____
</code>
**h)** Replace sequences made up of words separated by `:` or `.` by the first word of the sequence. Such sequences will end when `:` or `.` is not followed by a word character._____no_output_____
<code>
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == 'wow hi-2 bye kite'_____no_output_____
</code>
**i)** Replace sequences made up of words separated by `:` or `.` by the last word of the sequence. Such sequences will end when `:` or `.` is not followed by a word character._____no_output_____
<code>
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == 'five hi-2 bye water'_____no_output_____
</code>
**j)** Split the given input string on one or more repeated sequence of `cat`._____no_output_____
<code>
ip = 'firecatlioncatcatcatbearcatcatparrot'
##### add your solution here
assert ___ == ['fire', 'lion', 'bear', 'parrot']_____no_output_____
</code>
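A possible solution for exercise **j)** (my sketch): wrap `cat` in a non-capturing group so the `+` applies to the whole sequence and `re.split` doesn't return the separator.
<code>
ip = 'firecatlioncatcatcatbearcatcatparrot'
assert re.split(r'(?:cat)+', ip) == ['fire', 'lion', 'bear', 'parrot']
</code>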
**k)** For the given input string, find all occurrences of digit sequences with at least one repeating sequence. For example, `232323` and `897897`. If the repeats end prematurely, for example `12121`, it should not be matched._____no_output_____
<code>
ip = '1234 2323 453545354535 9339 11 60260260'
pat = re.compile() ##### add your solution here
# entire sequences in the output
##### add your solution here
assert ___ == ['2323', '453545354535', '11']
# only the unique sequence in the output
##### add your solution here
assert ___ == ['23', '4535', '1']_____no_output_____
</code>
**l)** Convert the comma separated strings to corresponding `dict` objects as shown below. The keys are `name`, `maths` and `phy` for the three fields in the input strings._____no_output_____
<code>
row1 = 'rohan,75,89'
row2 = 'rose,88,92'
pat = re.compile() ##### add your solution here
##### add your solution here for row1
assert ___ == {'name': 'rohan', 'maths': '75', 'phy': '89'}
##### add your solution here for row2
assert ___ == {'name': 'rose', 'maths': '88', 'phy': '92'}_____no_output_____
</code>
**m)** Surround all whole words with `()`. Additionally, if the whole word is `imp` or `ant`, delete them. Can you do it with single substitution?_____no_output_____
<code>
ip = 'tiger imp goat eagle ant important'
##### add your solution here
assert ___ == '(tiger) () (goat) (eagle) () (important)'_____no_output_____
</code>
**n)** Filter all elements that contains a sequence of lowercase alphabets followed by `-` followed by digits. They can be optionally surrounded by `{{` and `}}`. Any partial match shouldn't be part of the output._____no_output_____
<code>
ip = ['{{apple-150}}', '{{mango2-100}}', '{{cherry-200', 'grape-87']
##### add your solution here
assert ___ == ['{{apple-150}}', 'grape-87']_____no_output_____
</code>
**o)** The given input string has sequences made up of words separated by `:` or `.` and such sequences will end when `:` or `.` is not followed by a word character. For all such sequences, display only the last word followed by `-` followed by first word._____no_output_____
<code>
ip = 'wow:Good:2_two:five: hi-2 bye kite.777.water.'
##### add your solution here
assert ___ == ['five-wow', 'water-kite']_____no_output_____
</code>
<br>
# 9. Lookarounds
Starting from here, all following problems are optional!
Please use lookarounds to solve the following exercises even if you can do without them, except in cases where lookarounds cannot be used, such as variable-length lookbehinds.
**a)** Replace all whole words with `X` unless it is preceded by `(` character._____no_output_____
<code>
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X X) X (mango) (grape'_____no_output_____
</code>
**b)** Replace all whole words with `X` unless it is followed by `)` character._____no_output_____
<code>
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X berry) X (mango) (X'_____no_output_____
</code>
**c)** Replace all whole words with `X` unless it is preceded by `(` or followed by `)` characters._____no_output_____
<code>
ip = '(apple) guava berry) apple (mango) (grape'
##### add your solution here
assert ___ == '(apple) X berry) X (mango) (grape'_____no_output_____
</code>
**d)** Extract all whole words that do not end with `e` or `n`._____no_output_____
<code>
ip = 'at row on urn e note dust n'
##### add your solution here
assert ___ == ['at', 'row', 'dust']_____no_output_____
</code>
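One way to solve exercise **d)** (a sketch): match whole words, then use a negative lookbehind to reject those whose last character is `e` or `n`.
<code>
ip = 'at row on urn e note dust n'
assert re.findall(r'\b\w+\b(?<![en])', ip) == ['at', 'row', 'dust']
</code>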
**e)** Extract all whole words that do not start with `a` or `d` or `n`._____no_output_____
<code>
ip = 'at row on urn e note dust n'
##### add your solution here
assert ___ == ['row', 'on', 'urn', 'e']_____no_output_____
</code>
**f)** Extract all whole words only if they are followed by `:` or `,` or `-`._____no_output_____
<code>
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['poke', 'so', 'ever']_____no_output_____
</code>
**g)** Extract all whole words only if they are preceded by `=` or `/` or `-`._____no_output_____
<code>
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'is', 'sit']_____no_output_____
</code>
**h)** Extract all whole words only if they are preceded by `=` or `:` and followed by `:` or `.`._____no_output_____
<code>
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'ink']_____no_output_____
</code>
**i)** Extract all whole words only if they are preceded by `=` or `:` or `.` or `(` or `-` and not followed by `.` or `/`._____no_output_____
<code>
ip = 'poke,on=-=so:ink.to/is(vast)ever-sit'
##### add your solution here
assert ___ == ['so', 'vast', 'sit']_____no_output_____
</code>
**j)** Remove leading and trailing whitespaces from all the individual fields where `,` is the field separator._____no_output_____
<code>
csv1 = ' comma ,separated ,values \t\r '
csv2 = 'good bad,nice ice , 42 , , stall small'
remove_whitespace = re.compile() ##### add your solution here
assert remove_whitespace.sub('', csv1) == 'comma,separated,values'
assert remove_whitespace.sub('', csv2) == 'good bad,nice ice,42,,stall small'_____no_output_____
</code>
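One possible solution for exercise **j)** (my sketch): strip whitespace runs that touch a comma or either end of the string. `(?<![^,])` succeeds at the start of the string or right after a comma, and `(?![^,])` at the end of the string or right before a comma.
<code>
remove_whitespace = re.compile(r'(?<![^,])\s+|\s+(?![^,])')
assert remove_whitespace.sub('', ' comma ,separated ,values \t\r ') == 'comma,separated,values'
assert remove_whitespace.sub('', 'good bad,nice ice , 42 , , stall small') == 'good bad,nice ice,42,,stall small'
</code>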
**k)** Filter all elements that satisfy all of these rules:
* should have at least two alphabets
* should have at least 3 digits
* should have at least one special character among `%` or `*` or `#` or `$`
* should not end with a whitespace character_____no_output_____
<code>
pwds = ['hunter2', 'F2H3u%9', '*X3Yz3.14\t', 'r2_d2_42', 'A $B C1234']
##### add your solution here
assert ___ == ['F2H3u%9', 'A $B C1234']_____no_output_____
</code>
**l)** For the given string, surround all whole words with `{}` except for whole words `par` and `cat` and `apple`._____no_output_____
<code>
ip = 'part; cat {super} rest_42 par scatter apple spar'
##### add your solution here
assert ___ == '{part}; cat {{super}} {rest_42} par {scatter} apple {spar}'_____no_output_____
</code>
**m)** Extract integer portion of floating-point numbers for the given string. A number ending with `.` and no further digits should not be considered._____no_output_____
<code>
ip = '12 ab32.4 go 5 2. 46.42 5'
##### add your solution here
assert ___ == ['32', '46']_____no_output_____
</code>
**n)** For the given input strings, extract all overlapping two character sequences._____no_output_____
<code>
s1 = 'apple'
s2 = '1.2-3:4'
pat = re.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == ['ap', 'pp', 'pl', 'le']
##### add your solution here for s2
assert ___ == ['1.', '.2', '2-', '-3', '3:', ':4']_____no_output_____
</code>
**o)** The given input strings contain fields separated by `:` character. Delete `:` and the last field if there is a digit character anywhere before the last field._____no_output_____
<code>
s1 = '42:cat'
s2 = 'twelve:a2b'
s3 = 'we:be:he:0:a:b:bother'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == '42'
assert pat.sub('', s2) == 'twelve:a2b'
assert pat.sub('', s3) == 'we:be:he:0:a:b'_____no_output_____
</code>
**p)** Extract all whole words unless they are preceded by `:` or `<=>` or `----` or `#`._____no_output_____
<code>
ip = '::very--at<=>row|in.a_b#b2c=>lion----east'
##### add your solution here
assert ___ == ['at', 'in', 'a_b', 'lion']_____no_output_____
</code>
**q)** Match strings if it contains `qty` followed by `price` but not if there is **whitespace** or the string `error` between them._____no_output_____
<code>
str1 = '23,qty,price,42'
str2 = 'qty price,oh'
str3 = '3.14,qty,6,errors,9,price,3'
str4 = '42\nqty-6,apple-56,price-234,error'
str5 = '4,price,3.14,qty,4'
neg = re.compile() ##### add your solution here
assert bool(neg.search(str1)) == True
assert bool(neg.search(str2)) == False
assert bool(neg.search(str3)) == False
assert bool(neg.search(str4)) == True
assert bool(neg.search(str5)) == False_____no_output_____
</code>
**r)** Can you reason out why the output shown is different for these two regular expressions?_____no_output_____
<code>
ip = 'I have 12, he has 2!'
assert re.sub(r'\b..\b', r'{\g<0>}', ip) == '{I }have {12}{, }{he} has{ 2}!'
assert re.sub(r'(?<!\w)..(?!\w)', r'{\g<0>}', ip) == 'I have {12}, {he} has {2!}'_____no_output_____
</code>
<br>
# 10. Flags
**a)** Remove from first occurrence of `hat` to last occurrence of `it` for the given input strings. Match these markers case insensitively._____no_output_____
<code>
s1 = 'But Cool THAT\nsee What okay\nwow quite'
s2 = 'it this hat is sliced HIT.'
pat = re.compile() ##### add your solution here
assert pat.sub('', s1) == 'But Cool Te'
assert pat.sub('', s2) == 'it this .'_____no_output_____
</code>
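A possible solution for exercise **a)** (my sketch): combine `re.I` for case-insensitive markers with `re.S` so that `.` crosses newlines; the greedy `.*` stretches to the last `it`.
<code>
pat = re.compile(r'hat.*it', flags=re.I|re.S)
assert pat.sub('', 'But Cool THAT\nsee What okay\nwow quite') == 'But Cool Te'
assert pat.sub('', 'it this hat is sliced HIT.') == 'it this .'
</code>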
**b)** Delete from `start` if it is at the beginning of a line up to the next occurrence of the `end` at the end of a line. Match these markers case insensitively._____no_output_____
<code>
para = '''
good start
start working on that
project you always wanted
to, do not let it end
hi there
start and end the end
42
Start and try to
finish the End
bye'''
pat = re.compile() ##### add your solution here
assert pat.sub('', para) == """
good start
hi there
42
bye"""_____no_output_____
</code>
**c)** For the given input strings, match all of these three patterns:
* `This` case sensitively
* `nice` and `cool` case insensitively_____no_output_____
<code>
s1 = 'This is nice and Cool'
s2 = 'Nice and cool this is'
s3 = 'What is so nice and cool about This?'
pat = re.compile() ##### add your solution here
assert bool(pat.search(s1)) == True
assert bool(pat.search(s2)) == False
assert bool(pat.search(s3)) == True_____no_output_____
</code>
**d)** For the given input strings, match if the string begins with `Th` and also contains a line that starts with `There`._____no_output_____
<code>
s1 = 'There there\nHave a cookie'
s2 = 'This is a mess\nYeah?\nThereeeee'
s3 = 'Oh\nThere goes the fun'
pat = re.compile() ##### add your solution here
assert bool(pat.search(s1)) == True
assert bool(pat.search(s2)) == True
assert bool(pat.search(s3)) == False_____no_output_____
</code>
**e)** Explore what the `re.DEBUG` flag does. Here's some example patterns to check out.
* `re.compile(r'\Aden|ly\Z', flags=re.DEBUG)`
* `re.compile(r'\b(0x)?[\da-f]+\b', flags=re.DEBUG)`
* `re.compile(r'\b(?:0x)?[\da-f]+\b', flags=re.I|re.DEBUG)`
<br>
# 11. Unicode
**a)** Output `True` or `False` depending on input string made up of ASCII characters or not. Consider the input to be non-empty strings and any character that isn't part of 7-bit ASCII set should give `False`. Do you need regular expressions for this?_____no_output_____
<code>
str1 = '123—456'
str2 = 'good fοοd'
str3 = 'happy learning!'
str4 = 'İıſK'
##### add your solution here for str1
assert ___ == False
##### add your solution here for str2
assert ___ == False
##### add your solution here for str3
assert ___ == True
##### add your solution here for str4
assert ___ == False_____no_output_____
</code>
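For exercise **a)**, regular expressions aren't needed: `str.isascii()` (Python 3.7+) answers the question directly. A sketch:
<code>
assert '123—456'.isascii() == False
assert 'happy learning!'.isascii() == True
</code>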
**b)** Does the `.` metacharacter with the `re.ASCII` flag enabled match non-ASCII characters?
**c)** Explore the following Q&A threads.
* [stackoverflow: remove powered number from string](https://stackoverflow.com/questions/57553721/remove-powered-number-from-string-in-python)
* [stackoverflow: regular expression for French characters](https://stackoverflow.com/questions/1922097/regular-expression-for-french-characters)
<br>
# 12. regex module
This part is super optional; it has you using the non-builtin `regex` module (https://pypi.org/project/regex/). I've never actually tried it. I skimmed through its features, and it doesn't strike me as adding *that* much more functionality.
**a)** Filter all elements whose first non-whitespace character is not a `#` character. Any element made up of only whitespace characters should be ignored as well._____no_output_____
<code>
items = [' #comment', '\t\napple #42', '#oops', 'sure', 'no#1', '\t\r\f']
##### add your solution here
assert ___ == ['\t\napple #42', 'sure', 'no#1']_____no_output_____
</code>
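A possible solution for exercise **a)** (my sketch; the pattern works the same with the built-in `re` module): require the first non-whitespace character to exist and not be `#`.
<code>
import regex  # pip install regex

items = [' #comment', '\t\napple #42', '#oops', 'sure', 'no#1', '\t\r\f']
assert [e for e in items if regex.search(r'\A\s*[^#\s]', e)] == ['\t\napple #42', 'sure', 'no#1']
</code>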
**b)** Replace sequences made up of words separated by `:` or `.` by the first word of the sequence and the separator. Such sequences will end when `:` or `.` is not followed by a word character._____no_output_____
<code>
ip = 'wow:Good:2_two:five: hi bye kite.777.water.'
##### add your solution here
assert ___ == 'wow: hi bye kite.'_____no_output_____
</code>
**c)** The given list of strings has fields separated by `:` character. Delete `:` and the last field if there is a digit character anywhere before the last field._____no_output_____
<code>
items = ['42:cat', 'twelve:a2b', 'we:be:he:0:a:b:bother']
##### add your solution here
assert ___ == ['42', 'twelve:a2b', 'we:be:he:0:a:b']_____no_output_____
</code>
**d)** Extract all whole words unless they are preceded by `:` or `<=>` or `----` or `#`._____no_output_____
<code>
ip = '::very--at<=>row|in.a_b#b2c=>lion----east'
##### add your solution here
assert ___ == ['at', 'in', 'a_b', 'lion']_____no_output_____
</code>
**e)** The given input string has fields separated by `:` character. Extract all fields if the previous field contains a digit character._____no_output_____
<code>
ip = 'vast:a2b2:ride:in:awe:b2b:3list:end'
##### add your solution here
assert ___ == ['ride', '3list', 'end']_____no_output_____
</code>
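One possible solution, again with variable-length lookbehind, requires the previous field to contain a digit:_____no_output_____
<code>
assert regex.findall(r'(?<=\w*\d\w*:)[^:]+', ip) == ['ride', '3list', 'end']_____no_output_____
</code>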
**f)** The given input string has fields separated by `:` character. Delete all fields, including the separator, unless the field contains a digit character. Stop deleting once a field with digit character is found._____no_output_____
<code>
row1 = 'vast:a2b2:ride:in:awe:b2b:3list:end'
row2 = 'um:no:low:3e:s4w:seer'
pat = regex.compile() ##### add your solution here
assert pat.sub('', row1) == 'a2b2:ride:in:awe:b2b:3list:end'
assert pat.sub('', row2) == '3e:s4w:seer'_____no_output_____
</code>
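One possible solution anchors at the start of the string and consumes whole digit-free fields together with their trailing `:`:_____no_output_____
<code>
pat = regex.compile(r'\A([^\d:]+:)+')_____no_output_____
</code>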
**g)** For the given input strings, extract `if` followed by any number of nested parentheses. Assume that there will be only one such pattern per input string._____no_output_____
<code>
ip1 = 'for (((i*3)+2)/6) if(3-(k*3+4)/12-(r+2/3)) while()'
ip2 = 'if+while if(a(b)c(d(e(f)1)2)3) for(i=1)'
pat = regex.compile() ##### add your solution here
assert pat.search(ip1)[0] == 'if(3-(k*3+4)/12-(r+2/3))'
assert pat.search(ip2)[0] == 'if(a(b)c(d(e(f)1)2)3)'_____no_output_____
</code>
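One possible solution uses the `regex` module's subpattern recursion `(?1)` to match balanced parentheses:_____no_output_____
<code>
pat = regex.compile(r'if(\((?:[^()]|(?1))*\))')_____no_output_____
</code>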
**h)** Read about `POSIX` flag from https://pypi.org/project/regex/. Is the following code snippet showing the correct output?_____no_output_____
<code>
words = 'plink incoming tint winter in caution sentient'
change = regex.compile(r'int|in|ion|ing|inco|inter|ink', flags=regex.POSIX)
assert change.sub('X', words) == 'plX XmX tX wX X cautX sentient'_____no_output_____
</code>
**i)** Extract all whole words for the given input strings. However, based on user input `ignore`, do not match words if they contain any character present in the `ignore` variable._____no_output_____
<code>
s1 = 'match after the last newline character'
s2 = 'and then you want to test'
ignore = 'aty'
assert regex.findall() == ['newline']
assert regex.findall() == []
ignore = 'esw'
assert regex.findall() == ['match']
assert regex.findall() == ['and', 'you', 'to']_____no_output_____
</code>
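One possible solution uses the `regex` module's V1-mode set operations to subtract the ignored characters from the character class:_____no_output_____
<code>
ignore = 'aty'
assert regex.findall(rf'\b[[a-z]--[{ignore}]]+\b', s1, flags=regex.V1) == ['newline']
assert regex.findall(rf'\b[[a-z]--[{ignore}]]+\b', s2, flags=regex.V1) == []
ignore = 'esw'
assert regex.findall(rf'\b[[a-z]--[{ignore}]]+\b', s1, flags=regex.V1) == ['match']
assert regex.findall(rf'\b[[a-z]--[{ignore}]]+\b', s2, flags=regex.V1) == ['and', 'you', 'to']_____no_output_____
</code>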
**j)** Retain only punctuation characters for the given strings (generated from codepoints). Use Unicode character set definition for punctuation for solving this exercise._____no_output_____
<code>
s1 = ''.join(chr(c) for c in range(0, 0x80))
s2 = ''.join(chr(c) for c in range(0x80, 0x100))
s3 = ''.join(chr(c) for c in range(0x2600, 0x27ec))
pat = regex.compile() ##### add your solution here
assert pat.sub('', s1) == '!"#%&\'()*,-./:;?@[\\]_{}'
assert pat.sub('', s2) == '¡§«¶·»¿'
assert pat.sub('', s3) == '❨❩❪❫❬❭❮❯❰❱❲❳❴❵⟅⟆⟦⟧⟨⟩⟪⟫'_____no_output_____
</code>
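One possible solution deletes every run of characters that are *not* Unicode punctuation (`\p{P}`):_____no_output_____
<code>
pat = regex.compile(r'\P{P}+')_____no_output_____
</code>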
**k)** For the given **markdown** file, replace all occurrences of the string `python` (irrespective of case) with the string `Python`. However, any match within code blocks that start with whole line ` ```python ` and end with whole line ` ``` ` shouldn't be replaced. Consider the input file to be small enough to fit memory requirements.
Refer to [github: exercises folder](https://github.com/learnbyexample/py_regular_expressions/tree/master/exercises) for files `sample.md` and `expected.md` required to solve this exercise._____no_output_____
<code>
ip_str = open('sample.md', 'r').read()
pat = regex.compile() ##### add your solution here
with open('sample_mod.md', 'w') as op_file:
##### add your solution here
305
assert open('sample_mod.md').read() == open('expected.md').read()_____no_output_____
</code>
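One possible solution (a sketch, assuming the code fences are exactly the whole lines ` ```python ` and ` ``` `, and that `sample.md` is present) uses the `regex` module's `(*SKIP)(*FAIL)` backtracking verbs to skip over code blocks:_____no_output_____
<code>
ip_str = open('sample.md', 'r').read()
pat = regex.compile(r'(?ms)^```python$.*?^```$(*SKIP)(*F)|python', flags=regex.I)
with open('sample_mod.md', 'w') as op_file:
    op_file.write(pat.sub('Python', ip_str))_____no_output_____
</code>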
**l)** For the given input strings, construct a word that is made up of last characters of all the words in the input. Use last character of last word as first character, last character of last but one word as second character and so on._____no_output_____
<code>
s1 = 'knack tic pi roar what'
s2 = '42;rod;t2t2;car'
pat = regex.compile() ##### add your solution here
##### add your solution here for s1
assert ___ == 'trick'
##### add your solution here for s2
assert ___ == 'r2d2'_____no_output_____
</code>
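One possible solution uses the `regex` module's reverse-search flag `(?r)` to collect the last character of each word from right to left:_____no_output_____
<code>
pat = regex.compile(r'(?r)\w\b')
assert ''.join(pat.findall(s1)) == 'trick'
assert ''.join(pat.findall(s2)) == 'r2d2'_____no_output_____
</code>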
**m)** Replicate `str.rpartition` functionality with regular expressions. Split into three parts based on last match of sequences of digits, which is `777` and `12` for the given input strings._____no_output_____
<code>
s1 = 'Sample123string42with777numbers'
s2 = '12apples'
##### add your solution here for s1
assert ___ == ['Sample123string42with', '777', 'numbers']
##### add your solution here for s2
assert ___ == ['', '12', 'apples']_____no_output_____
</code>
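One possible solution combines the reverse-search flag `(?r)` with `maxsplit=1`, so the split happens at the *last* run of digits:_____no_output_____
<code>
assert regex.split(r'(?r)(\d+)', s1, maxsplit=1) == ['Sample123string42with', '777', 'numbers']
assert regex.split(r'(?r)(\d+)', s2, maxsplit=1) == ['', '12', 'apples']_____no_output_____
</code>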
**n)** Read about fuzzy matching on https://pypi.org/project/regex/. For the given input strings, return `True` if they are exactly same as `cat` or there is exactly one character difference. Ignore case when comparing differences. For example, `Ca2` should give `True`. `act` will be `False` even though the characters are same because position should be maintained._____no_output_____
<code>
pat = regex.compile() ##### add your solution here
assert bool(pat.fullmatch('CaT')) == True
assert bool(pat.fullmatch('scat')) == False
assert bool(pat.fullmatch('ca.')) == True
assert bool(pat.fullmatch('ca#')) == True
assert bool(pat.fullmatch('c#t')) == True
assert bool(pat.fullmatch('at')) == False
assert bool(pat.fullmatch('act')) == False
assert bool(pat.fullmatch('2a1')) == False_____no_output_____
</code>
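One possible solution uses fuzzy matching, allowing at most one substitution; insertions and deletions remain forbidden, so character positions are maintained:_____no_output_____
<code>
pat = regex.compile(r'(?i)(?:cat){s<=1}')_____no_output_____
</code>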
| {
"repository": "ithakker/CISC367-Projects",
"path": "Day16/.ipynb_checkpoints/regex_exercises-checkpoint.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 84718,
"hexsha": "cb0672226577371593bc9940c41f75f81ab6d4e4",
"max_line_length": 400,
"avg_line_length": 27.7763934426,
"alphanum_fraction": 0.5341839987
} |
# Notebook from maximiliense/GLC19
Path: notebooks/Getting started - Baselines and submission.ipynb
This notebook presents code to compute some basic baselines.
In particular, it shows how to:
1. Use the provided validation set
2. Compute the top-30 metric
3. Save the predictions on the test in the right format for submission_____no_output_____
<code>
%pylab inline --no-import-all
import os
from pathlib import Path
import pandas as pd
# Change this path to adapt to where you downloaded the data
DATA_PATH = Path("data")
# Create the path to save submission files
SUBMISSION_PATH = Path("submissions")
os.makedirs(SUBMISSION_PATH, exist_ok=True)Populating the interactive namespace from numpy and matplotlib
</code>
We also load the official metric, top-30 error rate, for which we provide efficient implementations:_____no_output_____
<code>
from GLC.metrics import top_30_error_rate
help(top_30_error_rate)Help on function top_30_error_rate in module GLC.metrics:
top_30_error_rate(y_true, y_score)
Computes the top-30 error rate.
Parameters
----------
y_true: 1d array, [n_samples]
True labels.
y_score: 2d array, [n_samples, n_classes]
Scores for each label.
Returns
-------
float:
Top-30 error rate value.
Notes
-----
Complexity: :math:`O( n_\text{samples} \times n_\text{classes} )`.
from GLC.metrics import top_k_error_rate_from_sets
help(top_k_error_rate_from_sets)Help on function top_k_error_rate_from_sets in module GLC.metrics:
top_k_error_rate_from_sets(y_true, s_pred)
Computes the top-k error rate from predicted sets.
Parameters
----------
y_true: 1d array, [n_samples]
True labels.
s_pred: 2d array, [n_samples, k]
Previously computed top-k sets for each sample.
Returns
-------
float:
Error rate value.
</code>
For submissions, we will also need to predict the top-30 sets for which we also provide an efficient implementation:_____no_output_____
<code>
from GLC.metrics import predict_top_30_set
help(predict_top_30_set)Help on function predict_top_30_set in module GLC.metrics:
predict_top_30_set(y_score)
Predicts the top-30 sets from scores.
Parameters
----------
y_score: 2d array, [n_samples, n_classes]
Scores for each sample and label.
Returns
-------
2d array, [n_samples, 30]:
Predicted top-30 sets for each sample.
Notes
-----
Complexity: :math:`O( n_\text{samples} \times n_\text{classes} )`.
</code>
We also provide an utility function to generate submission files in the right format:_____no_output_____
<code>
from GLC.submission import generate_submission_file
help(generate_submission_file)Help on function generate_submission_file in module GLC.submission:
generate_submission_file(filename, observation_ids, s_pred)
Generate submission file for Kaggle
Parameters
----------
filename : string
Submission filename.
observation_ids : 1d array-like
Test observations ids
s_pred : list of 1d array-like
Set predictions for test observations.
</code>
# Observation data loading_____no_output_____We first need to load the observation data:_____no_output_____
<code>
df_obs_fr = pd.read_csv(DATA_PATH / "observations" / "observations_fr_train.csv", sep=";", index_col="observation_id")
df_obs_us = pd.read_csv(DATA_PATH / "observations" / "observations_us_train.csv", sep=";", index_col="observation_id")
df_obs = pd.concat((df_obs_fr, df_obs_us))_____no_output_____
</code>
Then, we retrieve the train/val split provided:_____no_output_____
<code>
obs_id_train = df_obs.index[df_obs["subset"] == "train"].values
obs_id_val = df_obs.index[df_obs["subset"] == "val"].values
y_train = df_obs.loc[obs_id_train]["species_id"].values
y_val = df_obs.loc[obs_id_val]["species_id"].values
n_val = len(obs_id_val)
print("Validation set size: {} ({:.1%} of train observations)".format(n_val, n_val / len(df_obs)))Validation set size: 40080 (2.5% of train observations)
</code>
We also load the observation data for the test set:_____no_output_____
<code>
df_obs_fr_test = pd.read_csv(DATA_PATH / "observations" / "observations_fr_test.csv", sep=";", index_col="observation_id")
df_obs_us_test = pd.read_csv(DATA_PATH / "observations" / "observations_us_test.csv", sep=";", index_col="observation_id")
df_obs_test = pd.concat((df_obs_fr_test, df_obs_us_test))
obs_id_test = df_obs_test.index.values
print("Number of observations for testing: {}".format(len(df_obs_test)))
df_obs_test.head()Number of observations for testing: 36421
</code>
# Sample submission file
In this section, we will demonstrate how to generate the sample submission file provided.
To do so, we will use the function `generate_submission_file` from `GLC.submission`._____no_output_____The sample submission consists in always predicting the first 30 species for all the test observations:_____no_output_____
<code>
first_30_species = np.arange(30)
s_pred = np.tile(first_30_species[None], (len(df_obs_test), 1))_____no_output_____
</code>
We can then generate the associated submission file using:_____no_output_____
<code>
generate_submission_file(SUBMISSION_PATH / "sample_submission.csv", df_obs_test.index, s_pred)_____no_output_____
</code>
# Constant baseline: 30 most observed species
The first baseline consists in predicting the 30 most observed species on the train set, which corresponds exactly to the "Top-30 most present species":_____no_output_____
<code>
species_distribution = df_obs.loc[obs_id_train]["species_id"].value_counts(normalize=True)
top_30_most_observed = species_distribution.index.values[:30]_____no_output_____
</code>
As expected, it does not perform very well on the validation set:_____no_output_____
<code>
s_pred = np.tile(top_30_most_observed[None], (n_val, 1))
score = top_k_error_rate_from_sets(y_val, s_pred)
print("Top-30 error rate: {:.1%}".format(score))Top-30 error rate: 93.5%
</code>
We will however generate the associated submission file on the test using:_____no_output_____
<code>
# Compute baseline on the test set
n_test = len(df_obs_test)
s_pred = np.tile(top_30_most_observed[None], (n_test, 1))
# Generate the submission file
generate_submission_file(SUBMISSION_PATH / "constant_top_30_most_present_species_baseline.csv", df_obs_test.index, s_pred)_____no_output_____
</code>
# Random forest on environmental vectors
A classical approach in ecology is to train Random Forests on environmental vectors.
We show here how to do so using [scikit-learn](https://scikit-learn.org/).
We start by loading the environmental vectors:_____no_output_____
<code>
df_env = pd.read_csv(DATA_PATH / "pre-extracted" / "environmental_vectors.csv", sep=";", index_col="observation_id")
X_train = df_env.loc[obs_id_train].values
X_val = df_env.loc[obs_id_val].values
X_test = df_env.loc[obs_id_test].values_____no_output_____
</code>
Then, we need to properly handle the missing values.
For instance, using `SimpleImputer`:_____no_output_____
<code>
from sklearn.impute import SimpleImputer
imp = SimpleImputer(
missing_values=np.nan,
strategy="constant",
fill_value=np.finfo(np.float32).min,
)
imp.fit(X_train)
X_train = imp.transform(X_train)
X_val = imp.transform(X_val)
X_test = imp.transform(X_test)_____no_output_____
</code>
We can now start training our Random Forest (as there are a lot of observations, over 1.8M, this can take a while):_____no_output_____
<code>
from sklearn.ensemble import RandomForestClassifier
est = RandomForestClassifier(n_estimators=16, max_depth=10, n_jobs=-1)
est.fit(X_train, y_train)_____no_output_____
</code>
As there are a lot of classes (over 17K), we need to be cautious when predicting the scores of the model.
This can easily take more than 5 GB of memory on the validation set.
For this reason, we will predict the top-30 sets in batches using the following generic function:_____no_output_____
<code>
def batch_predict(predict_func, X, batch_size=1024):
res = predict_func(X[:1])
n_samples, n_outputs, dtype = X.shape[0], res.shape[1], res.dtype
preds = np.empty((n_samples, n_outputs), dtype=dtype)
for i in range(0, len(X), batch_size):
X_batch = X[i:i+batch_size]
preds[i:i+batch_size] = predict_func(X_batch)
return preds_____no_output_____
</code>
We can now compute the top-30 error rate on the validation set:_____no_output_____
<code>
def predict_func(X):
y_score = est.predict_proba(X)
s_pred = predict_top_30_set(y_score)
return s_pred
s_val = batch_predict(predict_func, X_val, batch_size=1024)
score_val = top_k_error_rate_from_sets(y_val, s_val)
print("Top-30 error rate: {:.1%}".format(score_val))Top-30 error rate: 80.4%
</code>
We now predict the top-30 sets on the test data and save them in a submission file:_____no_output_____
<code>
# Compute baseline on the test set
s_pred = batch_predict(predict_func, X_test, batch_size=1024)
# Generate the submission file
generate_submission_file(SUBMISSION_PATH / "random_forest_on_environmental_vectors.csv", df_obs_test.index, s_pred)_____no_output_____
</code>
| {
"repository": "maximiliense/GLC19",
"path": "notebooks/Getting started - Baselines and submission.ipynb",
"matched_keywords": [
"ecology"
],
"stars": 6,
"size": 21737,
"hexsha": "cb06c629ed98fc598eeb31e1e07a6be1a6b1455d",
"max_line_length": 159,
"avg_line_length": 26.4119076549,
"alphanum_fraction": 0.5337443069
} |
# Notebook from eugenesiow/practical-ml
Path: notebooks/OCR_from_Images_with_Transformers.ipynb
# OCR (Optical Character Recognition) from Images with Transformers
---
[Github](https://github.com/eugenesiow/practical-ml/) | More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml)
---_____no_output_____Notebook to recognise text automatically from an input image with either handwritten or printed text.
[Optical Character Recognition](https://paperswithcode.com/task/optical-character-recognition) is the task of converting images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo, license plates in cars...) or from subtitle text superimposed on an image (for example: from a television broadcast).
The transformer models used are from Microsoft's TrOCR. The TrOCR models are encoder-decoder models, consisting of an image Transformer as encoder, and a text Transformer as decoder. We utilise the versions hosted on [huggingface.co](https://huggingface.co/models?search=microsoft/trocr) and use the awesome transformers library, for longevity and simplicity.
The notebook is structured as follows:
* Setting up the Environment
* Using the Model (Running Inference)_____no_output_____# Setting up the Environment_____no_output_____#### Dependencies and Runtime
If you're running this notebook in Google Colab, most of the dependencies are already installed and we don't need the GPU for this particular example.
If you decide to run this on many (thousands of) images and want the inference to go faster though, you can select `Runtime` > `Change Runtime Type` from the menubar. Ensure that `GPU` is selected as the `Hardware accelerator`._____no_output_____We need to install huggingface `transformers` for this example to run, so execute the command below to set up the dependencies. We use the version compiled directly from the latest source (at the time of writing this is the only way to access the transformers TrOCR model code)._____no_output_____
<code>
!pip install -q git+https://github.com/huggingface/transformers.git Installing build dependencies ... [?25l[?25hdone
Getting requirements to build wheel ... [?25l[?25hdone
Preparing wheel metadata ... [?25l[?25hdone
[K |████████████████████████████████| 3.3 MB 5.1 MB/s
[K |████████████████████████████████| 596 kB 58.0 MB/s
[K |████████████████████████████████| 895 kB 35.8 MB/s
[K |████████████████████████████████| 56 kB 4.4 MB/s
[?25h Building wheel for transformers (PEP 517) ... [?25l[?25hdone
</code>
# Using the Model (Running Inference)_____no_output_____Let's define a function for us to get images from the web. We execute this function to download an image with a line of handwritten text and display it._____no_output_____
<code>
import requests
from IPython.display import display
from PIL import Image
def show_image(url):
img = Image.open(requests.get(url, stream=True).raw).convert("RGB")
display(img)
return img
handwriting1 = show_image('https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg')_____no_output_____
</code>
Now we want to load the model to recognise handwritten text.
Specifically we are running the following steps:
* Load the processor, `TrOCRProcessor`, which processes our input image and converts it into a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. The processor also adds absolute position embeddings and this sequence is fed to the layers of the Transformer encoder.
* Load the model, `VisionEncoderDecoderModel`, which consists of the image encoder and the text decoder.
* Define `ocr_image` function - We define the function for inferencing which takes our `src_img`, the input image we have downloaded. It will then run both the processor and the model inference and produce the output OCR text that has been recognised from the image._____no_output_____
<code>
import transformers
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')
def ocr_image(src_img):
pixel_values = processor(images=src_img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]_____no_output_____
</code>
We now run our `ocr_image` function on the line of handwritten text in the image we have downloaded previously (and stored in `handwriting1`)._____no_output_____
<code>
ocr_image(handwriting1)_____no_output_____
</code>
Lets try on another image with handwritten text._____no_output_____
<code>
ocr_image(show_image('https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSoolxi9yWGAT5SLZShv8vVd0bz47UWRzQC19fDTeE8GmGv_Rn-PCF1pP1rrUx8kOjA4gg&usqp=CAU'))_____no_output_____
</code>
The handwritten model is not suited for printed text, so we load the TrOCR variant trained on printed text and define an analogous inference function:_____no_output_____
<code>
import transformers
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
print_processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')
print_model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')
def ocr_print_image(src_img):
pixel_values = print_processor(images=src_img, return_tensors="pt").pixel_values
generated_ids = print_model.generate(pixel_values)
return print_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]_____no_output_____
</code>
We download an image with noisy printed text, a scanned receipt._____no_output_____
<code>
receipt = show_image('https://github.com/zzzDavid/ICDAR-2019-SROIE/raw/master/data/img/000.jpg')_____no_output_____
</code>
As the model processes a line of text, we crop the image to include one of the lines of text in the receipt and send it to our model._____no_output_____
<code>
receipt_crop = receipt.crop((0, 80, receipt.size[0], 110))
display(receipt_crop)
ocr_print_image(receipt_crop)_____no_output_____
</code>
More Notebooks @ [eugenesiow/practical-ml](https://github.com/eugenesiow/practical-ml) and do star or drop us some feedback on how to improve the notebooks on the [Github repo](https://github.com/eugenesiow/practical-ml/)._____no_output_____
| {
"repository": "eugenesiow/practical-ml",
"path": "notebooks/OCR_from_Images_with_Transformers.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 39,
"size": 611637,
"hexsha": "cb07417d88ab53e43dda24f67993a891e3b9d4b2",
"max_line_length": 366638,
"avg_line_length": 204.9721849866,
"alphanum_fraction": 0.8838003587
} |
# Notebook from drammock/mne-tools.github.io
Path: 0.16/_downloads/plot_brainstorm_auditory.ipynb
<code>
%matplotlib inline_____no_output_____
</code>
# Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1]_ and the associated
`brainstorm site <http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>`_.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7 and 1.7 seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
`FieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>`_.
References
----------
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
_____no_output_____
<code>
# Authors: Mainak Jas <[email protected]>
# Eric Larson <[email protected]>
# Jaakko Leppakangas <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)_____no_output_____
</code>
To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
``use_precomputed = False`` running time of this script can be several
minutes even on a fast computer.
_____no_output_____
<code>
use_precomputed = True_____no_output_____
</code>
The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:`mne.io.Raw`.
_____no_output_____
<code>
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')_____no_output_____
</code>
In the memory saving mode we use ``preload=False`` and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
_____no_output_____
<code>
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)_____no_output_____
</code>
Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
- 1 stim channel for marking presentation times for the stimuli
- 1 audio channel for the sent signal
- 1 response channel for recording the button presses
- 1 ECG bipolar
- 2 EOG bipolar (vertical and horizontal)
- 12 head tracking channels
- 20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
_____no_output_____
<code>
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
ecg=True)_____no_output_____
</code>
For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
_____no_output_____
<code>
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions_____no_output_____
</code>
Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
_____no_output_____
<code>
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory_____no_output_____
</code>
Visually inspect the effects of projections. Click on the 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
_____no_output_____
<code>
raw.plot(block=True)_____no_output_____
</code>
A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
_____no_output_____
<code>
if not use_precomputed:
meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
raw.plot_psd(tmax=np.inf, picks=meg_picks)
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks=meg_picks)_____no_output_____
</code>
We also lowpass filter the data at 100 Hz to remove the hf components.
_____no_output_____
<code>
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')_____no_output_____
</code>
Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
_____no_output_____
<code>
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')_____no_output_____
</code>
The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
_____no_output_____
<code>
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs_____no_output_____
</code>
We mark a set of bad channels that seem noisier than others. This can also
be done interactively with ``raw.plot`` by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
_____no_output_____
<code>
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']_____no_output_____
</code>
The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword ``reject_by_annotation=False``.
_____no_output_____
<code>
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)_____no_output_____
</code>
We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
_____no_output_____
<code>
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks_____no_output_____
</code>
The averages for each conditions are computed.
_____no_output_____
<code>
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant_____no_output_____
</code>
A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high frequency information). Normally this would be done
to raw data (with :func:`mne.io.Raw.filter`), but to reduce memory
consumption of this tutorial, we do it at evoked stage. (At the raw stage,
you could alternatively notch filter with :func:`mne.io.Raw.notch_filter`.)
_____no_output_____
<code>
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')_____no_output_____
</code>
Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
_____no_output_____
<code>
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')_____no_output_____
</code>
Show activations as topography figures.
_____no_output_____
<code>
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')_____no_output_____
</code>
We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
_____no_output_____
<code>
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')_____no_output_____
</code>
Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
_____no_output_____
<code>
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm_____no_output_____
</code>
The transformation is read from a file. For more information about coregistering
the data, see `ch_interactive_analysis` or
:func:`mne.gui.coregistration`.
_____no_output_____
<code>
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)_____no_output_____
</code>
To save time and memory, the forward solution is read from a file. Set
``use_precomputed=False`` in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: `CHDBBCEJ`, :func:`mne.setup_source_space`,
`create_bem_model`, :func:`mne.bem.make_watershed_bem`.
_____no_output_____
<code>
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd_____no_output_____
</code>
The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
``time_viewer=True``.
Standard condition.
_____no_output_____
<code>
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain_____no_output_____
</code>
Deviant condition.
_____no_output_____
<code>
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain_____no_output_____
</code>
Difference.
_____no_output_____
<code>
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')_____no_output_____
</code>
| {
"repository": "drammock/mne-tools.github.io",
"path": "0.16/_downloads/plot_brainstorm_auditory.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": null,
"size": 22759,
"hexsha": "cb086e9c24baf3cc8741ef2893314ba8365b722c",
"max_line_length": 993,
"avg_line_length": 45.1567460317,
"alphanum_fraction": 0.5972142889
} |
# Notebook from unibw-patch/fuzzingbook
Path: notebooks/WhenToStopFuzzing.ipynb
# When To Stop Fuzzing
In the past chapters, we have discussed several fuzzing techniques. Knowing _what_ to do is important, but it is also important to know when to _stop_ doing things. In this chapter, we will learn when to _stop fuzzing_ – and use a prominent example for this purpose: The *Enigma* machine that was used in the second world war by the navy of Nazi Germany to encrypt communications, and how Alan Turing and I.J. Good used _fuzzing techniques_ to crack ciphers for the Naval Enigma machine._____no_output_____Turing did not only develop the foundations of computer science, the Turing machine. Together with his assistant I.J. Good, he also invented estimators of the probability of an event occuring that has never previously occured. We show how the Good-Turing estimator can be used to quantify the *residual risk* of a fuzzing campaign that finds no vulnerabilities. Meaning, we show how it estimates the probability of discovering a vulnerability when no vulnerability has been observed before throughout the fuzzing campaign.
We discuss means to speed up [coverage-based fuzzers](Coverage.ipynb) and introduce a range of estimation and extrapolation methodologies to assess and extrapolate fuzzing progress and residual risk.
**Prerequisites**
* _The chapter on [Coverage](Coverage.ipynb) discusses how to use coverage information for an executed test input to guide a coverage-based mutational greybox fuzzer_.
* Some knowledge of statistics is helpful._____no_output_____
<code>
import fuzzingbook_utils_____no_output_____import Fuzzer
import Coverage_____no_output_____
</code>
## The Enigma Machine
It is autumn in the year 1938. Turing has just finished his PhD at Princeton University demonstrating the limits of computation and laying the foundation for the theory of computer science. Nazi Germany is rearming. It has reoccupied the Rhineland and annexed Austria in violation of the Treaty of Versailles. It has just annexed the Sudetenland in Czechoslovakia and begins preparations to take over the rest of Czechoslovakia despite an agreement just signed in Munich.
Meanwhile, British intelligence is building up its capability to break encrypted messages used by the Germans to communicate military and naval information. The Germans are using [Enigma machines](https://en.wikipedia.org/wiki/Enigma_machine) for encryption. Enigma machines use a series of electro-mechanical rotor cipher machines to protect military communication. Here is a picture of an Enigma machine:_____no_output__________no_output_____By the time Turing joined the British Bletchley Park, Polish intelligence had reverse engineered the logical structure of the Enigma machine and built a decryption machine called *Bomba* (perhaps because of the ticking noise they made). A bomba simulates six Enigma machines simultaneously and tries different decryption keys until the code is broken. The Polish bomba might have been the very _first fuzzer_.
Turing took it upon himself to crack ciphers of the Naval Enigma machine, which were notoriously hard to crack. The Naval Enigma used, as part of its encryption key, a three letter sequence called *trigram*. These trigrams were selected from a book, called *Kenngruppenbuch*, which contained all trigrams in a random order._____no_output_____### The Kenngruppenbuch
Let's start with the Kenngruppenbuch (K-Book).
We are going to use the following Python functions.
* `shuffle(elements)` - shuffle *elements* and put items in random order.
* `choice(elements, p=weights)` - choose an item from *elements* at random. An element with twice the *weight* is twice as likely to be chosen.
* `log(a)` - returns the natural logarithm of a.
* `a ** b` - a to the power of b (a.k.a. [power operator](https://docs.python.org/3/reference/expressions.html#the-power-operator))_____no_output_____
<code>
import string_____no_output_____import numpy
from numpy.random import choice
from numpy.random import shuffle
from numpy import log_____no_output_____
</code>
We start with creating the set of trigrams:_____no_output_____
<code>
letters = list(string.ascii_letters[26:]) # upper-case characters
trigrams = [str(a + b + c) for a in letters for b in letters for c in letters]
shuffle(trigrams)_____no_output_____trigrams[:10]_____no_output_____
</code>
These now go into the Kenngruppenbuch. However, it was observed that some trigrams were more likely chosen than others. For instance, trigrams at the top-left corner of any page, or trigrams on the first or last few pages were more likely than one somewhere in the middle of the book or page. We reflect this difference in distribution by assigning a _probability_ to each trigram, using Benford's law as introduced in [Probabilistic Fuzzing](ProbabilisticGrammarFuzzer.ipynb)._____no_output_____Recall, that Benford's law assigns the $i$-th digit the probability $\log_{10}\left(1 + \frac{1}{i}\right)$ where the base 10 is chosen because there are 10 digits $i\in [0,9]$. However, Benford's law works for an arbitrary number of "digits". Hence, we assign the $i$-th trigram the probability $\log_b\left(1 + \frac{1}{i}\right)$ where the base $b$ is the number of all possible trigrams $b=26^3$. _____no_output_____
<code>
k_book = {} # Kenngruppenbuch
for i in range(1, len(trigrams) + 1):
trigram = trigrams[i - 1]
# choose weights according to Benford's law
k_book[trigram] = log(1 + 1 / i) / log(26**3 + 1)_____no_output_____
</code>
Here's a random trigram from the Kenngruppenbuch:_____no_output_____
<code>
random_trigram = choice(list(k_book.keys()), p=list(k_book.values()))
random_trigram_____no_output_____
</code>
And this is its probability:_____no_output_____
<code>
k_book[random_trigram]_____no_output_____
</code>
### Fuzzing the Enigma
In the following, we introduce an extremely simplified implementation of the Naval Enigma based on the trigrams from the K-book. Of course, the encryption mechanism of the actual Enigma machine is much more sophisticated and worthy of a much more detailed investigation. We encourage the interested reader to follow up with further reading listed in the Background section.
The personnel at Bletchley Park can only check whether an encoded message is encoded with a (guessed) trigram.
Our implementation `naval_enigma()` takes a `message` and a `key` (i.e., the guessed trigram). If the given key matches the (previously computed) key for the message, `naval_enigma()` returns `True`._____no_output_____
<code>
from Fuzzer import RandomFuzzer
from Fuzzer import Runner_____no_output_____class EnigmaMachine(Runner):
def __init__(self, k_book):
self.k_book = k_book
self.reset()
def reset(self):
"""Resets the key register"""
self.msg2key = {}
def internal_msg2key(self, message):
"""Internal helper method.
Returns the trigram for an encoded message."""
if not message in self.msg2key:
# Simulating how an officer chooses a key from the Kenngruppenbuch to encode the message.
self.msg2key[message] = choice(list(self.k_book.keys()), p=list(self.k_book.values()))
trigram = self.msg2key[message]
return trigram
def naval_enigma(self, message, key):
"""Returns true if 'message' is encoded with 'key'"""
if key == self.internal_msg2key(message):
return True
else:
return False_____no_output_____
</code>
To "fuzz" the `naval_enigma()`, our job will be to come up with a key that matches a given (encrypted) message. Since the keys only have three characters, we have a good chance to achieve this in much less than a seconds. (Of course, longer keys will be much harder to find via random fuzzing.)_____no_output_____
<code>
class EnigmaMachine(EnigmaMachine):
def run(self, tri):
"""PASS if cur_msg is encoded with trigram tri"""
if self.naval_enigma(self.cur_msg, tri):
outcome = self.PASS
else:
outcome = self.FAIL
return (tri, outcome)_____no_output_____
</code>
Now we can use the `EnigmaMachine` to check whether a certain message is encoded with a certain trigram._____no_output_____
<code>
enigma = EnigmaMachine(k_book)
enigma.cur_msg = "BrEaK mE. L0Lzz"
enigma.run("AAA")_____no_output_____
</code>
The simplest way to crack an encoded message is by brute forcing. Suppose, at Bletchley park they would try random trigrams until a message is broken._____no_output_____
<code>
class BletchleyPark(object):
def __init__(self, enigma):
self.enigma = enigma
self.enigma.reset()
self.enigma_fuzzer = RandomFuzzer(
min_length=3,
max_length=3,
char_start=65,
char_range=26)
def break_message(self, message):
"""Returning the trigram for an encoded message"""
self.enigma.cur_msg = message
while True:
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
if outcome == self.enigma.PASS:
break
return trigram_____no_output_____
</code>
How long does it take Bletchley park to find the key using this brute forcing approach?_____no_output_____
<code>
from Timer import Timer_____no_output_____enigma = EnigmaMachine(k_book)
bletchley = BletchleyPark(enigma)
with Timer() as t:
trigram = bletchley.break_message("BrEaK mE. L0Lzz")_____no_output_____
</code>
Here's the key for the current message:_____no_output_____
<code>
trigram_____no_output_____
</code>
And no, this did not take long:_____no_output_____
<code>
'%f seconds' % t.elapsed_time()_____no_output_____'Bletchley cracks about %d messages per second' % (1/t.elapsed_time())_____no_output_____
</code>
### Turing's Observations
Okay, let's crack a few messages and count the number of times each trigram is observed._____no_output_____
<code>
from collections import defaultdict_____no_output_____n = 100 # messages to crack_____no_output_____observed = defaultdict(int)
for msg in range(0, n):
trigram = bletchley.break_message(msg)
observed[trigram] += 1
# list of trigrams that have been observed
counts = [k for k, v in observed.items() if int(v) > 0]
t_trigrams = len(k_book)
o_trigrams = len(counts)_____no_output_____"After cracking %d messages, we observed %d out of %d trigrams." % (
n, o_trigrams, t_trigrams)_____no_output_____singletons = len([k for k, v in observed.items() if int(v) == 1])_____no_output_____"From the %d observed trigrams, %d were observed only once." % (
o_trigrams, singletons)_____no_output_____
</code>
Given a sample of previously used entries, Turing wanted to _estimate the likelihood_ that the current unknown entry was one that had been previously used, and further, to estimate the probability distribution over the previously used entries. This led to the development of estimators of the missing mass and estimates of the true probability mass of the set of items occurring in the sample. Good worked with Turing during the war and, with Turing’s permission, published the analysis of the bias of these estimators in 1953.
$$
p_0 = \frac{f_1}{n}
$$
where $f_1$ is the number of singletons and $n$ is the number of cracked messages._____no_output_____Let's explore this idea for a bit. We'll extend `BletchleyPark` to crack `n` messages and record the number of trigrams observed as the number of cracked messages increases._____no_output_____
<code>
class BletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returning the trigram for an encoded message"""
# For the following experiment, we want to make it practical
# to break a large number of messages. So, we remove the
# loop and just return the trigram for a message.
#
# enigma.cur_msg = message
# while True:
# (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
# if outcome == self.enigma.PASS:
# break
trigram = enigma.internal_msg2key(message)
return trigram
def break_n_messages(self, n):
"""Returns how often each trigram has been observed,
and #trigrams discovered for each message."""
observed = defaultdict(int)
timeseries = [0] * n
# Crack n messages and record #trigrams observed as #messages increases
cur_observed = 0
for cur_msg in range(0, n):
trigram = self.break_message(cur_msg)
observed[trigram] += 1
if (observed[trigram] == 1):
cur_observed += 1
timeseries[cur_msg] = cur_observed
return (observed, timeseries)_____no_output_____
</code>
Let's crack 2000 messages and compute the GT-estimate._____no_output_____
<code>
n = 2000 # messages to crack_____no_output_____bletchley = BletchleyPark(enigma)
(observed, timeseries) = bletchley.break_n_messages(n)_____no_output_____
</code>
Let us determine the Good-Turing estimate of the probability that the next trigram has not been observed before:_____no_output_____
<code>
singletons = len([k for k, v in observed.items() if int(v) == 1])
gt = singletons / n
gt_____no_output_____
</code>
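For reuse in later experiments, the estimator can be wrapped in a small helper function (a sketch, not part of the original notebook; `gt_estimate` is a hypothetical name):_____no_output_____
<code>
def gt_estimate(observed, n):
    """Good-Turing estimate of the discovery probability:
    the number of species observed exactly once, divided by the sample size n."""
    f1 = len([k for k, v in observed.items() if int(v) == 1])  # singletons
    return f1 / n

assert gt_estimate(observed, n) == gt_____no_output_____
</code>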
We can verify the Good-Turing estimate empirically and compute the empirically determined probability that the next trigram has not been observed before. To do this, we repeat the following experiment repeats=1000 times, reporting the average: If the next message is a new trigram, return 1, otherwise return 0. Note that here, we do not record the newly discovered trigrams as observed._____no_output_____
<code>
repeats = 1000 # experiment repetitions _____no_output_____newly_discovered = 0
for cur_msg in range(n, n + repeats):
trigram = bletchley.break_message(cur_msg)
if(observed[trigram] == 0):
newly_discovered += 1
newly_discovered / repeats_____no_output_____
</code>
Looks pretty accurate, huh? The difference between estimates is reasonably small, probably below 0.03. However, the Good-Turing estimate did not nearly require as much computational resources as the empirical estimate. Unlike the empirical estimate, the Good-Turing estimate can be computed during the campaign. Unlike the empirical estimate, the Good-Turing estimate requires no additional, redundant repetitions._____no_output_____In fact, the Good-Turing (GT) estimator often performs close to the best estimator for arbitrary distributions ([Try it here!](#Kenngruppenbuch)). Of course, the concept of *discovery* is not limited to trigrams. The GT estimator is also used in the study of natural languages to estimate the likelihood that we haven't ever heard or read the word we next encounter. The GT estimator is used in ecology to estimate the likelihood of discovering a new, unseen species in our quest to catalog all _species_ on earth. Later, we will see how it can be used to estimate the probability to discover a vulnerability when none has been observed, yet (i.e., residual risk)._____no_output_____Alan Turing was interested in the _complement_ $(1-GT)$ which gives the proportion of _all_ messages for which the Brits have already observed the trigram needed for decryption. For this reason, the complement is also called sample coverage. The *sample coverage* quantifies how much we know about decryption of all messages given the few messages we have already decrypted. _____no_output_____The probability that the next message can be decrypted with a previously discovered trigram is:_____no_output_____
<code>
1 - gt_____no_output_____
</code>
The *inverse* of the GT-estimate (1/GT) is a _maximum likelihood estimate_ of the expected number of messages that we can decrypt with previously observed trigrams before having to find a new trigram to decrypt the message. In our setting, the number of messages for which we can expect to reuse previous trigrams before having to discover a new trigram is:_____no_output_____
<code>
1 / gt_____no_output_____
</code>
But why is GT so accurate? Intuitively, despite a large sampling effort (i.e., cracking $n$ messages), there are still $f_1$ trigrams that have been observed only once. We could say that such "singletons" are very rare trigrams. Hence, the probability that the next message is encoded with such a rare but observed trigram gives a good upper bound on the probability that the next message is encoded with an evidently much rarer, unobserved trigram. Since Turing's observation 80 years ago, an entire statistical theory has been developed around the hypothesis that rare, observed "species" are good predictors of unobserved species.
Let's have a look at the distribution of rare trigrams._____no_output_____
<code>
%matplotlib inline_____no_output_____import matplotlib.pyplot as plt_____no_output_____frequencies = [v for k, v in observed.items() if int(v) > 0]
frequencies.sort(reverse=True)
# Uncomment to see how often each discovered trigram has been observed
# print(frequencies)
# frequency of rare trigrams
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5)
plt.xticks(range(1, 21))
plt.xlabel('# of occurrences (e.g., 1 represents singleton trigrams)')
plt.ylabel('Frequency of occurrences')
plt.title('Figure 1. Frequency of Rare Trigrams')
# trigram discovery over time
plt.subplot(1, 2, 2)
plt.plot(timeseries)
plt.xlabel('# of messages cracked')
plt.ylabel('# of trigrams discovered')
plt.title('Figure 2. Trigram Discovery Over Time');_____no_output_____# Statistics for most and least often observed trigrams
singletons = len([v for k, v in observed.items() if int(v) == 1])
total = len(frequencies)
print("%3d of %3d trigrams (%.3f%%) have been observed 1 time (i.e., are singleton trigrams)."
% (singletons, total, singletons * 100 / total))
print("%3d of %3d trigrams ( %.3f%%) have been observed %d times."
% (1, total, 1 / total, frequencies[0]))_____no_output_____
</code>
The *majority of trigrams* have been observed only once, as we can see in Figure 1 (left). In other words, the majority of observed trigrams are "rare" singletons. In Figure 2 (right), we can see that discovery is in full swing. The trajectory seems almost linear. However, since there is a finite number of trigrams (26^3 = 17,576), trigram discovery will slow down and eventually approach an asymptote (the total number of trigrams).
### Boosting the Performance of BletchleyPark
Some trigrams have been observed very often. We call these "abundant" trigrams._____no_output_____
<code>
print("Trigram : Frequency")
for trigram in sorted(observed, key=observed.get, reverse=True):
if observed[trigram] > 10:
print(" %s : %d" % (trigram, observed[trigram]))_____no_output_____
</code>
We'll speed up the code breaking by _trying the abundant trigrams first_.
First, we'll find out how many messages can be cracked by the existing brute forcing strategy at Bletchley Park, given a maximum number of attempts. We'll also track the number of trigrams observed over time (`timeseries`)._____no_output_____
<code>
class BletchleyPark(BletchleyPark):
def __init__(self, enigma):
super().__init__(enigma)
self.cur_attempts = 0
self.cur_observed = 0
self.observed = defaultdict(int)
self.timeseries = [None] * max_attempts * 2
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
while True:
self.cur_attempts += 1 # NEW
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
self.timeseries[self.cur_attempts] = self.cur_observed # NEW
if outcome == self.enigma.PASS:
break
return trigram
def break_max_attempts(self, max_attempts):
"""Returns #messages successfully cracked after a given #attempts."""
cur_msg = 0
n_messages = 0
while True:
trigram = self.break_message(cur_msg)
# stop when reaching max_attempts
if self.cur_attempts >= max_attempts:
break
# update observed trigrams
n_messages += 1
self.observed[trigram] += 1
if (self.observed[trigram] == 1):
self.cur_observed += 1
self.timeseries[self.cur_attempts] = self.cur_observed
cur_msg += 1
return n_messages_____no_output_____
</code>
`original` is the number of messages cracked by the bruteforcing strategy, given 100k attempts. Can we beat this?_____no_output_____
<code>
max_attempts = 100000_____no_output_____bletchley = BletchleyPark(enigma)
original = bletchley.break_max_attempts(max_attempts)
original_____no_output_____
</code>
Now, we'll create a boosting strategy by trying trigrams first that we have previously observed most often._____no_output_____
<code>
class BoostedBletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
# boost cracking by trying observed trigrams first
for trigram in sorted(self.prior, key=self.prior.get, reverse=True):
self.cur_attempts += 1
(_, outcome) = self.enigma.run(trigram)
self.timeseries[self.cur_attempts] = self.cur_observed
if outcome == self.enigma.PASS:
return trigram
# else fall back to normal cracking
return super().break_message(message)_____no_output_____
</code>
`boosted` is the number of messages cracked by the boosted strategy._____no_output_____
<code>
boostedBletchley = BoostedBletchleyPark(enigma)
boostedBletchley.prior = observed
boosted = boostedBletchley.break_max_attempts(max_attempts)
boosted_____no_output_____
</code>
We see that the boosted technique cracks substantially more messages. It is worthwhile to record how often each trigram is used as a key, and to try the trigrams in the order of their occurrence.
***Try it***. *For practical reasons, we use a large number of previous observations as prior (`boostedBletchley.prior = observed`). You can try to change the code such that the strategy uses the trigram frequencies (`self.observed`) observed **during** the campaign itself to boost the campaign. You will need to increase `max_attempts` and wait for a long while.*_____no_output_____Let's compare the number of trigrams discovered over time._____no_output_____
<code>
# print plots
line_old, = plt.plot(bletchley.timeseries, label="Bruteforce Strategy")
line_new, = plt.plot(boostedBletchley.timeseries, label="Boosted Strategy")
plt.legend(handles=[line_old, line_new])
plt.xlabel('# of cracking attempts')
plt.ylabel('# of trigrams discovered')
plt.title('Trigram Discovery Over Time');_____no_output_____
</code>
We see that the boosted fuzzer is consistently superior to the random fuzzer._____no_output_____## Estimating the Probability of Path Discovery
<!-- ## Residual Risk: Probability of Failure after an Unsuccessful Fuzzing Campaign -->
<!-- Residual risk is not formally defined in this section, so I made the title a bit more generic -- AZ -->
So, what does Turing's observation for the Naval Enigma have to do with fuzzing _arbitrary_ programs? Turing's assistant I.J. Good extended and published Turing's work on the estimation procedures in Biometrika, a journal for theoretical biostatistics that still exists today. Good did not talk about trigrams. Instead, he calls them "species". Hence, the Good-Turing (GT) estimator estimates how likely it is to discover a new species, given an existing sample of individuals (each of which belongs to exactly one species).
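For intuition, the GT estimate of the probability of discovering a new species is simply the proportion of *singleton* species in the sample. Here is a minimal illustrative sketch with hypothetical counts:_____no_output_____
<code>
# A minimal sketch of the Good-Turing discovery probability (hypothetical sample).
# Species observed exactly once in the sample are singletons.
sample = ["ABC", "XYZ", "ABC", "QRS", "ABC", "XYZ", "JKL", "ABC", "XYZ", "MNO"]
counts = {}
for species in sample:
    counts[species] = counts.get(species, 0) + 1
f1 = sum(1 for c in counts.values() if c == 1)  # number of singleton species
n = len(sample)
print("GT estimate of the discovery probability: %.2f" % (f1 / n))  # 3 / 10 = 0.30_____no_output_____
</code>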
Now, we can associate program inputs to species, as well. For instance, we could define the path that is exercised by an input as that input's species. This would allow us to _estimate the probability that fuzzing discovers a new path._ Later, we will see how this discovery probability estimate also estimates the likelihood of discovering a vulnerability when we have not seen one, yet (residual risk)._____no_output_____Let's do this. We identify the species for an input by computing a hash-id over the set of statements exercised by that input. In the [Coverage](Coverage.ipynb) chapter, we have learned about the [Coverage class](Coverage.ipynb#A-Coverage-Class) which collects coverage information for an executed Python function. As an example, the function [`cgi_decode()`](Coverage.ipynb#A-CGI-Decoder) was introduced. The function `cgi_decode()` takes a string encoded for a website URL and decodes it back to its original form.
Here's what `cgi_decode()` does and how coverage is computed._____no_output_____
<code>
from Coverage import Coverage, cgi_decode_____no_output_____encoded = "Hello%2c+world%21"
with Coverage() as cov:
decoded = cgi_decode(encoded)_____no_output_____decoded_____no_output_____print(cov.coverage());_____no_output_____
</code>
### Trace Coverage
First, we will introduce the concept of execution traces, which are a coarse abstraction of the execution path taken by an input. Compared to a path, a trace ignores the sequence in which statements are exercised and how often each statement is exercised. To turn the set of exercised statements into a compact species identifier, we use two standard-library functions:
* `pickle.dumps()` - serializes an object by producing a byte array from all the information in the object
* `hashlib.md5()` - produces a 128-bit hash value from a byte array_____no_output_____
<code>
import pickle
import hashlib_____no_output_____def getTraceHash(cov):
pickledCov = pickle.dumps(cov.coverage())
hashedCov = hashlib.md5(pickledCov).hexdigest()
return hashedCov_____no_output_____
</code>
Remember our model for the Naval Enigma machine? Each message must be decrypted using exactly one trigram while multiple messages may be decrypted by the same trigram. Similarly, we need each input to yield exactly one trace hash while multiple inputs can yield the same trace hash._____no_output_____Let's see whether this is true for our `getTraceHash()` function._____no_output_____
<code>
inp1 = "a+b"
inp2 = "a+b+c"
inp3 = "abc"
with Coverage() as cov1:
cgi_decode(inp1)
with Coverage() as cov2:
cgi_decode(inp2)
with Coverage() as cov3:
cgi_decode(inp3)_____no_output_____
</code>
The inputs `inp1` and `inp2` execute the same statements:_____no_output_____
<code>
inp1, inp2_____no_output_____cov1.coverage() - cov2.coverage()_____no_output_____
</code>
The difference between both coverage sets is empty. Hence, the trace hashes should be the same:_____no_output_____
<code>
getTraceHash(cov1)_____no_output_____getTraceHash(cov2)_____no_output_____assert getTraceHash(cov1) == getTraceHash(cov2)_____no_output_____
</code>
In contrast, the inputs `inp1` and `inp3` execute _different_ statements:_____no_output_____
<code>
inp1, inp3_____no_output_____cov1.coverage() - cov3.coverage()_____no_output_____
</code>
Hence, the trace hashes should be different, too:_____no_output_____
<code>
getTraceHash(cov1)_____no_output_____getTraceHash(cov3)_____no_output_____assert getTraceHash(cov1) != getTraceHash(cov3)_____no_output_____
</code>
### Measuring Trace Coverage over Time
In order to measure trace coverage for a `function` executing a `population` of fuzz inputs, we slightly adapt the `population_coverage()` function from the [Chapter on Coverage](Coverage.ipynb#Coverage-of-Basic-Fuzzing)._____no_output_____
<code>
def population_trace_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = set([getTraceHash(cov)])
# singletons and doubletons -- we will need them later
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons_____no_output_____
</code>
Let's see whether our new function really contains coverage information only for *two* traces given our three inputs for `cgi_decode`._____no_output_____
<code>
all_coverage = population_trace_coverage([inp1, inp2, inp3], cgi_decode)[0]
assert len(all_coverage) == 2_____no_output_____
</code>
Unfortunately, the `cgi_decode()` function is too simple. Instead, we will use the original Python [HTMLParser](https://docs.python.org/3/library/html.parser.html) as our test subject._____no_output_____
<code>
from Fuzzer import RandomFuzzer
from Coverage import population_coverage
from html.parser import HTMLParser_____no_output_____trials = 50000 # number of random inputs generated_____no_output_____
</code>
Let's run a random fuzzer for $n=50000$ times and plot trace coverage over time._____no_output_____
<code>
# create wrapper function
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)_____no_output_____# create random fuzzer
fuzzer = RandomFuzzer(min_length=1, max_length=100,
char_start=32, char_range=94)
# create population of fuzz inputs
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
# execute and measure trace coverage
trace_timeseries = population_trace_coverage(population, my_parser)[1]
# execute and measure code coverage
code_timeseries = population_coverage(population, my_parser)[1]
# plot trace coverage over time
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.plot(trace_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.title('Trace Coverage Over Time')
# plot code coverage over time
plt.subplot(1, 2, 2)
plt.plot(code_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements covered')
plt.title('Code Coverage Over Time');_____no_output_____
</code>
Above, we can see trace coverage (left) and code coverage (right) over time. Here are our observations.
1. **Trace coverage is more robust**. There are fewer sudden jumps in the graph compared to code coverage.
2. **Trace coverage is more fine-grained.** There are more traces than statements covered at the end (y-axis).
3. **Trace coverage grows more steadily**. The very first input already exercises more than half of all the statements that code coverage reaches after 50k inputs. In contrast, the number of traces covered grows slowly and steadily, since each input can yield only one execution trace.
It is for this reason that one of the most prominent and successful fuzzers today, american fuzzy lop (AFL), uses a similar *measure of progress* (a hash computed over the branches exercised by the input)._____no_output_____### Evaluating the Discovery Probability Estimate
Let's find out how the Good-Turing estimator performs as an estimate of the discovery probability when we are fuzzing to discover execution traces rather than trigrams.
To measure the empirical probability, we execute the same population of inputs (`n=50000`) and measure at regular intervals (`measurements=100` intervals). During each measurement, we repeat the following experiment `repeats=500` times, reporting the average: if the next input yields a new trace, return 1, otherwise return 0. Note that during these repetitions, we do not record the newly discovered traces as observed._____no_output_____
<code>
repeats = 500 # experiment repetitions
measurements = 100 # experiment measurements_____no_output_____emp_timeseries = []
all_coverage = set()
step = int(trials / measurements)
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= set([getTraceHash(cov)])
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
if getTraceHash(cov) not in all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)_____no_output_____
</code>
Now, we compute the Good-Turing estimate over time._____no_output_____
<code>
gt_timeseries = []
singleton_timeseries = population_trace_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)_____no_output_____
</code>
Let's go ahead and plot both time series._____no_output_____
<code>
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');_____no_output_____
</code>
Again, the Good-Turing estimate appears to be *highly accurate*. In fact, the empirical estimator has much lower precision, as indicated by the large swings. You can try to increase the number of repetitions (`repeats`) to get more precision for the empirical estimates, at the cost of waiting much longer._____no_output_____### Discovery Probability Quantifies Residual Risk
Alright. You have gotten a hold of a couple of powerful machines and used them to fuzz a software system for several months without finding any vulnerabilities. Is the system vulnerable?
Well, who knows? We cannot say for sure; there is always some residual risk. Testing is not verification. Maybe the next test input that is generated reveals a vulnerability.
Let's say *residual risk* is the probability that the next test input reveals a vulnerability that has not been found, yet. Böhme \cite{stads} has shown that the Good-Turing estimate of the discovery probability is also an estimate of the maximum residual risk.
**Proof sketch (Residual Risk)**. Here is a proof sketch that shows that an estimator of discovery probability for an arbitrary definition of species gives an upper bound on the probability to discover a vulnerability when none has been found: Suppose, for each "old" species A (here, execution trace), we derive two "new" species: Some inputs belonging to A expose a vulnerability while others belonging to A do not. We know that _only_ species that do not expose a vulnerability have been discovered. Hence, _all_ species exposing a vulnerability and _some_ species that do not expose a vulnerability remain undiscovered. Hence, the probability to discover a new species gives an upper bound on the probability to discover (a species that exposes) a vulnerability. **QED**.
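To make this concrete, here is a minimal sketch, with hypothetical numbers, of how the Good-Turing estimate bounds the residual risk:_____no_output_____
<code>
# A minimal sketch with hypothetical numbers: after n inputs without a crash,
# the GT estimate f1 / n upper-bounds the probability that the next input
# belongs to an undiscovered (possibly vulnerability-exposing) species.
n = 100000   # fuzz inputs executed so far, none revealed a vulnerability
f1 = 42      # species (execution traces) observed exactly once
residual_risk_bound = f1 / n
print("Residual risk of the next input is at most %.5f" % residual_risk_bound)_____no_output_____
</code>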
An estimate of the discovery probability is useful in many other ways.
1. **Discovery probability**. We can estimate, at any point during the fuzzing campaign, the probability that the next input belongs to a previously unseen species (here, that it yields a new execution trace, i.e., exercises a new set of statements).
2. **Complement of discovery probability**. We can estimate the proportion of *all* inputs the fuzzer can generate for which we have already seen the species (here, execution traces). In some sense, this allows us to quantify the *progress of the fuzzing campaign towards completion*: If the probability of discovering a new species is too low, we might as well abort the campaign.
3. **Inverse of discovery probability**. We can predict the number of test inputs needed, so that we can expect the discovery of a new species (here, execution trace)._____no_output_____## How Do We Know When to Stop Fuzzing?
In fuzzing, we have measures of progress such as [code coverage](Coverage.ipynb) or [grammar coverage](GrammarCoverageFuzzer.ipynb). Suppose, we are interested in covering all statements in the program. The _percentage_ of statements that have already been covered quantifies how "far" we are from completing the fuzzing campaign. However, sometimes we know only the _number_ of species $S(n)$ (here, statements) that have been discovered after generating $n$ fuzz inputs. The percentage $S(n)/S$ can only be computed if we know the _total number_ of species $S$. Even then, not all species may be feasible._____no_output_____### A Success Estimator
If we do not _know_ the total number of species, then let's at least _estimate_ it: As we have seen before, species discovery slows down over time. In the beginning, many new species are discovered. Later, many inputs need to be generated before discovering the next species. In fact, given enough time, the fuzzing campaign approaches an _asymptote_. It is this asymptote that we can estimate._____no_output_____In 1984, Anne Chao, a well-known theoretical bio-statistician, developed an estimator $\hat S$ which estimates the asymptotic total number of species $S$:
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{f_1^2}{2f_2} & \text{if $f_2>0$}\\
S(n) + \frac{f_1(f_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $f_1$ and $f_2$ are the numbers of singleton and doubleton species, respectively (that have been observed exactly once or twice, resp.), and
* where $S(n)$ is the number of species that have been discovered after generating $n$ fuzz inputs._____no_output_____So, how does Chao's estimate perform? To investigate this, we generate `trials=400000` fuzz inputs using a fuzzer setting that allows us to see an asymptote in a few seconds. We measure trace coverage. Half-way into our fuzzing campaign (`trials/2=200000`), we generate Chao's estimate $\hat S$ of the asymptotic total number of species. Then, we run the remainder of the campaign to see the "empirical" asymptote.
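As a quick reference, the Chao1 estimator can also be written as a small standalone helper (a sketch that simply mirrors the formula above):_____no_output_____
<code>
def chao1(Sn, f1, f2):
    """A sketch of the Chao1 estimator: the asymptotic total number of species,
    given S(n) discovered species, f1 singletons, and f2 doubletons."""
    if f2 > 0:
        return Sn + f1 * f1 / (2 * f2)
    return Sn + f1 * (f1 - 1) / 2_____no_output_____
</code>
The experiment below computes the same estimate inline._____no_output_____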
<code>
trials = 400000
fuzzer = RandomFuzzer(min_length=2, max_length=4,
char_start=32, char_range=32)
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, trace_ts, f1_ts, f2_ts = population_trace_coverage(population, my_parser)_____no_output_____time = int(trials / 2)
time_____no_output_____f1 = f1_ts[time]
f2 = f2_ts[time]
Sn = trace_ts[time]
if f2 > 0:
hat_S = Sn + f1 * f1 / (2 * f2)
else:
hat_S = Sn + f1 * (f1 - 1) / 2_____no_output_____
</code>
After executing `time` fuzz inputs (half of all), we have covered these many traces:_____no_output_____
<code>
time_____no_output_____Sn_____no_output_____
</code>
We can estimate there are this many traces in total:_____no_output_____
<code>
hat_S_____no_output_____
</code>
Hence, we have achieved this percentage of the estimate:_____no_output_____
<code>
100 * Sn / hat_S_____no_output_____
</code>
After executing `trials` fuzz inputs, we have covered these many traces:_____no_output_____
<code>
trials_____no_output_____trace_ts[trials - 1]_____no_output_____
</code>
The accuracy of Chao's estimator is quite reasonable. It isn't always accurate -- particularly at the beginning of a fuzzing campaign when the [discovery probability](WhenIsEnough.ipynb#Measuring-Trace-Coverage-over-Time) is still very high. Nevertheless, it demonstrates the main benefit of reporting a percentage to assess the progress of a fuzzing campaign towards completion.
***Try it***. *Try setting `trials` to 1 million and `time` to `int(trials / 4)`.*_____no_output_____### Extrapolating Fuzzing Success
<!-- ## Cost-Benefit Analysis: Extrapolating the Number of Species Discovered -->
Suppose you have run the fuzzer for a week, which generated $n$ fuzz inputs and discovered $S(n)$ species (here, covered $S(n)$ execution traces). Instead of running the fuzzer for another week, you would like to *predict* how many more species you would discover. In 2003, Anne Chao and her team developed an extrapolation methodology to do just that. We are interested in the number $S(n+m^*)$ of species discovered if $m^*$ more fuzz inputs were generated:
\begin{align}
\hat S(n + m^*) = S(n) + \hat f_0 \left[1-\left(1-\frac{f_1}{n\hat f_0 + f_1}\right)^{m^*}\right]
\end{align}
* where $\hat f_0=\hat S - S(n)$ is an estimate of the number $f_0$ of undiscovered species, and
* where $f_1$ is the number of singleton species, i.e., those we have observed exactly once.
We can keep track of the number $f_1$ of singletons during the fuzzing campaign itself. The estimate $\hat f_0$ of the number of undiscovered species, we can simply derive from Chao's estimate $\hat S$ and the number of observed species $S(n)$.
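A direct transcription of the extrapolation formula (a sketch; `Sn`, `f1`, `f0`, `n`, and `m_star` correspond to the symbols above):_____no_output_____
<code>
def chao_extrapolate(Sn, f1, f0, n, m_star):
    """A sketch of Chao's extrapolator: the expected number of discovered
    species after generating m_star additional fuzz inputs."""
    return Sn + f0 * (1 - (1 - f1 / (n * f0 + f1)) ** m_star)_____no_output_____
</code>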
Let's see how Chao's extrapolator performs by comparing the predicted number of species to the empirical number of species._____no_output_____
<code>
prediction_ts = [None] * time
f0 = hat_S - Sn
for m in range(trials - time):
assert (time * f0 + f1) != 0 , 'time:%s f0:%s f1:%s' % (time, f0,f1)
prediction_ts.append(Sn + f0 * (1 - (1 - f1 / (time * f0 + f1)) ** m))_____no_output_____plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(trace_ts, color='white')
plt.plot(trace_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(trace_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised');_____no_output_____
</code>
The prediction from Chao's extrapolator looks quite accurate. We make the prediction at $time = trials/2$. Despite extrapolating as far into the future as we have already fuzzed (i.e., doubling the number of inputs to `trials`), we can see that the predicted value (black, dashed line) closely matches the empirical value (grey, solid line).
***Try it***. Again, try setting `trials` to 1 million and `time` to `int(trials / 4)`._____no_output_____## Lessons Learned
* One can measure the _progress_ of a fuzzing campaign (as species over time, i.e., $S(n)$).
* One can measure the _effectiveness_ of a fuzzing campaign (as asymptotic total number of species $S$).
* One can estimate the _effectiveness_ of a fuzzing campaign using the Chao1-estimator $\hat S$.
* One can extrapolate the _progress_ of a fuzzing campaign, $\hat S(n+m^*)$.
* One can estimate the _residual risk_ (i.e., the probability that a bug exists that has not been found) using the Good-Turing estimator $GT$ of the species discovery probability._____no_output_____## Next Steps
This chapter is the last in the book! If you want to continue reading, have a look at the [Appendices](99_Appendices.ipynb). Otherwise, _make use of what you have learned and go and create great fuzzers and test generators!______no_output_____## Background
* A **statistical framework for fuzzing**, inspired from ecology. Marcel Böhme. [STADS: Software Testing as Species Discovery](https://mboehme.github.io/paper/TOSEM18.pdf). ACM TOSEM 27(2):1--52
* Estimating the **discovery probability**: I.J. Good. 1953. [The population frequencies of species and the
estimation of population parameters](https://www.jstor.org/stable/2333344). Biometrika 40:237–264.
* Estimating the **asymptotic total number of species** when each input can belong to exactly one species: Anne Chao. 1984. [Nonparametric estimation of the number of classes in a population](https://www.jstor.org/stable/4615964). Scandinavian Journal of Statistics 11:265–270
* Estimating the **asymptotic total number of species** when each input can belong to one or more species: Anne Chao. 1987. [Estimating the population size for capture-recapture data with unequal catchability](https://www.jstor.org/stable/2531532). Biometrics 43:783–791
* **Extrapolating** the number of discovered species: Tsung-Jen Shen, Anne Chao, and Chih-Feng Lin. 2003. [Predicting the Number of New Species in Further Taxonomic Sampling](http://chao.stat.nthu.edu.tw/wordpress/paper/2003_Ecology_84_P798.pdf). Ecology 84, 3 (2003), 798–804._____no_output_____## Exercises
I.J. Good and Alan Turing developed an estimator for the case where each input belongs to exactly one species. For instance, each input yields exactly one execution trace (see function [`getTraceHash`](#Trace-Coverage)). However, this is not true in general. For instance, each input exercises multiple statements and branches in the source code. Generally, each input can belong to one *or more* species.
In this extended model, the underlying statistics are quite different. Yet, all estimators that we have discussed in this chapter turn out to be almost identical to those for the simple, single-species model. For instance, the Good-Turing estimator $C$ is defined as
$$C=\frac{Q_1}{n}$$
where $Q_1$ is the number of singleton species and $n$ is the number of generated test cases.
Throughout the fuzzing campaign, we record for each species the *incidence frequency*, i.e., the number of inputs that belong to that species. Again, we define a species $i$ as a *singleton species* if we have seen exactly one input that belongs to species $i$._____no_output_____### Exercise 1: Estimate and Evaluate the Discovery Probability for Statement Coverage
In this exercise, we create a Good-Turing estimator for the simple fuzzer._____no_output_____#### Part 1: Population Coverage
Implement a function `population_stmt_coverage()` as in [the section on estimating discovery probability](#Estimating-the-Discovery-Probability) that monitors the number of singletons and doubletons over time, i.e., as the number $i$ of test inputs increases._____no_output_____
<code>
from Coverage import population_coverage, Coverage
..._____no_output_____
</code>
**Solution.** Here we go:_____no_output_____
<code>
def population_stmt_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = cov.coverage()
# singletons and doubletons
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons_____no_output_____
</code>
#### Part 2: Population
Use the random fuzzer `RandomFuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` from [the chapter on Fuzzers](Fuzzer.ipynb) to generate a population of $n=10000$ fuzz inputs._____no_output_____
<code>
from Fuzzer import RandomFuzzer
from html.parser import HTMLParser
..._____no_output_____
</code>
**Solution.** This is fairly straightforward:_____no_output_____
<code>
trials = 2000  # increase to 10000 for better convergence; this will take a while_____no_output_____
</code>
We create a wrapper function..._____no_output_____
<code>
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)_____no_output_____
</code>
... and a random fuzzer:_____no_output_____
<code>
fuzzer = RandomFuzzer(min_length=1, max_length=1000,
char_start=0, char_range=255)_____no_output_____
</code>
We fill the population:_____no_output_____
<code>
population = []
for i in range(trials):
population.append(fuzzer.fuzz())_____no_output_____
</code>
#### Part 3: Estimating Probabilities
Execute the generated inputs on the Python HTML parser (`from html.parser import HTMLParser`) and estimate the probability that the next input covers a previously uncovered statement (i.e., the discovery probability) using the Good-Turing estimator._____no_output_____**Solution.** Here we go:_____no_output_____
<code>
measurements = 100 # experiment measurements
step = int(trials / measurements)
gt_timeseries = []
singleton_timeseries = population_stmt_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)_____no_output_____
</code>
#### Part 4: Empirical Evaluation
Empirically evaluate the accuracy of the Good-Turing estimator (using $10000$ repetitions) of the probability to cover new statements using the experimental procedure at the end of [the section on estimating discovery probability](#Estimating-the-Discovery-Probability)._____no_output_____**Solution.** This is as above:_____no_output_____
<code>
# increase to 10000 for better precision (less variance). Will take a while..
repeats = 100_____no_output_____emp_timeseries = []
all_coverage = set()
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= cov.coverage()
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
        # If the set difference is not empty, a new stmt was (dis)covered
if cov.coverage() - all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)_____no_output_____%matplotlib inline
import matplotlib.pyplot as plt
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');_____no_output_____
</code>
### Exercise 2: Extrapolate and Evaluate Statement Coverage
In this exercise, we use Chao's extrapolation method to estimate the success of fuzzing._____no_output_____#### Part 1: Create Population
Use the random fuzzer `RandomFuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` to generate a population of $n=400000$ fuzz inputs._____no_output_____**Solution.** Here we go:_____no_output_____
<code>
trials = 400 # Use 400000 for actual solution. This takes a while!_____no_output_____population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, stmt_ts, Q1_ts, Q2_ts = population_stmt_coverage(population, my_parser)_____no_output_____
</code>
#### Part 2: Compute Estimate
Compute an estimate of the total number of statements $\hat S$ after $n/4=100000$ fuzz inputs were generated. In the extended model, $\hat S$ is computed as
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{Q_1^2}{2Q_2} & \text{if $Q_2>0$}\\
S(n) + \frac{Q_1(Q_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $Q_1$ and $Q_2$ are the numbers of singleton and doubleton statements, respectively (i.e., statements that have been exercised by exactly one or two fuzz inputs, resp.), and
* where $S(n)$ is the number of statements that have been (dis)covered after generating $n$ fuzz inputs._____no_output_____**Solution.** Here we go:_____no_output_____
<code>
time = int(trials / 4)
Q1 = Q1_ts[time]
Q2 = Q2_ts[time]
Sn = stmt_ts[time]
if Q2 > 0:
hat_S = Sn + Q1 * Q1 / (2 * Q2)
else:
hat_S = Sn + Q1 * (Q1 - 1) / 2
print("After executing %d fuzz inputs, we have covered %d **(%.1f %%)** statements.\n" % (time, Sn, 100 * Sn / hat_S) +
"After executing %d fuzz inputs, we estimate there are %d statements in total.\n" % (time, hat_S) +
"After executing %d fuzz inputs, we have covered %d statements." % (trials, stmt_ts[trials - 1]))_____no_output_____
</code>
#### Part 3: Compute and Evaluate Extrapolator
Compute and evaluate Chao's extrapolator by comparing the predicted number of statements to the empirical number of statements._____no_output_____**Solution.** Here's our solution:_____no_output_____
<code>
prediction_ts = [None] * time
Q0 = hat_S - Sn
for m in range(trials - time):
prediction_ts.append(Sn + Q0 * (1 - (1 - Q1 / (time * Q0 + Q1)) ** m))_____no_output_____plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(stmt_ts, color='white')
plt.plot(stmt_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(stmt_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised');_____no_output_____
</code>
| {
"repository": "unibw-patch/fuzzingbook",
"path": "notebooks/WhenToStopFuzzing.ipynb",
"matched_keywords": [
"biostatistics",
"ecology"
],
"stars": null,
"size": 83801,
"hexsha": "cb0baf84f30488f04906888e80058e7084952ac8",
"max_line_length": 788,
"avg_line_length": 33.0967614534,
"alphanum_fraction": 0.5941814537
} |
# Notebook from tAndreani/scATAC-benchmarking
Path: Extra/Buenrostro_2018/test_peaks/Control/run_clustering_frequency.ipynb
<code>
import pandas as pd
import numpy as np
import scanpy as sc
import os
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.cluster import adjusted_mutual_info_score
from sklearn.metrics.cluster import homogeneity_score
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri_____no_output_____df_metrics = pd.DataFrame(columns=['ARI_Louvain','ARI_kmeans','ARI_HC',
'AMI_Louvain','AMI_kmeans','AMI_HC',
'Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC'])_____no_output_____workdir = './peaks_frequency_results/'
path_fm = os.path.join(workdir,'feature_matrices/')
path_clusters = os.path.join(workdir,'clusters/')
path_metrics = os.path.join(workdir,'metrics/')
os.system('mkdir -p '+path_clusters)
os.system('mkdir -p '+path_metrics)_____no_output_____metadata = pd.read_csv('../../input/metadata.tsv',sep='\t',index_col=0)
num_clusters = len(np.unique(metadata['label']))
print(num_clusters)10
files = [x for x in os.listdir(path_fm) if x.startswith('FM')]
len(files)_____no_output_____files_____no_output_____def getNClusters(adata,n_cluster,range_min=0,range_max=3,max_steps=20):
this_step = 0
this_min = float(range_min)
this_max = float(range_max)
while this_step < max_steps:
print('step ' + str(this_step))
this_resolution = this_min + ((this_max-this_min)/2)
sc.tl.louvain(adata,resolution=this_resolution)
this_clusters = adata.obs['louvain'].nunique()
print('got ' + str(this_clusters) + ' at resolution ' + str(this_resolution))
if this_clusters > n_cluster:
this_max = this_resolution
elif this_clusters < n_cluster:
this_min = this_resolution
else:
return(this_resolution, adata)
this_step += 1
print('Cannot find the number of clusters')
print('Clustering solution from last iteration is used:' + str(this_clusters) + ' at resolution ' + str(this_resolution))_____no_output_____for file in files:
file_split = file[:-4].split('_')
method = file_split[1]
print(method)
pandas2ri.activate()
readRDS = robjects.r['readRDS']
df_rds = readRDS(os.path.join(path_fm,file))
fm_mat = pandas2ri.ri2py(robjects.r['data.frame'](robjects.r['as.matrix'](df_rds)))
fm_mat.fillna(0,inplace=True)
fm_mat.columns = metadata.index
adata = sc.AnnData(fm_mat.T)
adata.var_names_make_unique()
adata.obs = metadata.loc[adata.obs.index,]
df_metrics.loc[method,] = ""
#Louvain
sc.pp.neighbors(adata, n_neighbors=15,use_rep='X')
# sc.tl.louvain(adata)
getNClusters(adata,n_cluster=num_clusters)
#kmeans
kmeans = KMeans(n_clusters=num_clusters, random_state=2019).fit(adata.X)
adata.obs['kmeans'] = pd.Series(kmeans.labels_,index=adata.obs.index).astype('category')
    #hierarchical clustering
hc = AgglomerativeClustering(n_clusters=num_clusters).fit(adata.X)
adata.obs['hc'] = pd.Series(hc.labels_,index=adata.obs.index).astype('category')
#clustering metrics
    #adjusted Rand index
ari_louvain = adjusted_rand_score(adata.obs['label'], adata.obs['louvain'])
ari_kmeans = adjusted_rand_score(adata.obs['label'], adata.obs['kmeans'])
ari_hc = adjusted_rand_score(adata.obs['label'], adata.obs['hc'])
#adjusted mutual information
ami_louvain = adjusted_mutual_info_score(adata.obs['label'], adata.obs['louvain'],average_method='arithmetic')
ami_kmeans = adjusted_mutual_info_score(adata.obs['label'], adata.obs['kmeans'],average_method='arithmetic')
ami_hc = adjusted_mutual_info_score(adata.obs['label'], adata.obs['hc'],average_method='arithmetic')
#homogeneity
homo_louvain = homogeneity_score(adata.obs['label'], adata.obs['louvain'])
homo_kmeans = homogeneity_score(adata.obs['label'], adata.obs['kmeans'])
homo_hc = homogeneity_score(adata.obs['label'], adata.obs['hc'])
df_metrics.loc[method,['ARI_Louvain','ARI_kmeans','ARI_HC']] = [ari_louvain,ari_kmeans,ari_hc]
df_metrics.loc[method,['AMI_Louvain','AMI_kmeans','AMI_HC']] = [ami_louvain,ami_kmeans,ami_hc]
df_metrics.loc[method,['Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC']] = [homo_louvain,homo_kmeans,homo_hc]
adata.obs[['louvain','kmeans','hc']].to_csv(os.path.join(path_clusters ,method + '_clusters.tsv'),sep='\t')control1
df_metrics.to_csv(path_metrics+'clustering_scores.csv')_____no_output_____df_metrics_____no_output_____
</code>
| {
"repository": "tAndreani/scATAC-benchmarking",
"path": "Extra/Buenrostro_2018/test_peaks/Control/run_clustering_frequency.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": 2,
"size": 26349,
"hexsha": "cb0d6dae0268195dff407883e09d33903f259d72",
"max_line_length": 312,
"avg_line_length": 33.9987096774,
"alphanum_fraction": 0.5184636988
} |
# Notebook from adam-dziedzic/time-series-ml
Path: pytorch_tutorials/pytorch_tutorial_NLP3_word_embeddings_tutorial.ipynb
<code>
%matplotlib inline_____no_output_____
</code>
Word Embeddings: Encoding Lexical Semantics
===========================================
Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. In NLP, it is almost always the case that your features are
words! But how should you represent a word in a computer? You could
store its ascii character representation, but that only tells you what
the word *is*, it doesn't say much about what it *means* (you might be
able to derive its part of speech from its affixes, or properties from
its capitalization, but not much). Even more, in what sense could you
combine these representations? We often want dense outputs from our
neural networks, where the inputs are $|V|$ dimensional, where
$V$ is our vocabulary, but often the outputs are only a few
dimensional (if we are only predicting a handful of labels, for
instance). How do we get from a massive dimensional space to a smaller
dimensional space?
How about instead of ascii representations, we use a one-hot encoding?
That is, we represent the word $w$ by
\begin{align}\overbrace{\left[ 0, 0, \dots, 1, \dots, 0, 0 \right]}^\text{|V| elements}\end{align}
where the 1 is in a location unique to $w$. Any other word will
have a 1 in some other location, and a 0 everywhere else.
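As a minimal sketch (with a hypothetical two-word vocabulary), such a one-hot vector can be built directly in Pytorch:_____no_output_____
<code>
import torch

# A minimal sketch: a one-hot vector for a hypothetical two-word vocabulary.
word_to_ix = {"mathematician": 0, "physicist": 1}
one_hot = torch.zeros(len(word_to_ix))
one_hot[word_to_ix["physicist"]] = 1
print(one_hot)  # tensor([0., 1.]) -- a single 1 at the word's unique index_____no_output_____
</code>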
There is an enormous drawback to this representation, besides just how
huge it is. It basically treats all words as independent entities with
no relation to each other. What we really want is some notion of
*similarity* between words. Why? Let's see an example.
Suppose we are building a language model. Suppose we have seen the
sentences
* The mathematician ran to the store.
* The physicist ran to the store.
* The mathematician solved the open problem.
in our training data. Now suppose we get a new sentence never before
seen in our training data:
* The physicist solved the open problem.
Our language model might do OK on this sentence, but wouldn't it be much
better if we could use the following two facts:
* We have seen mathematician and physicist in the same role in a sentence. Somehow they
have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
as we are now seeing physicist.
and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.
Getting Dense Word Embeddings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. Think of some other attributes, and imagine
what you might score some common words on those attributes.
If each attribute is a dimension, then we might give each word a vector,
like this:
\begin{align}q_\text{mathematician} = \left[ \overbrace{2.3}^\text{can run},
\overbrace{9.4}^\text{likes coffee}, \overbrace{-5.5}^\text{majored in Physics}, \dots \right]\end{align}
\begin{align}q_\text{physicist} = \left[ \overbrace{2.5}^\text{can run},
\overbrace{9.1}^\text{likes coffee}, \overbrace{6.4}^\text{majored in Physics}, \dots \right]\end{align}
Then we can get a measure of similarity between these words by doing:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = q_\text{physicist} \cdot q_\text{mathematician}\end{align}
Although it is more common to normalize by the lengths:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = \frac{q_\text{physicist} \cdot q_\text{mathematician}}
   {\| q_\text{physicist} \| \| q_\text{mathematician} \|} = \cos (\phi)\end{align}
Where $\phi$ is the angle between the two vectors. That way,
extremely similar words (words whose embeddings point in the same
direction) will have similarity 1. Extremely dissimilar words should
have similarity -1.
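As a quick sketch, this similarity can be computed with Pytorch's built-in cosine similarity, using the hypothetical attribute vectors from above (truncated to their first three dimensions):_____no_output_____
<code>
import torch
import torch.nn.functional as F

# A sketch using the hypothetical attribute vectors from above (3 dims only).
q_mathematician = torch.tensor([2.3, 9.4, -5.5])
q_physicist = torch.tensor([2.5, 9.1, 6.4])
# cos(phi): the dot product normalized by the vector lengths
similarity = F.cosine_similarity(q_mathematician, q_physicist, dim=0)
print(similarity)  # close to +1 for similar directions, -1 for opposite ones_____no_output_____
</code>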
You can think of the sparse one-hot vectors from the beginning of this
section as a special case of these new vectors we have defined, where
each word basically has similarity 0, and we gave each word some unique
semantic attribute. These new vectors are *dense*, which is to say their
entries are (typically) non-zero.
But these new vectors are a big pain: you could think of thousands of
different semantic attributes that might be relevant to determining
similarity, and how on earth would you set the values of the different
attributes? Central to the idea of deep learning is that the neural
network learns representations of the features, rather than requiring
the programmer to design them herself. So why not just let the word
embeddings be parameters in our model, and then be updated during
training? This is exactly what we will do. We will have some *latent
semantic attributes* that the network can, in principle, learn. Note
that the word embeddings will probably not be interpretable. That is,
although with our hand-crafted vectors above we can see that
mathematicians and physicists are similar in that they both like coffee,
if we allow a neural network to learn the embeddings and see that both
mathematicians and physicists have a large value in the second
dimension, it is not clear what that means. They are similar in some
latent semantic dimension, but this probably has no interpretation to
us.
In summary, **word embeddings are a representation of the *semantics* of
a word, efficiently encoding semantic information that might be relevant
to the task at hand**. You can embed other things too: part of speech
tags, parse trees, anything! The idea of feature embeddings is central
to the field.
Word Embeddings in Pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we get to a worked example and an exercise, a few quick notes
about how to use embeddings in Pytorch and in deep learning programming
in general. Similar to how we defined a unique index for each word when
making one-hot vectors, we also need to define an index for each word
when using embeddings. These will be keys into a lookup table. That is,
embeddings are stored as a $|V| \times D$ matrix, where $D$
is the dimensionality of the embeddings, such that the word assigned
index $i$ has its embedding stored in the $i$'th row of the
matrix. In all of my code, the mapping from words to indices is a
dictionary named word\_to\_ix.
The module that allows you to use embeddings is torch.nn.Embedding,
which takes two arguments: the vocabulary size, and the dimensionality
of the embeddings.
To index into this table, you must use torch.LongTensor (since the
indices are integers, not floats).
_____no_output_____
<code>
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)_____no_output_____word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor_hello = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor_hello)
print("hello_embed: ", hello_embed)
lookup_tensor_world = torch.tensor([word_to_ix["world"]], dtype=torch.long)
world_embed = embeds(lookup_tensor_world)
print("worlds_embed: ", world_embed)hello_embed: tensor([[-0.8923, -0.0583, -0.1955, -0.9656, 0.4224]], grad_fn=<EmbeddingBackward>)
worlds_embed: tensor([[ 0.2673, -0.4212, -0.5107, -1.5727, -0.1232]], grad_fn=<EmbeddingBackward>)
</code>
An Example: N-Gram Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Recall that in an n-gram language model, given a sequence of words
$w$, we want to compute
\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}
Where $w_i$ is the ith word of the sequence.
In this example, we will compute the loss function on some training
examples and update the parameters with backpropagation.
_____no_output_____
<code>
CONTEXT_SIZE = 5
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)
ngrams = [([test_sentence[i + j] for j in range(CONTEXT_SIZE)], test_sentence[i + CONTEXT_SIZE])
for i in range(len(test_sentence) - CONTEXT_SIZE)]
# trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
# for i in range(len(test_sentence) - 2)]
print("the first 3 ngrams, just so you can see what they look like: ")
print(ngrams[:3])
print("the last 3 ngrams: ")
print(ngrams[-3:])
vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}
class NGramLanguageModeler(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramLanguageModeler, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(context_size * embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).view((1, -1))
out = F.relu(self.linear1(embeds))
out = self.linear2(out)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=1)
# print("log probs: ", log_probs)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(1):
total_loss = 0
for context, target in ngrams:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print("losses: ", losses)
print("The loss decreased every iteration over the training data!")the first 3 ngrams, just so you can see what they look like:
[(['When', 'forty', 'winters', 'shall', 'besiege'], 'thy'), (['forty', 'winters', 'shall', 'besiege', 'thy'], 'brow,'), (['winters', 'shall', 'besiege', 'thy', 'brow,'], 'And')]
the last 3 ngrams:
[(['thy', 'blood', 'warm', 'when', 'thou'], "feel'st"), (['blood', 'warm', 'when', 'thou', "feel'st"], 'it'), (['warm', 'when', 'thou', "feel'st", 'it'], 'cold.')]
losses: [506.0660638809204]
The loss decreased every iteration over the training data!
</code>
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep
learning. It is a model that tries to predict words given the context of
a few words before and a few words after the target word. This is
distinct from language modeling, since CBOW is not sequential and does
not have to be probabilistic. Typically, CBOW is used to quickly train
word embeddings, and these embeddings are used to initialize the
embeddings of some more complicated model. Usually, this is referred to
as *pretraining embeddings*. It almost always helps performance a couple
of percent.
The CBOW model is as follows. Given a target word $w_i$ and an
$N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$
and $w_{i+1}, \dots, w_{i+N}$, referring to all context words
collectively as $C$, CBOW tries to minimize
\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}
where $q_w$ is the embedding for word $w$.
Implement this model in Pytorch by filling in the class below. Some
tips:
* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use .view() if you need to
reshape.
_____no_output_____
<code>
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
EMBEDDING_DIM = 10
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self, vocab_size, embedding_dim):
super(CBOW, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear = nn.Linear(embedding_dim, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs)
# print("embeds: ", embeds)
qsum = torch.sum(embeds, dim=0)
# print("qsum: ", qsum)
out = self.linear(qsum)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=0)
# print("log probs: ", log_probs)
return log_probs
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
context_vector = make_context_vector(data[0][0], word_to_ix) # example
print("context vector: ", context_vector)
losses = []
loss_function = nn.NLLLoss()
model = CBOW(len(vocab), EMBEDDING_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(10):
total_loss = 0
for context, target in data:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
# context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
context_idxs = make_context_vector(context, word_to_ix)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
        # loss_function expects a minibatch dimension; unsqueeze(0) makes a batch of size 1
loss = loss_function(log_probs.unsqueeze(0), torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print(losses) # The loss decreased every iteration over the training data![(['We', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), (['about', 'to', 'the', 'idea'], 'study'), (['to', 'study', 'idea', 'of'], 'the'), (['study', 'the', 'of', 'a'], 'idea')]
context vector: tensor([41, 15, 12, 30])
[256.88402342796326, 253.96123218536377, 251.09937977790833, 248.29585433006287, 245.548273563385, 242.8544623851776, 240.2124011516571, 237.6202473640442, 235.07628321647644, 232.57891368865967]
</code>
| {
"repository": "adam-dziedzic/time-series-ml",
"path": "pytorch_tutorials/pytorch_tutorial_NLP3_word_embeddings_tutorial.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 21307,
"hexsha": "cb0dd9a1cee45357d97edcf9a20ea154e5fa3f25",
"max_line_length": 7336,
"avg_line_length": 63.4136904762,
"alphanum_fraction": 0.6402121369
} |
# Notebook from ebayandelger/MSDS600
Path: Reddit_Sentiment_Assignment.ipynb
<code>
# This handy piece of code changes Jupyter Notebooks margins to fit your screen.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))_____no_output_____
</code>
## Be sure you've installed the praw and tqdm libraries. If you haven't, you can run the line below. Node.js is required to install the jupyter widgets in a few cells. These two cells can take a while to run and won't show progress; you can also run the commands in the command prompt (without the !) to see the progress as they install.
If conda is taking a long time, you might try the mamba installer: https://github.com/TheSnakePit/mamba
`conda install -c conda-forge mamba -y`
Then installing packages with mamba should be done from the command line (console or terminal)._____no_output_____
<code>
!conda install tqdm praw nodejs -yCollecting package metadata (current_repodata.json): ...working... done
</code>
Install the jupyter widget to enable tqdm to work with jupyter lab:_____no_output_____
<code>
!jupyter labextension install @jupyter-widgets/jupyterlab-managerAn error occured.
ValueError: Please install Node.js and npm before continuing installation. You may be able to install Node.js from your package manager, from conda, or directly from the Node.js website (https://nodejs.org).
See the log file for details: C:\Users\ENKHBA~1\AppData\Local\Temp\jupyterlab-debug-b79fsi1d.log
</code>
# Scrape Reddit Comments for a Sentiment Analysis - Assignment
### Go through the notebook and complete the code where prompted
##### This assignment was adapted from a number of sources including: http://www.storybench.org/how-to-scrape-reddit-with-python/ and https://towardsdatascience.com/scraping-reddit-data-1c0af3040768_____no_output_____
<code>
# Import all the necessary libraries
import praw # Import the Praw library: https://praw.readthedocs.io/en/latest/code_overview/reddit_instance.html
import pandas as pd # Import Pandas library: https://pandas.pydata.org/
import datetime as dt # Import datetime library
import matplotlib.pyplot as plt # Import Matplotlib for plotting
from tqdm.notebook import tqdm # progress bar used in loops
import credentials as cred # make sure to enter your API credentials in the credentials.py file_____no_output_____
</code>
# Prompt
### In the cell below, enter your client ID, client secret, username, and password in the appropriate places in your credentials.py file; the cell reads them through the `cred` module imported above so the secrets never appear in the notebook itself_____no_output_____
<code>
reddit = praw.Reddit(client_id = cred.client_id,          # read from credentials.py rather than typed here
                     client_secret = cred.client_secret,  # never hard-code API secrets in a shared notebook
                     username = cred.username,
                     password = cred.password,
                     user_agent = 'msds')_____no_output_____
</code>
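A `credentials.py` file compatible with the cell above might look like the sketch below; the variable names are assumptions, chosen here to match the keyword arguments passed to `praw.Reddit`. Keep this file out of version control.
<code>
# credentials.py -- a hypothetical sketch; fill in your own Reddit API values
# (create an app at https://www.reddit.com/prefs/apps to get the ID and secret)
client_id = 'YOUR_CLIENT_ID'
client_secret = 'YOUR_CLIENT_SECRET'
username = 'YOUR_REDDIT_USERNAME'
password = 'YOUR_REDDIT_PASSWORD'
</code>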
# Prompt
## In the cell below, enter a subreddit for which you wish to compare the sentiment of the post comments, decide how far back to pull posts, and decide how many posts to pull comments from.
## We will be comparing two subreddits, so think of a subject where a comparison might be interesting (e.g. if there are two sides to an issue which may show up in the sentiment analysis as positive and negative scores)._____no_output_____
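Because two subreddits will be compared, it can help to wrap the scraping logic in a reusable function. `get_comments` below is a hypothetical helper mirroring the loop in the next cell; it assumes the `reddit` instance and the tqdm import from the earlier cells.
<code>
def get_comments(subreddit_name, limit=200):
    """Return top-level comment bodies from a subreddit's hot posts.

    Hypothetical helper mirroring the loop in the next cell; assumes the
    praw.Reddit instance `reddit` created above.
    """
    comments = []
    for post in tqdm(reddit.subreddit(subreddit_name).hot(limit=limit), total=limit):
        post.comments.replace_more(limit=0)  # drop "load more comments" stubs
        for top_level_comment in post.comments:
            comments.append(top_level_comment.body)
    return comments

# e.g. compare two sides of an issue:
# side_a = get_comments('GlobalWarming')
# side_b = get_comments('AnotherSubreddit')
</code>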
<code>
number_of_posts = 200
time_period = 'all' # use posts from all time
# .top() can use the time_period argument
# subreddit = reddit.subreddit('').top(time_filter=time_period, limit=number_of_posts)
subreddit = reddit.subreddit('GlobalWarming').hot(limit=number_of_posts)
# Create an empty list to store the data
subreddit_comments = []
# go through each post in our subreddit and add each top-level comment body to our list
# the value for 'total' here needs to match 'limit' in reddit.subreddit().hot() above
for post in tqdm(subreddit, total=number_of_posts):
    submission = reddit.submission(id=post.id)  # post is already a Submission object; look it up again by its id
    submission.comments.replace_more(limit=0)  # drop the "load more comments" and "continue this thread" stubs so only loaded top-level comments are iterated
    for top_level_comment in submission.comments:
        subreddit_comments.append(top_level_comment.body)  # add the comment to our list of comments_____no_output_____# View the comments.
print(subreddit_comments)['Part 2 would probably go something like this. \n\n"Ok so he\'s ill, but the important thing is that it\'s not my fault"', 'This is perfect and needs to be made into a film.', 'This actually was well written and so much truth', "That's actually a wonderful allegory. Thanks for posting!", 'I love your writing', " Hey, he might not actually be ill tho. There is still a chance he's not.\n There is still a chance everyone is lying about climate change.", "> Is it already too late to do anything. \n\nNot necessarily. It is likely too late to stave off +2C, but that doesn't mean we can't do anything about the problem. It just means that we'll have to kick our adaptation into high gear because the window of possibility for mitigation has closed.\n\n> Is everything i want to accomplish in life impossible? \n\nNo. Go to school to be an engineer, because they will be absolutely KEY in designing resilient infrastructure. You will also have considerable job security, because the demand for your skills will only increase as things worsen.\n\n> Will i see the end of humanity?! \n\nNo. Whoever is telling you that you will is blowing smoke up your tuchus. Geopolitically speaking, things could get quite problematic as developing nations are destabilized and climate refugees are mobilized and displaced en masse, but the species will not die out as a result of near-term climate change. We survived the last ice age. We'll make it through. It will be painful, sure, but not fatal.\n\n> Is everything i do meaningless? \n\nNot if you dedicate your life to living sustainably, your career to on-the-ground skills, and your energy to promoting awareness among your peers, friends, and family.\n\n> I’m really scared. \n\nFear is like a rocking chair. Sure, it gives you something to do, but it won't actually *get* you anywhere. Channel your fear into positive changes in your life. You control the amount of energy you put into making the world a better place, and global warming doesn't change the fact that you cannot control the world around you. You may be hit by a bus tomorrow on your way to school. Is that a reason not to wake up tomorrow? Of course not.\n\n> Is this the end? \n\nNo.\n\n> are there no other options to save earth?\n\nOf course there are options. Take a deep breath. The planet will make it through just fine. It's been a flaming hellscape and a frozen ball of ice, and yet life found a way. We might enter a period of destabilized ecosystems and political unrest, but you won't die in a fiery inferno tomorrow, or the next day, or the next day. Take it step by step and build a life you can be proud of. After a while, you might actually change the people around you for the better.", "Don't be scared. I do not see any value in that, and it is something you can control. \n\nFeel empowered by your awareness. For example, I have convinced my fiance that we have no right thinking about having kids. I am prioritizing land ownership and practicing horticulture. Also, many musical instruments don't require power, which is helpful if things get absurdly bad. \n\nKnowing possible future conditions can be very empowering.", "I'm 33. I feel the same way.", 'YOU DEAD BOY', "I think that in our lifetimes, it's going to make things worse, but it won't kill us all. Species extinctions make us all poorer, since a world of diverse species is better, and we'll have to live with that. Some areas will flood, and the people living there will have to go elsewhere. 
A lot of places will have hotter summers and more and more violent storms. We can live with all of that, it's just not as good.", 'I’m 2 years older than you and I came to know about climate change when I was your age and I was very scared like you. I’m here to tell you that NO humanity is not ending with you! all your work is NOT a meaningless and this is NOT the end. Although much damage has been done already, we can’t lose hope here. I commend you actually for having the will to at Least learn about climate change. You’re better off than people who just deny it! There are solutions around us in the nature giving us signs to look at them and use them. And yes as cheesy as it sounds ‘change is possible’!! I believe youth like us should use our voice and talk about the climate issues and demand solutions and well initiate change!! I don’t want to give your false affirmations and tell you everything is going to be okay because it will not be okay if we don’t act now! So don’t lose hope and try and start looking for solutions around you and search. It will surely help you!', 'Yeah my 15 year old son told me the same thing, so I’m definitely watching this post...', "Just stop.\n\nI'm 41 and live in Sweden whatever tens of meter above the sea level and so far I haven't noticed just about shit.\n\nThe coral reefs seem to have it worst at the moment?\n\nIf necessary we could run nuclear plants or use renewable energy to remove CO2. As for whatever we'd want to use the resources that's a different story.\n\nI don't think human kind is doomed from it. Yet. Life may change with it but humans have had that impact for quite some time now.\n\nOf course we should try to act better but I don't think you should kinda worry if you can't have a life because of it.", "Basically (no '?'). Most likely. No. Yes. Not yet. It's complicated.\n\n​\n\nFor more precise answers: are you on the 'winning' side of capital ?", 'This is a pretty typical disaster mate... Just a large forest fire.', 'Siberia. Just saying.', '[removed]', 'Why attack on meat industry, and animal lifes. while oil industry fuck up whole world and create wars and destruction every where.', "Pigs don't emit greenhouse gasses!\n\nOnly Ruminants, like cattle, do.\n\nSo Bacon and Pork chops and BBQ ribs are fine.\n\nJust avoid the steaks and burgers.", 'I am Chinese, so my English is relatively poor. As other replies said,\xa0the approach will not reduce carbon emissions by 50%. But if it can be implemented as a policy, the reduction in carbon emissions can still be expected.\n\nIf implemented, there will be other effects. For example, people will wonder why the government has set such a policy. With the popularity of "Friday for Future" action, people may learn more about global warming.\n\nGlobal warming will lead to reduced food production and reduced fresh water. A smaller population will have less competition and less injustice.', 'It is already a thing. Lots of people are saying this. \nSome people in 1st world countries are using very flawed logic though....\n"Its OK for me to buy lots of new stuff, commute to work and fly on holiday as much as I like if we have less children. 
I won\'t mind at all if in 30 years time old people outnumber the young and there aren\'t enough taxpayers to support me or our society when I\'m old and want to retire..."', "We need less billionaires, poor don't have much output", "It's already expensive to have children so there already is a huge financial incentive not to.", "The population increase is a result of our ability to burn fossil fuel which means more energy, more efficient means of production and thus a better capacity to support more population and denser population as well. The population increased naturally as we could support more people and as each individual was able to consume more.\n\nYou cannot really convince or force an entire race to stop having kids but you can convince or entice people to stop burning oil. This will lead to a natural decrease in consumption, population and emissions.\n\nForcing people not to have kids will be a band aid solution because that's not the root of the problem. Even if you get 50% you have not addressed the fact that the remaining ones use single use plastic and burn oil and are still polluting", 'Population decline has to be low, otherwise you get a rapidly aging population that can’t be supported by the productivity of the working people. Many western nations already have decreasing domestic populations that are supplemented with immigration, which isn’t good for the immigrant countries of origin like in Africa or Eastern Europe because the immigrants tend to be the people who can afford to move, which draws money, labor, and knowledge out of those nations.\n\nAlso if you reduced the population by half, you wouldn’t necessarily reduce the consumption or pollution by half. For example, an oil or coal power plant producing power for electric cars will be more efficient than thousands of little gasoline engines for cars, but a small population wouldn’t really justify a power plant’s cost so even though they use less fossil fuels overall, it’s still more per person. It’s like the term “economies of scale”.', 'One thought i have read recently i found very intersting: Education of women in developing countries can help tackle climate change. The growth in population we see in many poor countries is mostly because women there are not meant or able to have a career therefore their only goal is to have children/marry. 130 Million girls today are denied education and it believed, that a 100% enrollment of girls in all countries could decrease population by nearly 900 million people by 2050. So your thought isnt stupid, but it certainly is a hard thing to achieve.', "It's a combination of both, not just population reduction but also proactive climate mitigation actions like investment in renewables.\n\nThis is the crux of the ongoing debate when it comes to climate treaties. Most of the pollution comes from wealthy nations from the first world yet developing nations are criticized for growing their economies and their population by using fossil fuels. No one can dictate how much each nation can grow in terms of population. \n\nWhat really needs to happen is for the developed world to give developing world the technologies to bypass the use of fossil fuels to grow their economies and to help girls get an education and careers in the developing world. Wealthier families generally have fewer children. This is why there has been a general push to move developing countries out of poverty. 
It's all interlinked so we need to stop looking through the nationalistic lens and return to the global perspective.", '“How to decarbonize America”: https://podcasts.apple.com/us/podcast/the-ezra-klein-show/id1081584611?i=1000489267291\n\nBasically he says if we aggressively convert everything to electric (vehicles, home heating) we don’t have to change our lifestyles and we’ll pay $1-2k less per year.', "You bring up an uncomfortable truth that most people don't want to entertain. And when you think about it, that reaction makes sense. Many biologists believe that the drive to reproduce is the fundamental force that drives all human activity (all life on Earth, really). It is a hard instinct to turn against.\n\nBut really, over time, there is not another option that will yield nearly as much benefit when it comes to slowing global warming. The naysayers in this thread can't argue the simple logic that fewer people on Earth = less energy consumption and reduced Co2 emissions inherently. And frankly, if not for our absolutely massive population boom over the past 200 years, global warming would not be nearly as a big a threat today as it currently is.\n\nReally, citizens should be pushing for this to happen now, while they still have some kind of chance at helping to shape the legislation to make this cutback in reproduction as ethical as possible. Because if we wait too long, governments won't decide to incentivize not having kids - they'll just enforce but having kids, probably with some kind of lottery system.", 'Less people is less emissions? Sure. But the amount and which people is where this gets complicated:\nBasically, 50% less people ≠ 50% less emissions. The people who make the most emissions are a lot less than 50% of us. \n\nWe all have a different footprint. (Do you own a car? Do you take private jets? How big is your house? Are you the CEO of Amazon?)', "Every week we have new genius popping up with exactly the same stupid idea. Can't we just ban these posts?", "I already have a massive incentive to not have kids, at least in the US: They're expensive. Just _having_ the kid is prohibitively expensive. Which is why my wife and I have no plans on having any.", 'The problem is that the changes are in the **probability** of events occurring, so it\'s hard to point to an individual event and say "This wouldn\'t have happened but for Global Warming".\n\nYou have to look at statistical changes in frequency or intensity that are consistent with global warming, like the greater strength of Atlantic hurricanes, or frequency of California wildfires.\n\nThat said, some things are unprecedented, like loss of polar ice.', 'Extreme weather events occur from time to time even without warming. The key is when they occur too often or set new records. For example, sucessive years of hot weather have caused huge areas of coral reef to die around Australia. This summer there was a record highest temperature in Siberia, rsing temperatures caused huge fires where there used to be permafrost.', "And we'll probably still be told to turn them off at every stop during 100+ degree days.", 'Political activism. Obviously reducing your carbon footprint is the ethical thing to do, but success is going to depend on government policy. So politics is the most important front in the battle to save the future.', "I like the reply here about political activism, but let's be real here. Without boatloads of money or fame, our voice and our vote lack meaningful agency. 
\n\nIts absolutely the 'right' way to approach things. But in terms of effecting change, I think the only realistic way for the people to move the needle is by looking at coordinating a range of non-violent activism (sit-ins, blocking traffic) to destruction of corporate property. \n\nSo since these groups do not advertise themselves if they do exist, you could start one, or start small by finding an Extinction Rebellion group to join. \n\nAm I wrong here?", 'Consume less animal products, plant some trees, have compost pile in home if you can', "Similar age, similar climate anxiety. If you're in the US, try out this tool to see some projections. [US climate projections](https://www.climate.gov/maps-data/data-snapshots/averagemaxtemp-decade-LOCA-rcp85-2030-07-00?theme=Projections)\nEither way, I recommend the UN reports from 2019.\n\nOn a personal note, I find listening to experts extremely helpful. I tune in for live broadcasts and read all the reports I can. I like to know what's going on & feel like I'm doing all the right things for my family -- but also I have friends who know how bad this is and choose to not be as in touch for their own sanity. I don't think there's a wrong way to feel about something like this.", 'You should come to terms with this now. Get therapy, or find others to talk about this with. Placing any emotional value on the future is not a good strategy in 2020.', 'dont put too much faith into anything other than maybe global temperature increase and/or co2 ppm, co2e ppm. there is no way to know what exact effect it will have on your life as an ‘individual’. we can loosely predict the effects to our **civilisation**. \nas a user of r/collapse, if you ever go there especially discount any “”predictions”” if you can even call it that. NTHE is thrown about constantly and measured posts saying that BOE wont be this or next year, etc. ridiculous, dont listen to redditors in general , not even me', "I know somebody who studied climate science who told me climate predictions are just that: predictions, models. Not even the smartest climate scientist in the world knows for sure what the world will look like in 20, 50 or 100 years. Humans causing the climate to change is a new phenomenon, it's not like we have previous experiences to compare this with.\n\nPersonally, I can't deal with climate reports anymore. I've cared for so long while most people around me are still indifferent. I do what I can and I do worry about the future sometimes, but it's not like that wouldn't have been the case if climate change wasn't a thing. Illness, death, natural disasters have always been a part of the human experience. The only major thing I still worry about is having kids - looking at climate change reports and still deciding to have them seems cruel.", 'Very', "There are plenty of scenarios and none look good. But not having ice on poles isn't looking good. From more extreme temperatures, being more cold in colder zones and more heat in hotter zones, to more heat absorption from having less of a reflective surface on the planet no bounce sun light, from the release of even worse gases locked in by ice, to the ocean acidification, to global ocean conveyor belt stopping, breaking food chains, and whatever more we don't know, I would say it is really normal you're feeling depressed. I know I was feeling depressed myself over this. 
And I was reading stuff everyday, sharing stuff everyday, but the thing is, while personal mindset of ourselves have to act, global political changes have to happen. So you should be very worried. I don't understand why people can sometimes stand up for animal rights, human rights, huge protest over a single and some times simple thing, but for what makes everything for us, our world, everybody ignore it in general.\nBut life goes on, we have to live right now, you may even die before you can see real changes. It can help to ease your mind, to develop some survival skills. I've learned to brew beer!\nNew technology is coming up everyday.\nI think we are at a point of something really big to come, as the next step we will face it will be a global change. Covid-19 has showed us we aren't prepared to big things, but we are learning and we will have to change. A global energy solution would most certainly help.\nSometime I feel hopeful, other times depressed. But when you're walking on a thin line, maybe it's just better to don't look down.\nSorry for the confusing speech", '/r/ClimatePreparation', "You should have a healthy appreciation for the present. It's not going to get much better than this", 'I know this is really to discuss Global Warming, but as someone pretty close in age (24) and battling depression and anxiety I can 100% relate. I wholeheartedly do my best to force my mentality to steer towards an ideology something like this. "Worry about what you have control over and can actively work on to obtain betterment or progress in." Let\'s say the world as we know it ends in the next 20 years. Is there anything you can actively work on to change or stop this? I mean if you want to reallllyyyy get into it. You could "waste" a bunch of your money on supplies, weapons, storage facility, bunker etc. that\'s entirely up to you. Moving more towards the subreddits topic in particular. There are things you can attempt to do that will minimize your environmental footprint. Recycle, buy electric over gas powered things. I bought an electric mower and am saving up for an electric/hybrid car. Bike or walk to places you visit in your immediate vicinity instead of driving, if you do. Use public transportation if it\'s available to you. Carpool with friends or co-workers if this would be viable for you. Try to eat less meat. You don\'t have to go full vegetarian or vegan, but I believe any bit helps. On a larger scale though, it\'s debatable if these sort of efforts will really bear any meaningful impact. It\'s pretty set in stone that major corporations are not globally advocating or making changes to go green. Many experts believe that it is already too late or getting very close to being too late. Any switches now would only dampen the snowballing effect, but it\'s already happening. The evidence is far too great to realistically say otherwise. I hope this maybe helped in some way. Hope you have a nice day!\n\nEdit: spelling, re-wordings.', "if you're that concerned, get a gun and shoot every c02 molecule you see. every bit helps.", 'Probably we will be fine for at least 40-50 years.', 'We absolutely can, it will take an massive industrial undertaking to make it work.\n\nFor example, Brian von Herzen is a researcher who with his group is developing floating seaweed deployment platforms that can grow in the open ocean. With a few simple technologies, seaweed can be grown, plankton can be stimulated, and I believe personally that it could also work as a floating habitat for fish. 
It would be one of the many ways to heal our planet by in this case removing lots of carbon from the atmosphere. The seaweed could be harvested, and the excess could be put back into the ocean where it would sink about 1 km below the waterline, thereby taking carbon out of the atmospheric carbon cycle for 150-1000 years. \n\nIt would have to be done on a massive scale, but it avoids multiple problems of growing things on land. \n\nThere are many possibilities, they all take work. Its up to humanity if it wants to do the work or not.', 'Yes because they are going to experience the worst of the climate effects. They have no choice but to adapt. The excuses are running out for the deniers as the evidence of global warming becomes more common each year.', 'The best thing Gen Z can do to stop climate change is to not have any kids.', 'No, when we’re «in charge» it’s too late. And do not underestimate how much people change as they age. A lot of people will have earned money and inherited wealth, so now their priorities has changed.', 'No. Carbon in the ocean has locked in temperature increases for at least 1000 years. All we can do is hope to adapt.', "while they can certainly make changes, with the release of methane starting in the north, the beginning of the end is upon us. electric vehicles, end of fossil fuels etc. won't stop the methane releases.", 'I am a biology graduate and I am pretty worried. What is your plan', 'Industrial design engineer here. Tell me your plan', '[this!!](https://giphy.com/gifs/crossover-noah-moses-1lkF5OJeezodO)', 'Computer science student here. Tell me if I can help', 'Ref:\n\nAbrupt CO2 release to the atmosphere under glacial and early interglacial climate conditions\n\nhttps://science.sciencemag.org/content/369/6506/1000.abstract', '> Carbon dioxide stays in our atmosphere for 300 to potentially thousands of years. Methane doesn’t: Approximately 95% of it breaks down in the atmosphere in about 10 years, with a small portion eventually converting to carbon dioxide.\n\nThis ignores the problem entirely and paints it as if methane doesn’t contribute much at all, which is actually what they explicitly state in the very next paragraph. It doesn’t matter that methane lasts roughly a decade, because it’s still there for a decade and is being continuously replenished by cows. You wouldn’t say that hair can’t be long because all the hair follicles end up falling out, would you?\n\n> ...the amount of food produced per cow varies immensely by country and that affects global cattle emissions. Less productive cows mean you need more cows for the same amount of food produced, hence higher emissions per pound of food... Cattle only contribute to 4% of U.S. total emissions, much less than overall global statistics.\n\nThis is highly misleading. The most efficient method of livestock production by far is animal farming. Despite this, it’s getting more a more pushback for ethical and public health concerns (as it should). This article is advocating for grazing, which would end up with more emissions for the same production.\nThe US statistic doesn’t paint an accurate picture either. With this, I would assume cattle don’t contribute much to emissions at all, but that’s not the case. The US is one of the most developed countries and outputs incredibly large amounts, well beyond what it ought to. On top of that, agriculture makes up 10% of the nation’s emissions according to the EPA. 
Thus, if the 4% figure is to be trusted, 40% of that sector is solely made up by cattle.\n\n\n\nThe rest of the article seems to be a mixed bag of truths and half truths. The bias is very clear and I’m not particularly interested in what was said.', "Lowering methane levels does not have 'A strong cooling effect', or indeed, any cooling effect at all - it simply has less of a warming effect - like turning your gas ring down low, heats your beans slower!", 'Parts of Florida have been leveled many times. They just rebuild with their government subsidized home owner insurance every year. \n\nStronger buildings and a seawall would be a long term solution. Buy that cost more in the short term and thats all people look at.', 'Shit holes of America', 'Short answer: No.\n\n\n\n\nLong answer: Noooooo, but everything helps\n\n\n\n^Like ^being ^vegan', 'Cycling to work is way better than an electric car, but even if everybody cycled, that is still not enough. \n\nEvery time we buy something, lots of people used energy to make it. \n\nEvery time my countries military invade somewhere, or even practice invading somewhere, they burn a lot of fuel. \n\nCement production produces a lot of CO2.\n\nCattle farmers chop down forests to make room for more cows.\n\nEtc', 'Electric vehicles would not make a significant impact because not only will trucking be a major part of transportation emissions (electric trucks are a lot farther out technologically than cars), but that also just puts a greater reliance on electricity rather than gas. While that is more efficient, you’ll never get even close to net 0 emissions like that.\n\nPublic transportation and urban planning should be the primary focus imo', "Don't be guilted into believing it's your fault. It's big companies doing 70% of it. Everything helps, but we should focus on regulating those corporations.", 'https://www.youtube.com/watch?v=26qzmw_xG58', "Ev helps. I drive everyday. The ev helps on the transportation side.\n\nI generate electricity from solar panels. We've had our panels for 4 years now and it's reduced carbon to the equivalent of planting 225 trees so far. I don't have a backyard that large. That helps on the electricity generation front.\n\nI try to eat less red meat. That helps on the ag front. \n\nI try to support organizations like Ikea and ms and AAPL and Ecosia which have pledged to be carbon neutral or plant trees in ecosia's case so that helps on the industry front. \n\nNo silver bullet but everything helps your fellow human.", 'You are leaving out natural emissions from forests, farms and oceans. These contribute a great deal of CO2.', "No, electricity still requires power plants which are fueled by things like fossil fuels. What's more, batteries contain rare earth metals that require extraction (usually fueled by fossil fuels) and often leave lots of toxic waste (not global warming but still environmentally damaging). More cars means more roads, which requires construction, etc.\n\nIf everyone switched to bicycles on the other hand...", 'No one really knows, but for the record, climate scientists long ago were worried that their genuine findings were too "doom and gloom" for people to take seriously, so they watered their message down. Called it "Global Warming" instead of "Runaway Greenhouse Effect" because it sounded less scary. Then when people called them alarmist anyway they changed it to "Climate Change." 
I don\'t think their strategy has worked.\n\nBut to at least give you something valuable, NASA and the IPCC are probably the most up-to-date resources for global warming data. If it seems "doom and gloom," it\'s because it is doom and gloom, not because of some spin.', 'dont take stock of specific predictions, its pretty impossible to actually know what we will see other than maybe carbon ppm and temperature', "Not all carbon footprints are the same. Leaving aside the ethical implications (and there are major ethical implications -- see China's former one child policy for how this works in practice) it's worth considering where emissions actually come from. I know you said only effecting certain countries, so let's take a look at: \n\nChina = 10.06GT \nThe US = 5.41GT\nIndia = 2.65GT\n(GT = Metric gigatons)\n\nThe average number of children for these countries is:\nChina: 1.6 (Families are currently encouraged to have 2)\nThe US: 1.9\nIndia: 2.27\n\nAll these numbers to say: in the countries where there are high emissions people already aren't having as many children. The emissions (and wealth) are not spread around evenly within these populations, so limiting the population wouldn't be enough to reduce the emissions.", 'Half of the countries in the world already average less than 2 children per woman, and the birth rates in other countries are going down. Many demographers believe that there will be an average of less than 2 children per woman globally by the end of this century. It could happen as early as the year 2060.', 'The problem is more that rich countries pollute 5 or 6 times per person more than poor countries. You are penalizing the folks who pollute only a tiny bit.', "I don't remember such a policy reducing the emissions in China.", 'Why only per woman?', '[deleted]', 'This deprives one of the most fundamental human rights and essentially grants a pass to the rich western nations which have already benefited from their polluting hay day while punishing the poor countries. There’s no way in hell they could even afford to handle a declining population that would come from 2 per woman.', 'That sounds very sexist. It is only fair if you make it 2 pr. person.', "I'm ok with that. I only had one kid. I mean, I don't think I'd want to bring any more into this world given how things are going anyway.", 'How about: A maximum of one biological child. If somebody wants more children they have to adopt. There are countless of children waiting to be adopted anyway.', "I am 100% for a maximum of 2 billion humans on earth at any given point in time, but it is a bit harder to sell someone the idea that they shouldn't exist. Lol", 'Great in theory. Many European countries have a declining birth rate because they have the wealth to do it. Education, contraception, medical support, legal support, financial support, etc. Bit different in countries without any of that. Imagine if Italians were just told not to have sex anymore to limit the risk of getting pregnant. Can you imagine Joey from friends keeping it in his pants?', 'Max one children per couple', "If you haven't already seen it, this is worth watching: [https://www.gapminder.org/videos/dont-panic-the-facts-about-population/](https://www.gapminder.org/videos/dont-panic-the-facts-about-population/) \n\nIn it, Hans Rosling explains that as a whole, we have reached peak child, meaning the number of children in the world has stopped increasing.\n\nWe're currently at almost 8 billion people. 
Because we are living longer, world population is expected to peak at about 11 billion around the year 2100. By then, it's projected that there will be an average of about 2 children per woman; currently, it's about 2.5 children per woman.\n\nThere's no need to mandate what's already happening naturally, and it's certainly unethical to make the mandate in other countries that don't have good access to education, health care, and birth control.\n\nI suppose if every country allowed a maximum of 1 child per woman, that would keep our population at par for now. Good luck with that.\n\nAnd as others have pointed out, more children per women doesn't necessarily correlate to higher emissions.", 'Two is one too many.', 'Ref:\n\nSea-ice-free Arctic during the Last Interglacial supports fast future loss\n\nhttps://www.nature.com/articles/s41558-020-0865-2', '1) It\'s important to stress that this article conflates loss of Arctic sea-ice with loss of Arctic **summer** sea-ice. "...complete loss of Arctic sea-ice..." means something entirely different from what is stated in the actual study, "...a fast retreat of future Arctic summer sea ice."\n\n2) The term "free" in regards to Arctic sea-ice doesn\'t actually mean what people assume it to mean. When discussing Arctic sea-ice "free" actually means an ice-extent of less than one million square kilometers \n\nThe Intergovernmental Panel on Climate Change (IPCC) concluded it was likely that the Arctic would be reliably ice-free in September before 2050, assuming high future greenhouse-gas emissions (where ‘reliably ice-free’ means five consecutive years with less than a million square kilometres of sea ice). Individual years will be ice-free sometime earlier – in the 2020s, 2030s or 2040s – depending on both future greenhouse gas emissions and the natural erratic fluctuations.^[1](https://nerc.ukri.org/latest/publications/planetearth/aut15-ice-free/)', '2020 just keeps getting worse huh...', '!RemindMe 2035', "Won't get better till we Dump Trump!", "Lol, we can't even make Americans agree to put on a mask. This world is doomed.", 'You can see this happening in real time on [earth.nullschool.net](https://earth.nullschool.net) \n\nPut on Ocean overlay, then select SSTA (Sea Surface Temp Anomaly)', 'What? Summary please', "Nobody has overlooked Nuclear. It's probably useful in some contexts, but it's also incredibly capital intensive, generated political opposition everywhere it goes, and has the potential to contribute to proliferation of radioactive material into places we dont want it.", "Impossible to read this block of words. From what I understood you're talking about nuclear energy. \n\nYou're not the only person in the world that knows about it. Sorry to disappoint you...", 'Leta go for Dyson sphere', 'When ice thats floating melts, the water level isnt affected, but if ice that currently on land melts, watet levels will rise (and theres a lot of ice in places like greenland). Also, water has its highest density at 4 degrees Celsius, after that it expands, so if the sea gets above 4 degrees, thermal expansaion will cause further sea level rise.', 'Yep that’s the main contributor. Also thermal expansion of the oceans from warmer conditions plays a role.', 'I believe thermal expansion is the main factor.', 'It comes from glaciers on Land, melts and runs into the ocean', 'Think of it this way: what happens when you boil a pot of water? Molecules in the liquid get excited. As a result, liquid expands.', "Water expands when it's hotter. 
Doesn't have much to do with landlocked ice, but still true.", 'I read somewhere water levels near the north pole will drop when the ice melts.\n\nThe explanation is that ice attracts water, in a bowl of water with an icecube, water will be higher around the ice cube.\n\n Is this true? And how big is this effect?', "How do you get that? It's up to 3°C in several places, nowhere is it below normal temps. \n\nIf it gets cooler in some areas over a year or two, bear in mind the polar vortex has shifted more towards Greenland now that the North Pole is melted.\n\nI hope you're not trying to imply the world's biosphere isn't gaining heat, because that would be disengenous at best.", 'Yes', "Regardless of how long methane remains in the atmosphere, we know two things: 1) Methane is a much more effective greenhouse gas than carbon 2) Methane concentration in the atmosphere is rising rapidly. We can therefore conclude that Methane is a significant greenhouse gas even if it doesn't stay in the atmosphere very long.", "Methane DOES cause warming for hundreds of years, because you don't just emit one burst of methane and then never do it again. This is just propaganda, and the guy posting it is cringy as hell.", 'They are cutting rainforest killing its inhabitants to provide pasture for those methane emitters. \nIf this is not enough: it takes much more land to grow animal calories vs same amount of plant calories.', 'Anyone else notice the “opinion” in the title (of the screenshot) and not even watch it?', "I think that this is a very useful video. It explains that methane from farming is part of a natural cycle, only stays in the atmosphere for 10 years, and is eventually converted to CO2 and reabsorbed by plants before being eaten by a new generation of cows and sheep. Methane concentration is higher than ever and is warming the planet. This is a bad thing, but it is only a TEMPORARY bad thing, because changes in farming and eating habits could make it all go away in 10 years. \n\nUnfortunately the other kind of global warming which comes from using fossil fuels to heat homes, generate electricity, fuel cars and make cement will.. last... 1000.... years!!!! It's a permanent problem with no cheap or simple solution!!! \n\nTherefore changes to fossil fuel use need to happen immediately, because the CO2 we all emit from using these fuels will still be here in the year 3000. However, if we all carry on eating meat until 2040, then go vegan as the effects of climate change get more obvious, the methane will all have vanished by 2050 and the earth will cool down a bit. \n\nSo when people tell me that the best thing I can do for the planet is to stop eating meat, they are misguided because they are prioritising a temporary problem over a permanant problem.", "Eastern Siberian Sea shelf. It's only really started. That dark red spot will slowly grow.", "Antarctica would be a horrible place to farm solar energy because of the angle at which the sun will be is always low, and there's 4 months of complete night. Large variations in power output are not good for a grid. How would you even build or maintain a solar farm there? That's not even the hardest part though, because you'd have transport the energy. The shortest point between Antarctica and Chile is 1000km. South Africa is 4000km away, and both Australia and New Zealand are 2600km away. That's not feasible. \n\n\nThis also completely ignores the environmental and health disasters that would result. 
Even ignoring the problems of ozone depletion, a solar farm could be devastation to local wildlife if you just plop it down without consideration to the effects.", 'Fuck', 'Double fuck', 'cool. didnt the IPCC have this in the ar5 as ‘low confidence’ by 2100? cool.', 'Yes, yes we should! I am very curious if thorium reactors could really be made to work.', 'Nuclear power plants are fairly safe compared to the alternatives. Think of all the environmental damage and human deaths from e.g. the oil industry or the coal industry. Unfortunately building a nuclear plant is very very expensive. Many of the originals were funded by governments to make nukes. To build modern ones for commercial profit is not easy, and the electricity they produce is more expensive than wind turbines. See this [https://en.wikipedia.org/wiki/Areva](https://en.wikipedia.org/wiki/Areva) and this [https://en.wikipedia.org/wiki/Hinkley\\_Point\\_C\\_nuclear\\_power\\_station](https://en.wikipedia.org/wiki/Hinkley_Point_C_nuclear_power_station) for more details', 'Yes! It’s impossible for a country to rely on renewables entirely (without massive battery banks which all use lots of finite resources). Countries should base their power off what renewables they can get the most out of for their geography then have nuclear to top it up in case of a dip in the supply or spike in demand.\n\nIn the Netflix doc on bill gates it details the evolution of the nuclear tech itself, us now being able to reuse the waste matter and other amazing developments. But they spoke a lot about how the huge problem for nuclear is PR. \n\nYou think nuclear, you think bomb, you think meltdown. One thing governments are good at is propaganda, it would be in everyone’s interest to shift the opinion on nuclear to being viable and beneficial, that is something that really needs to be tackled by governments in the world but they’re mostly all still reaping the rewards from FF so I doubt they will change until that goes away first. :/', 'No because Fukushima.', 'Nothing about that "rebutts" overpopulation. They\'re separate, but connected issues.', 'Must needed area to focus.', '> The doubling in CO2 per capita between 1945 and 1980 is US, Europe, Japan, Soviet bloc. In each case energy intensity, coal, oil are the keys not population growth which is mainly in what we then called “3rd world”.\n\nhttps://twitter.com/adam_tooze/status/1283392242225418240', '[Global Warming & Climate Change Myths](https://www.skepticalscience.com/argument.php)', "Poe's Law is strong with this one", 'Video Link [https://youtu.be/-VDoBSH4-C8](https://youtu.be/-VDoBSH4-C8)', 'commercial air travel and relying on planes in the global supply chains, one of our biggest mistakes', 'Did they ever start taxing airplane fuel for international flights?', 'Video Link\nhttps://youtu.be/-14m7tKeOo4', "Going vegan is the least we can do. It's the biggest difference to reduce emissions, pollution, deforestation, water use, species extinctions and a whole lot of suffering of innocent animals. Its something we can do with out waiting for politicians to get the memo.\ufeff https://www.theguardian.com/environment/2018/may/31/avoiding-meat-and-dairy-is-single-biggest-way-to-reduce-your-impact-on-earth", 'Sorry but this video is not great. You do have some valid points but you don\'t make good arguments for them.\n\nSecond 35 onward:"One of the most discouraging things when talking about the environmental issue is the lack of immediate interest from the public. 
Various public poles about climate change give a strong indication that more than half of the world think it as a problem for the future."\n\nYou show three polls:\n\n1) No source, likely just about New Zealand parties. Not the public. Not the world.\n\n69% think it is an urgent problem, directly contradicting your statement.\n\n​\n\n2) No source, directly contradicts your statement.\n\n11% see it as a problem for the future in this graph in 2019. No information on the country or anything.\n\n​\n\n3) 35% of people think the threat of climate change is generally exaggerated, the rest believes it is correct or underestimated. That is not exactly the topic but the data again contradicts the statement.\n\nI\'m not going through the entire video but this just jumped out to me and I thought I\'d let you know.', 'And people like my dad and similar baby boomers: "I don\'t give a shit"\n\nMy dad has literally said that to me with a straight face, it\'s sad', 'Kinda scary! Literally just last night I had a nightmare that the Earths temperature was going to reach 73 degrees Celsius. My arms and my family’s skin all began peel and blister as the sun rose. It seemed so real! Then I see this article...', "There could still be a chance to limit the extent of the devastation to various non-extinction event levels, and we need to continue to try for that as much as possible. However, after 25 years of environmental activism I've concluded it's become nearly certain humanity will suffer unprecedented turmoil and hardship soon, and it will likely only get worse and worse with each coming decade. \n\nWhat I've chosen to do is not to fall into despair but to shift much of my energy to prepare to help others in this coming crisis. I've moved to a place that won't be hit as hard and I'm establishing sustainable living practices and various resources to help climate refugees and community members when it gets bad.\n\nPerhaps most importantly I'm studying meditation and psychology practices that could help manage despair and hopelessness in the face of this. In short we need people who will help others maintain stable outer lives and stable inner lives in the face of chaos and seeming hopelessness.\n\nThere's no telling whats coming but i hope and work for the best and prepare for the worst.", "Chemical engineer here. Yes, there is hope, but it depends on politics. We have technology ready to fix pretty much all of it, we just need regulation to kickstart the economies of scale.\n\nNobody has any way of telling how much of society will collapse, if at all. 2030 is a very pessimistic view, they are just speculating and full of shit imo. People are very powerful and can endure/persevere IF we decide to. We have to get serious, unite. I think of it like WWII. The US fucking *mobilized* like never before in history. But it took a shocking event to do so. Now that we have increasingly shocking events like crazy storms, fires, videos of rivers flowing off of greenland, acres of dead coral, etc., I think it's starting to feel real for more people all the time. There's hope. And it's not binary. Even if it will get bad, anything we do will help it from getting even worse.", '>by 2030’s, There will be a collapse of civilization due to climate change. \n\nWhere have you read that?', "It's fair to note that it doesn't look good.\n\nCivilization won't collapse overall. Parts of the world will (and have) collapse. But HSBC and JP Morgan Chase will keep maintaining their accounts. 
And most cities will still keep collecting the rubbish.", 'Yes. There is always hope.\nIf you want to see some action, come over to /r/climateactionplan', "I dont know much about this subject, but from the things I've heard/read, and my relatively short experience observing how humans cant really seem to agree on anything and that humans normally dont react until until something bad happens, I feel that space exploration may be our only hope in a couple of decades. I think colonizing Mars should be a bigger priority right now, because that job will heavily rely on earth being around for a long time.", "There isn't going to be civilizational collapse due to warming. There are going to be problems, though.\n\nLet's consider what the effects are going to be (the real observable effects instead of hysteria):\n\nHeavier, more concentrated rains.\n\nExpansion of tornado alley.\n\nCoastal flooding.\n\nThe first one means agriculture is going to have to adapt to crops potentially been rained out.\n\nThe middle one means stronger basements and storm doors on most buildings in the area.\n\nThe third one is the tough one: Think New Orleans scale disasters on coastal cities.", 'What a beautiful place to live. \n\nNot trying to make light of the horrible situation.', "It's happening everywhere for the past 50 years.", '>\tglobal surface temperatures tied with a 12-month high from August 2015-September 2016. \n\n>\tWith records from half the year now available, it is likely that 2020 will be the warmest year on record. (Calc by @hausfath) [sic]\n\nhttps://twitter.com/carbonbrief/status/1280396592760205312?s=21', "I'm really excited for all the extra CO2 and methane that this will release /s", 'that is so concerning jfc', 'this is the most depressing on land effect of global warming ive seen by far', 'Kinda ironic, I see humanity using fossil fuels as a fire spreading through a forest. It was slow at first, but it keeps accelerating.', 'It’s sadly looking like a golf course', 'Siberia is over 5 million square miles. By comparison, USA is 3.8 million square miles.\n\n.\n\nImagine a continent bigger than the USA all of a sudden giving off the most potent and damaging greenhouse gases (methane) than the entire USA.\n\n.\n\nThat’s what this is like.', 'The simple answer is that fusion is always 30 years in the future and it has been for decades. \n\nTo add to that, solar, wind and cheaper storage is fast making nuclear uneconomical.', 'It may be better. Maybe. I think though you should read [this](https://grist.org/climate-energy/fusion-wont-save-us-from-climate-change/) first. Lemme know what you think.', 'Same reason we ain\'t talking about dyson spheres...\n\nIf the world was Charlie Brown, Fusion research would be played by Lucy. In the last 70 years there has not been a decade when some group of scientists wouldn\'t stand up an claim that fusion is less than dozen years away. In Europe we are building the ITER and got the Wendelstain 7x "stellarator" breaking records and performing above expectations. Even if both these projects end up perfectly succesfully, we are still decades from commercially viable systems.\n\n\nLuckilly, nuclear energy potential is far from exhausted. Since the end of Cold war nuclear industry is going through rapid development because it can finally do what it was ment to do - produce energy - not bombs. 
The examples are molten salt reactors popping out all around the world (China has at least 3), Canadian Terrestrial Energy is already negotiating a commercial setup for their IMSR. Another great candidate is american NUSCALE - who are building small high pressure reactors without "user replaceable" fuel - when the fuel runs out, you just send the reactor back.. \n\nAll these are at least several orders of magnitude safer than anything currently in commercial use and unlike fusion - they already work.....\n(edit: my english)', "Aren't they building iter", 'Source?', 'Where is your science?', 'Link to study can be found [here](https://www.nature.com/articles/s41467-020-16970-7#citeas). \n\nReference:\n\n>\tPerkins-Kirkpatrick, S.E., Lewis, S.C. Increasing trends in regional heatwaves. Nat Commun 11, 3357 (2020). https://doi.org/10.1038/s41467-020-16970-7', '#arewegreatyet', 'Ultra-high temperatures were recorded before we started filling the atmosphere with various gases. Evidence suggests that ~100 million years ago the half a year frozen wasteland I live in was a tropical rainforest. \n\nWe need to quit worrying about climate change and start fixing the problems destroying our world. Reducing the amount of plastic in the ocean is a good place to start.', 'Refrigeration is just moving heat from one place to another.\n\nsorry\n\nWell, unless you can figure out a way to vent it to space :)\n\nEdit: hmmm vent is the wrong word; I think transmit would be more apropos.\n\nMaybe you can figure out a way to transmit the heat out of our atmosphere through conduction like possibly hang a long copper/aluminum/whatever rod "vertically" on the dark side of earth. :)\n\nMy real idea for fixing global warming unfortunately has potential military applications soooo Imma wait to see just how stupid the \'Murikkkun voting public truly is.\n\nPLEASE somebody demand these erections be closely monitored by the U.N.', '​\n\nHere is the link: \n\n[https://www.theguardian.com/business/2020/jun/29/uk-ministers-send-mixed-messages-over-climate-commitments-says-fund-manager](https://www.theguardian.com/business/2020/jun/29/uk-ministers-send-mixed-messages-over-climate-commitments-says-fund-manager)', 'I can’t tell if this is negative', 'lol no shit', 'It probably will... People and industries will see the changes that happened during those months and will maybe lower their CO2 emissions.', 'No quite the opposite, global warming threatens to make some places uninhabitable due to extreme heat.:\nhttps://en.m.wikipedia.org/wiki/2019_heat_wave_in_India_and_Pakistan', 'I will combine two of my comments in another thread to answer this. \n\n>\tIt is 30 times more likely to occur now than before the industrial revolution because of the higher concentration of carbon dioxide (a greenhouse gas) in the atmosphere. As greenhouse gas concentrations increase **heatwaves of similar intensity are projected to become even more frequent,** perhaps occurring as regularly as every other year by the 2050s. The Earth’s surface temperature has risen by 1 °C since the pre-industrial period (1850-1900) and UK temperatures have risen by a similar amount. 
[sic]\n\n~ [Met Office](https://www.metoffice.gov.uk/weather/learn-about/weather/types-of-weather/temperature/heatwave)\n\n>\tGlobal warming is increasing the frequency, duration and intensity of heatwaves.\n\n>>\tIf we contine to rely heavily on fossil fuels, **extreme** heatwaves will become the norm across most of the world by the late 21^st century.^^1 [sic]\n\n~ [skepticalscience](https://www.skepticalscience.com/heatwaves-past-global-warming-climate-change-basic.htm)\n\n***\n\n^^1 Dim Coumou^1 and Alexander Robinson^1,2,3\nPublished 14 August 2013 • 2013 IOP Publishing Ltd\nEnvironmental Research Letters, Volume 8, Number 3', 'it varies. its global warming because totally it will go up, though some places will be colder', 'Yesnt and no but maybe', '[https://www.youtube.com/watch?v=Z4bSxb5THm4](https://www.youtube.com/watch?v=Z4bSxb5THm4)', 'Some cold places will get colder and some will get hotter. The same goes for hot places. I\'m purposefully not using the term "countries" here because climate change doesn\'t respect borders, so some countries, like Russia, will experience all of the above.\n\nJust off the top of my head it will depend on altitude, proximity to the ocean, warm oceanic currents, local landscape... That\'s why it\'s important to be skeptical of Fox News or Steven Crowder presenting a bombshell report about one iceberg gaining mass in some recent time period as a proof that global warming is a hoax.', 'Nice video haha', "Uhh what, I'm sorry but this is horrifying", '"Temperatures reached +38°C within the Arctic Circle on Saturday, 17°C hotter than normal for 20 June. \\#GlobalHeating is accelerating, and some parts of the world are heating a lot faster than others. \n\nThe \\#RaceToZero emissions is a race for survival. \n\nDataviz via @ScottDuncanWX " \n \n>posted by @UNFCCC \n ___ \n \nmedia in tweet: https://video.twimg.com/ext_tw_video/1274949754997350406/pu/pl/XlCZBoWfkEaJX3x4.m3u8?tag=10', 'Kill off About 12 countries', 'https://www.nature.com/articles/s41467-020-16941-y', 'this guy charles down the block is. all he does is eat beans and be farting all the time. total jerk if you ask me.', 'Everyone is responsable', 'Rusty Cage backround music', 'Double negatives are not the unworst language to use.\n\n*Based on the current resource consumption rates and best estimate of technological rate growth our study shows that we have very low probability, less than 10% in most optimistic estimate, to survive without facing a catastrophic collapse.*', 'The hell does that mean', 'Nero porcupine', 'So no hope? Well in that case just start rioting, kill your self, no point in doing anything', 'I have a feeling that some of the members are beings pessimistic because they actually want global warming to kill us', "I'm interested in the data on how much less greenhouse gas emissions were.\nThe problem is that this was just 3 months of lockdown in Europe and the USA, so it might not be visible in the weather patterns.", "First thing that would have to happen is the technique being advocated has to be proven to capture carbon efficiently, store it indefinitely, and have more than modest carbon offsets (more carbon must be captured than produced).\n\nSo far, there aren't many methods that can achieve these goals and they can't be scaled up because the geologic conditions and existing infrastructure only exists in limited and remote areas. 
You can only transport captured carbon so far before the carbon gasses emitted will exceed the carbon captured.", 'No, because that would be like saying "Alright, we're going to sacrifice ourselves because the government can't stand against the billionaires". Effectively giving them a free pass. Never should the working class be the ones paying for this.', "Depends on how the system works. If its carbon capture to energy consumption ratio is right, I would give it a try.\n\nDisclaimer:\n-I am not an expert, these are just some thoughts that came to my mind-\n\nBut it also needs to be way more efficient than trees, forests etc. In conclusion to that I guess that it has to be either very big or suck air in like a vacuum cleaner, which would cause noise and more energy loss. \nI think another problem would be that you need a lot of environmental data to find a place for this thing in order to get the best carbon capture ratio possible. \nThere are also other bad greenhouse gasses that cause much more harm to the environment than carbon does. \nAnd in the end: What would be the impact of it if, let's say, we get the money, funding etc. to get a million of these?", 'I d k', "You can't really take this seriously, do you? You really believe that maybe one good side of an excess of carbon surpasses the hundreds of downsides? Not to mention the author of this 'scientific' paper is an industrial consultant and climate change denier.", 'Before commenting here, *please* read the whole paper.', 'Try /r/datasets.', 'Have you really been far even as decided to use even go want to do look more like?', 'No.', 'That literally made no sense but\nPositive news????', 'Idk much about science but logically the further we are from the sun the colder it would be. But if we keep up the CO2 emissions then the greenhouse effect wouldn't disappear. So I'm not sure if it would change anything in the long run.', "Consider the mass of the planet and how much force would be required to move it. That is your baseline energy requirement.\n\nAlso consider that if you were somehow able to change the orbital trajectory, it would likely just change the eccentricity of the orbit. You'd have to apply the exact amount of force at the exact moment several times throughout the year to extend the orbit. This would also extend the orbital period (longer year) and may affect rotation as well (increase or decrease daily periods). \n\n\ntl;dr no, it is not feasible nor would it be a good method.", "Go play KSP for a little while. It's $40 at the humble store right now, and it goes on sale fairly regularly.\n\nOnce you've got an idea of how orbits work, and how orbital mechanics work, you'll realize just how much energy it takes to change something's orbit... and that's for objects with masses in the metric ton range.\n\nThe earth has a mass of approx 5,972,200,000,000,000,000,000 metric tons. \n\nThe biggest reason we have a global warming issue is energy consumption. 
Gasoline, coal, natural gas, fossil fuels in general, they are why we are having problems with climate change.\n\nThe energy involved in implementing your idea is simply astronomical, and any benefits from the solution will be FAR outweighed by the side effects of that energy use.", 'It would not affect global warming, but it would most likely change the times of day for each continent and country. This is a simple answer, but it would really not change anything. If humans don't stop burning fossil fuels in the next 5-10 years we are all doomed', 
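The back-of-the-envelope numbers in the "move the Earth" replies above are easy to make concrete. Below is a minimal sketch, not something any commenter posted: it assumes a single Hohmann-style burn at 1 AU that raises the semi-major axis of Earth's orbit by a hypothetical 1%, and uses standard values for the Sun's gravitational parameter and Earth's mass; everything else is illustrative.

```python
import math

MU_SUN = 1.32712e20   # m^3/s^2, gravitational parameter of the Sun (standard value)
M_EARTH = 5.972e24    # kg, mass of the Earth (standard value)
AU = 1.496e11         # m, current semi-major axis of Earth's orbit

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))

# Burn at r = 1 AU onto a transfer orbit whose semi-major axis is 1% larger.
a_new = 1.01 * AU
v_now = vis_viva(AU, AU)                    # current orbital speed, ~29.8 km/s
v_burn = vis_viva(AU, (AU + a_new) / 2.0)   # speed required at the burn point
delta_v = v_burn - v_now

energy = 0.5 * M_EARTH * (v_burn**2 - v_now**2)
print(f"delta-v ~ {delta_v:.0f} m/s, energy ~ {energy:.1e} J")
# ~74 m/s and ~1e31 J: roughly ten billion times annual world primary energy
# use (~6e20 J), which is exactly the commenters' point.
```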
'Who's "y'all" and where did you read that ~~people~~ scientists want global warming to end us?\n\nAnyway, here are some solutions I've found that you may be interested in:\n\n* [NatGeo](https://www.nationalgeographic.com/environment/global-warming/global-warming-solutions/)\n\n* [NASA](https://climate.nasa.gov/faq/16/is-it-too-late-to-prevent-climate-change/)\n\n* [Scientific American](https://www.scientificamerican.com/article/10-solutions-for-climate-change/)\n\n* [BBC](https://www.bbc.com/future/article/20181102-what-can-i-do-about-climate-change) (one of the more lay-person-friendly sites I've seen)\n\nEDIT: clarification.', 'Globox', 'When I say y'all I mean the people who are like "oh my Jesus, in like 3 days or 5 years global warming will kill us", cough cough Greta Thunberg', 'And what does that mean', 'This is a forum full of people that have already woken up.', "Watching Americans' response to Coronavirus helped me to understand this as well. \n\nIf a pandemic can't convince us to save lives, then a much longer-timeframed catastrophe isn't going to make us do anything either. \n\nWe're fucked. I feel sad for anyone under the age of 20 at this point. I'm glad I don't have kids. I'll likely die before shit gets too bad. Thank god.", "Chemical engineer here. It's more a willpower problem than a technical one at this point. \n\nIf we decide to buckle down and actually get started, we can do pretty much anything we want. I'm less qualified to talk about people, but I think people are too reactive instead of proactive. We'll probably need to see a lot more negative effects before enough people get serious.", 'Hey maybe one of those asteroids will do the trick?', 'u/crow-of-darkness\n\nHang out on /r/collapse \n\nThat sounds like your jam', 'The thing is that if the virus stunts global warming, we die from the virus, but if the virus stops, we die from warming', 'Nice', 'We are so fucked: the clathrate gun has fired and has turned into a methane gusher thousands of miles across.', '[Here](https://youtu.be/3sqdyEpklFU) is a video showcasing Global Warming from 1880 all the way up to 2019.', '"Anomalous warmth stretching from western Siberia to the central \#Arctic basin over the last month 🔥 " \n \n>posted by @ZLabe \n ___ \n \nmedia in tweet: https://i.imgur.com/M58PYRm.jpg', "I live in Siberia, this year's May was somewhere between the usual June and July in terms of temperatures. A handful of days nearly reached 30°C. Usually, it's not too uncommon to have some snowfall in May, with temperatures around 10-15 degrees. It's mad. Nothing like that has happened in the 30 years that I've lived here. [Here are the 10-year average day/night temperatures compared to the 2020 ones (grayed out, below the average values) for each day of May](https://i.imgur.com/kmvNHNc.png).\n\nThere were also 2 storms. Nothing like the hurricanes of the Caribbean, but you have to realize that this kind of weather used to be extremely uncommon around here and neither the infrastructure nor people are prepared for it. [Roofs go flying](https://youtu.be/y8FRXkGPZhA) when the wind picks up.\n\nIf this sort of thing becomes regular, I don't know how humanity's gonna fare.", 'Correction: CO2e levels have reached their highest level since the Miocene. The human species appeared after the end of the Miocene.', 'The fleet of container ships traveling between China and the US and Europe emits as much CO2 as all the autos in the US. Why not convert these ships to nuclear power?', 'Half the planet is on lockdown. This is proof we can do absolutely nothing to stop this now. Might as well accept it', 'The news is not completely true.\n\nKindly follow the [Link](https://www.thequint.com/news/webqoof/old-images-passed-off-to-show-massive-fire-in-uttarakhand-forests-fact-check)', 'Unfortunately, this has been true for years. For factual climate change info, it's better to follow /r/climate.', "I wouldn't rule out the possibility that these are paid operators with many alts", "Yup, have been for years. There's the same group of 5 or 6 users who are on climateskeptic and climatechange. The mods are from climateskeptic, etc. It's a garbage honeypot sub.", "It's run by climate skeptics, you say?\n\nThat's a nice way of putting it; I would even say incorrect.\n\nIt's run by climate denialists, whom I had the misfortune of conversing with back in the day. As the voice of science and reason, I am now of course banned on both of his (the main mod's) subs. He was also incredibly rude and resorted to personal attacks against me.\n\nThere are some decent people who know their stuff on r/climatechange, but not many. As a European with an atmospheric science background, I just gave up - many of them argue in bad faith, some seem to be loonies across the pond on that sub (no offense meant, I have many clever friends from there too, but the most opposition to climate change science is from the US). Thinking specifically about people linking Judith Curry, wattsupwiththat, Heartland Institute, and some other denialists' bs.\n\nI would say the best course of action would be to ignore them, instead of even giving them the attention they want, but because the sub name gives them so much exposure, it might be a decent idea to show how faulty their reasoning and science background is. Not sure how else to deal with biased mods, with horrible intentions, about the most important subject in the history of mankind.", '>" In his own words, he does not believe climate change poses a great danger to humanity's survival and future. "\n\nThat's considered denial now? Do you believe humans can live on Mars soon, and if so, why not also on a warmer Earth? \n\n\nEdit: Why downvote this? I'm on your side.', 'How do you get from skepticism to denial to fossil fuel funding and tinfoil hats in 10 seconds? I find the discussion most fruitful on r/climatechange and consider r/climate and r/climateskeptics to be quite similar echo chambers on different sides of the argument. If you only want to hear what you believe you can stick to your echo chambers - or you can expose yourself to conversation about the science. To be fair r/climate is fairly ok but the atmosphere is definitely less tolerant of debate. \n\nHow do you even define a skeptic? The way you do it here, it sounds like anyone with an optimistic outlook on the future. 
If only negativity is to be allowed I recommend r/collapse', "Hasn't the concentration of CO2 been the highest ever, every day, for a few decades now?", "> In April 2020 the average concentration of CO2 in the atmosphere was 416.21 parts per million (ppm), the highest since measurements began in Hawaii in 1958.\n\n> Furthermore, ice core records indicate that such levels have not been seen in the last 800,000 years.\n\nIt's time for politicians and voters to wake up and make global warming the number one priority for every government in every major country.", 'YouTuber Simon Clark has a video on the subject, and he said that likely this year will be warmer. Also, due to the damage caused to the economy, some countries are considering rolling back their environmental programs', "Yeah, it will be warmer. The climate isn't a switch we can collectively turn on or off. We still need to do more to reduce greenhouse gases.", 'Global warming is so yesterday, welcome to Grand Solar Minimum.\n\n[https://www.youtube.com/watch?v=FqcL1JGlA2I&feature=share](https://www.youtube.com/watch?v=FqcL1JGlA2I&feature=share)', 'Check out the [ProjectVesta video](https://youtu.be/X5m3an3f5S0)', 'Fossil fuel industry propaganda.', 'r/jokes is that way -->', "This is why you'll never enter the circle of knowledge, it's impossible for a square peg.", 'Where does the water come from to sustain the trees?', 'I like the formal suit as the tree-planting dress code.', 'Even during coronavirus, we have not reduced CO2 emissions below the threshold for the planet to heal. The planet is still warming at an increasing rate, even with our reduced emissions.', "I think what this unplanned global experiment shows is that improving the planet's condition doesn't lie *solely* in the hands of regular people. That's one part, but the bigger element is governments and corporations completely changing how they operate.", "It's like getting punched in the face less hard. The situation isn't really improved until you can get the person to stop hitting you altogether", 'So you're saying we're fucked', "The effects of COVID-19 are the only thing making this seem remotely attainable. I like to think it's a wake-up call but my concern is that all the people currently sheltering in place, etc. are just waiting to explode back to their normal activity levels which, in turn, will raise our emissions right back up. In fact, I've heard various rumblings of the YOLO mindset that make me fear us attaining even higher levels of emissions by way of expanded travelling, activities, etc.\n\nI SO wish we had leadership across the globe with a genuine interest in working to solve this crisis but that's not currently the case. We have some concerned leaders but far too many that are either indifferent or even in explicit defiance of our best interests here. Sigh.", 'No thanks. We are not a smart species. We will take the various catastrophes and then cut our emissions by more than 7.6%.\n\n-Humanity', 'https://grist.org/climate-energy/current-global-warming-is-just-part-of-a-natural-cycle/', 'Muay thai or jujitsu. Guns also work', 'I ask them what would be causing temperature to rise in recent decades. The temperature does naturally fluctuate, all climate researchers agree on that. 
But based on the measurements of the natural factors that can influence mean global temperature, it should not be increasing currently.\n\nFor example, solar activity can't explain the warming since 1950, because [solar inputs haven't been increasing](https://imgur.com/vf3jTHY) during that time. They've been generally steady or decreasing, while temperature has risen.\n\nSimilarly, [the recent trend in cosmic rays is the opposite of what it would need to be, to cause warming](https://imgur.com/llsqkSB).\n\nEven "skeptic" blogs like [wattsupwiththat](https://wattsupwiththat.com/2019/12/12/deep-solar-minimum-on-the-verge-of-an-historic-milestone/) have acknowledged these recent trends in solar activity and cosmic rays.\n\nCyclical variations in the orbit and tilt of the Earth relative to the Sun are known as the [Milankovitch cycles](https://en.wikipedia.org/wiki/Milankovitch_cycles), which are understood to be an important driver of the glacial/interglacial cycle. But the Milankovitch cycles can't explain the recent warming - they are currently in a phase that should be leading to a slight cooling trend, if anything. [Here's a graph that shows the Milankovitch forcing for the past 20,000 years](https://imgur.com/a/50Sotae). As you can see, it peaked around 10K years ago, corresponding to the warming that took us out of the last glacial period. It has been decreasing for thousands of years since then. (Source data available through [this page](https://biocycle.atmos.colostate.edu/shiny/Milankovitch/).)\n\nAnd so forth. If you take into account all of the measurements of natural factors, they can't explain the recent warming trend. But it is well explained when human-caused factors (such as the increase in CO2) are taken into account. [See here for a graph that illustrates this](https://www.gfdl.noaa.gov/wp-content/uploads/pix/model_development/climate_modeling/climatemodelingforcing3.jpg).', 
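For readers who want the arithmetic behind "human-caused factors such as the increase in CO2": a minimal sketch using the widely cited simplified forcing expression ΔF = 5.35·ln(C/C0) W/m² from Myhre et al. (1998). The 0.8 K per W/m² sensitivity below is an illustrative mid-range value, not a figure taken from this thread.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

forcing = co2_forcing(415.0)   # ~2020 concentration vs. pre-industrial baseline
sensitivity = 0.8              # K per (W/m^2); illustrative mid-range value
print(f"forcing ~ {forcing:.2f} W/m^2, "
      f"implied equilibrium warming ~ {forcing * sensitivity:.1f} K")
# forcing ~ 2.11 W/m^2, implied equilibrium warming ~ 1.7 K
```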
"First off, you'll never convince them, you can't reason someone out of a position they didn't reason themselves into.\n\nWe disagree with idiocy in the hopes it sways the undecided.\n\nMy advice, don't bother when it's just you two, be prepared to destroy them when there's an audience.\n\nFirst rule of global heating, CO2 holds in heat. We spew it. He can try to say CO2 doesn't trap heat, simply read up on it, he'll be wrong, you'll be right. Make them defend idiocy.", "Maybe the point is that capitalism won't save us. If you live in the U.S. capitalism - regardless of what Bernie Sanders and AOC espouse - is the state religion. After 9-11, Bush told people to go buy something, Obama bailed out the largest banks and Trump is cool with people dying as long as the economy looks rosy. Buying more shit and using it as an indulgence to paper over our unnecessary SUV or government subsidized meat will never reach where we need to be to reduce methane, CO2 and other greenhouse gases. We have too many people striving for the same standard of living. At some point all the systems break. We can't physically have 8 billion people on the planet eating a diet that requires 49 billion acres of land (we only have 4 billion arable acres), but the standard American diet requires the 49 figure. Buying shit, be it the right car, solar panels or windmill or whatever may feel good, but ultimately your best bets are to 1) Go vegan / plant based, 2) Not have children and 3) not fly internationally. Everything from COVID only added up to a tiny percentage of global warming emissions change. We are fu#$d.\nBefore you attack, I have solar panels, and am following numbers 1 and 2 above. I just think most of the Green Movement has been co-opted by capitalism and won't change anything. It's almost in their mantra, they never asked anyone to change. Sort your trash sure, but give up meat or not have a brood, never.", 'I have read several fact checks on this "documentary" and it's an evil hit piece against renewable energy.', "Capitalism won't save us. Buying solar panels or Teslas isn't going to do it. People need to make actual sacrifices. Give up meat, stop having kids, live on less.", 'What is your goal?\n\nIs it not having a climate catastrophe? Please tell me what bad thing you think is going to happen and where that bad thing will hit hardest.\n\nOr is it some sort of destruction of human civilization for the sake of it?', "It's unavoidable at this point - the only path forward is to actively remove greenhouse gases from the atmosphere and somehow sequester them. I foresee huge solar-powered machines that collect methane and CO2 from the air and use large amounts of energy to solidify them for burying, or perhaps blasting into space if that becomes affordable and environmentally-friendly one day.", 'At least the rich and their children will die with us plebs.', 'Failure by design. We need a complete restructuring.', 'My ass. Been eating a lot of hummus I had stocked in the fridge.', "10 billion isn't enough but still a good initiative", 'Are you paid by eyeball seconds?\n\nWhat a frustratingly inefficient way to display the data!', 'How far is this truthful, and based on what science? I understand that there is a possibility that we are going back to the ice age, but why does Russia have the hottest place on the planet? Methane lakes? Or just in the right geo location?', 'The environment is all fucked up because of corporate greed and unfettered capitalism. Sure, there has always been some kind of sense of reuse and recycling among the lower economic class that survived things like the Great Depression and WWII rationing, but this sounds like a grumpy old boomer railing against those dang disrespectful millennials and their pokemons and vidya games and that Swedish rascal Greta whatshername.\n\nTake ur heart meds, old man, and keep ur social distance.', 'The old lady seems to be forgetting that it was their generation that allowed things to become the way they are now in the first place. They are the ones that allowed capitalism to dictate our way of life.', "Society moves on and as it advances the amount that will be sacrificed will increase too. The only realistic way we can solve this is if we find ways to technologically *limit* or even possibly *extract* carbon entering or already in the air. The blame game isn't going to work anymore; we can only improve, and that can only happen if we come together and start solving this problem as a whole, not like children who argue over which TV show to watch until both shows are over.", 'Ha ha ha.... This is pretty embarrassing.', 'Let me ask you, where does that concrete come from? It comes from the land, so the net weight change of the land is zero. Besides, the Earth weighs roughly 6,000,000,000,000,000,000,000,000 kg, a change of 6,600,000,000 is not even a drop in the bucket. 
\n\nIt’s true that the earth is warmed, for all practical purposes, entirely by solar radiation, so if the temperature is going up or down, the sun is a reasonable place to seek the cause.\n\nTurns out it’s more complicated than one might think to detect and measure changes in the amount or type of sunshine reaching the earth. Detectors on the ground are susceptible to all kinds of interference from the atmosphere — after all, one cloud passing overhead can cause a shiver on an otherwise warm day, but not because the sun itself changed. The best way to detect changes in the output of the sun — versus changes in the radiation reaching the earth’s surface through clouds, smoke, dust, or pollution — is by taking readings from space.\n\nThis is a job for satellites. According to PMOD at the World Radiation Center there has been no increase in solar irradiance since at least 1978, when satellite observations began. This means that for the last 40+ years, while the temperature has been rising fastest, the sun has not changed.\n\nThere has been work done reconstructing the solar irradiance record over the last century, before satellites were available. According to the Max Planck Institute, where this work is being done, there has been no increase in solar irradiance since around 1940. This reconstruction does show an increase in the first part of the 20th century, which coincides with the warming from around 1900 until the 1940s. It’s not enough to explain all the warming from those years, but it is responsible for a large portion. See this chart of observed temperature, modeled temperature, and variations in the major forcings that contributed to 20th century climate. https://en.wikipedia.org/wiki/File:Climate_Change_Attribution.png\n\nRealClimate has a couple of detailed discussions on what we can conclude about solar forcing and how science reached those conclusions.\nhttp://www.realclimate.org/index.php/archives/2005/07/the-lure-of-solar-forcing/\nhttp://www.realclimate.org/index.php/archives/2005/08/did-the-sun-hit-record-highs-over-the-last-few-decades/', 'Co2', 'Can you give the source of this article ?', "The main problem isn't the warming per se, it's the rapid change. A rapid cooling would also be very bad. If I had to pick, I'd prefer warming over cooling.", 'And we already see that in effect...', 'Nobody likes smart people anyway.', 'Ah what...?', "People are on quarantine. That means there is very little cars, planes, ships moving around. That means less co2 emissions. 
Not to mention a lot of industry has stopped, so that's less pollution moving around etc.", 'I'm pretty sure it will NOT help, here is why\n\nConsidering:\n\n- all the carbon emission and pollution levels are currently falling\n\n- governments have agreed to reducing carbon emissions as an essential step to protecting our environment\n\n- investment in green energy and environmental policy was growing before the crisis\n\nI feel that the reduction in pollution during covid will allow governments to hit emission reduction targets and pollution reduction targets without addressing the core of the problem (it's a side effect of the quarantine, not because our economy is more "green").\nThat coupled with an economic crisis will shift priorities away from continuing to invest in renewable energies, sustainability and enforcing strict environmental policies, which has always been more prevalent in times when the economy is doing well.\nIt's understandable, people will care less about the future of the planet and long term sustainability if they suffer from an economic crisis that makes it harder for them to meet their immediate needs.', '90 percent of carbon emissions come from the richest ten percent of people, and it probably will be mostly the poorer people that will suffer. meaning corona killing poor people will do little to nothing to slow climate change. if it only killed billionaires on the other hand...', 'please by all means you be the first to volunteer.', "This analysis is a middle and upper class privilege. It's not the op-ed writers who will die, that duty goes to the impoverished who contributed the least to global warming. Even though I wouldn't recommend mass human extinction to solve global warming, if the population that died were the ones doing the polluting this line of reasoning would at least be coherent. But that is not the case and the sentiment only shows that smarmy columnists want the poor to die for them.", "That's just a temporary solution, people will reproduce again.", 'Thanos has joined the chat.', 'Air quality here in South Africa has already started to improve', "Prob doesn't account for extreme environmental rollback \n\nAnd myriad other extreme reactions from Power\n\nBut worth having the deep think", "We draw the line at advocating deaths. I'm not aware of any climate movement ever that advocated for deaths of people. Well, unless we count the Koch brothers as people... The whole point of anthropocentric climate movements is to prevent the death and destruction that catastrophic climate change would cause. \n\nAny and all claims of preventing overpopulation I've ever seen (not counting deranged lunatics on the internet) talked about family planning and birth control as ways of empowering people to not have children if they don't wish to or (in the more extreme cases) talked about things like the China one-child policy as a possible solution. \n\nI'm really just unaware of anyone saying it would be a good idea to just straight up murder a bunch of poor people in order to save the climate.", "It would be a huge win if the 30% are killed in the more developed countries. \n\nIf it kills mainly in poor countries such as the ones in Africa or East Asia, it won't change the current situation. \n\nMost of the CO2 emissions are produced to make goods for western countries, from what I know", 'There are too many people on Earth already, fact. 
They can decide not to reproduce like rabbits, or be chopped down by a huge epidemic, which is inevitable in a mono-species world. Alas, choosing not to reproduce too fast is contrary to the sense of life -passing on your genes. It takes more than intelligence.\n\nOn a side note, less people is good for wealth equality. Now workers are treated as disposable, many are in bullshit job positions. Plague history shows how surviving working class was rewarded.', "It can be WW3 or something. I think it will be. 'Cause our resources are running low. It's inevitable. The humankind has to suffer. :(", "Spraying water into the atmosphere and raising the humidity level would actually slightly increase the world's temperature because humid air retains more heat than dry air. Additionally, it takes electricity to pump water, so you'd be using more electricity which generally means more global warming.", 'Cool story, high entertainment potential.', "That's the sad thing, isn't it? People think that you are fear-mongering, while it's a real issue.", 'The best thing you can do is call or write your government representatives.', 'Recently. Oh brother.', "If you really want to tell people I think you have to accept that some people will always think you're fear mongering. Not talking about it because of other people's feelings that don't believe it in the first place will never help to solve the issue. I agree with the other commenter though that right now is the best thing that could be happening for the planet. It won't last forever, but maybe when things begin to slowly return to normal it will have changed the way people think and people will finally realize that maybe there are some important things besides themselves", 'Earth is healing herself at a rapid pace right now. Like aggressively fast. Due to the planet shutting down from Corona .', "Sounds like a ton of dependencies on others and companies. You'll need zoning for your bunker and your geothermal system uses a ton of electricity to circulate. Solar panels are only rated for maybe 25-30 years before efficiency falls below acceptable levels. \n\nThink simpler. Less moving parts, less technology reliant upon electricity. Focus on the basics- shelter, food and water.\n\nAlso, others may try to take it from you. Including military and government.\n\nTry over at r/preppers", "You might want a waterproofed underground LIVING & SLEEPING space, for when air temperatures become intolerable, and there's not enough power for air conditioning. Also useful in case of \ntornadoes.\n\nA Motte and Bailey construction would be good to provide earth insulation, yet still be above possible flooding from heacy rains.\nReflective coating or solar-panels on all non-North facing surfaces.\n\nLarge openable panels on roof, to dump heat into space at night.", "I've thought a ton about just going it alone with me and my family in a little bunker like you describe. I do feel like its not super safe, and roving marauders will eventually stop by your house. I feel like a group of like minded people would be smart to form a community.\n\nyour thoughts?", 'my plan is to die as painlessly as i can.', "My plans for a 2 degree temperature rise. \n\n1. pay lots and lots of taxes so that the government can improve the infrastructure to cope with more extreme weather.\n2. Mourn the lost wildlife that couldn't adapt to rapid change.\n3. Welcome the migrants who have come to my country to avoid really bad weather where they used to live.\n4. 
Wish that people had taken CO2 more seriously\n\nI'm not trying to be dismissive of the problems of climate change, but before preparing for a scenario out of a mad max film, please bear in mind...\n\na. people happily live in the sahara desert, where it is very hot and dry.\n\nb. people happily live in Holland, where the land is literally below sea level.", 'I’m just gonna refer to global warming as climate change since cc includes the actual effects of warming. Anyways, here’s a collection of things you should be worried about (enjoy!):\n\n-rising sea levels as you mentioned which will cause flooding of coastal cities (it’s important to note that 40% of the world’s population lives within 100 km of a coast so it’s not a risk for “a relatively small number of humans”) [source](https://sedac.ciesin.columbia.edu/es/papers/Coastal_Zone_Pop_Method.pdf) \n\n-extreme weather events such as heatwaves which lead to wildfires and drought (impacts agriculture) also heavy precipitation and blizzards\n\n-now in terms of the environment it causes effects such as ocean acidification (which badly harms the ocean life) and once again wildfires and drought (negatively impacts flora and fauna) \n\n-if you’re still not feeling it w the above points then I’ll mention how even if you’re somehow someplace where you’re not experiencing the above, you’ll experience it through an increase in prices of certain foods/goods (which would go on to threaten food/goods availability), higher tax rates, health problems, and for sure major (negative) impact on the economy \n\n-I’m probably forgetting some things rn because it’s a little late but these are the main ideas\n\nThis still may not seem like much to you and that’s fine because it’s hard to imagine this happening to our planet. Climate change is unprecedented and it’s truly the greatest test of our humanity. As for our species it has the potential to wipe us out unless we take major action (Including the poor other living creatures :( ). The planet will still exist regardless because we (hopefully) can’t blow it up and physically remove it. It will definitely be inhabitable without question (at least for our type). \n\nSorry if this doesn’t make sense but I hope this answers your questions in some capacity and please ask more if this doesn’t convince you!', 'Shifts in climate are projected to lead to the collapse of several of the major food growing regions on earth leading to famine on a scale never before experienced, mainly in Africa and Asia.', "Gases dissolve in ocean water at greater rates if the planet / ocean is cooler. This means greater oxygen availability for the fauna / flora in the ocean. This flora is actually what creates breathable air for the landlubbers. Increasing temperature means less oxygen in the water. When this suffocates the organisms responsible for creating breathable O2, you will see a massive collapse of the ecosystem.\n\nCO2 is weird because it'll form H2CO3 (carbonic acid) and will dissolve in greater rates and pushes out even more CO2 as the acidification then further dissolves the chalk (CaCO3 - basically sequestered CO2) at the bottom of the ocean. Acidification then causes bleaching of corals and massive changes to the ocean environment. This will further exacerbate the situation. As there's more CO2 in the air, the more it dissolves into the water (Henry's law).\n\nWe're parasites in the grand scheme of things and when these ecosystems collapse, that's going to do the true damage. Less arable, farmable land. 
Less available clean water. Less hospitable living areas on the planet. Hell, if the CO2 concentration in the atmosphere increases past a certain point, you will not be able to leave your house for very long as you won't be able to breathe the air without getting loopy. 78% N2 / 21% O2 and trace other gases is the composition of the atmosphere. If you fuck with that O2 number too much, you put everything in jeopardy. Including the way the planet keeps cool.\n\nI suggest you read a chemistry book or just literally any science textbook. It's like you don't know how Henry's law works, or what an ecosystem is, and only understand what's been spat on the news (ocean level rise because glaciers are melting...4HEAD). I feel like you can't grasp basic concepts and that's why you are asking questions like this. It's not even hate, I just feel sad because this is likely the level of understanding the general populace has. No understanding of scale, no understanding of pretty basic science principles, no understanding of interconnectedness of systems, no understanding of stimulus / response, no understanding of time scales, and just general poor background in any sort of mathematics and / or science.", 
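The solubility argument above can be illustrated with Henry's law plus the van 't Hoff temperature correction. This is a minimal sketch with standard literature constants for CO2 in water (from Sander's Henry's-law compilation: kH ~ 0.034 mol/(L·atm) at 298 K, temperature coefficient ~ 2400 K); treat the exact outputs as illustrative only.

```python
import math

KH_298 = 0.034      # mol/(L*atm), Henry's law constant for CO2 at 298.15 K
VANT_HOFF = 2400.0  # K, d(ln kH)/d(1/T) for CO2 (van 't Hoff coefficient)

def dissolved_co2(temp_k, p_co2_atm=415e-6):
    """Equilibrium dissolved CO2 (mol/L) at a water temperature and partial pressure."""
    kh = KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))
    return kh * p_co2_atm

cold = dissolved_co2(283.15)   # 10 C water
warm = dissolved_co2(293.15)   # 20 C water
print(f"10 C: {cold:.2e} mol/L, 20 C: {warm:.2e} mol/L, "
      f"{(1 - warm / cold):.0%} less CO2 dissolves in the warmer water")
```

The same inverse temperature dependence holds for dissolved O2, which is the commenter's point about warmer water carrying less oxygen for marine life.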
'Small Numbers of People?... think again... All major cities in the entire world are located by large water bodies. Geography check!', 'i think you underestimate the amount of people that live in coastal cities and on islands in the world who will be homeless due to rising sea levels. but it's not just that, it will also get hotter and there will be places where it will just be too hot to live. Additionally droughts will be much more severe and it will be harder to grow food. Also there will be more extreme and more regular extreme weather events.\n\nso all these things will make places uninhabitable, and the people that lost their houses will become refugees and must go somewhere else. \n\nif you look at the refugee crisis, europe had about 1 million refugees and it caused a lot of trouble. \nthe number of refugees that lose their homes due to global warming will be closer to 1 billion. \nso where are these people gonna go?', 'War. There will be massive wars over migration and resources as resources become more scarce. We saw how poorly the world handled migration from the Middle East following the Arab spring....imagine if every person living near the equator moved north or south to escape the heat, wild fires, expanding desertification, and general unpleasantness. There will be lots of wars started because of climate change. The DoD plans their strategy with this assumption baked in.', " My plans for a 2 degree temperature rise.\n\n1. pay lots and lots of taxes so that the government can improve the infrastructure to cope with more extreme weather.\n2. Pay extra for food, to cover the cost of farmers changing crops or moving location to suit the new climate.\n3. Mourn the lost wildlife that couldn't adapt to rapid change.\n4. Welcome the migrants who have come to my country to avoid really bad weather where they used to live.\n5. Wish that people had taken CO2 more seriously\n\nI'm not trying to be dismissive of the problems of climate change, but for those imagining a scenario out of a mad max film, please bear in mind...\n\na. people happily live in the Sahara desert, where it is very hot and dry.\n\nb. people happily live in Holland, where the land is literally below sea level.\n\nc. rising temperatures and CO2 are bad for some crops, but good for others.", 'Are you a human being?', "It comes from over consumption of land and resources.\n\nPopulation growth is related to those things but it isn't a root cause. \n\nOver consumption of land and resources is the best locus for the direct cause of global warming.\n\nInterestingly global warming is just one of several effects that locus is causing. The rest are bad too.", 'I saw this earlier and I looked at the sources, electroverse and the global temperature twitter account; needless to say, when I looked at them further I felt no need to see where their sympathies lay.', "Please don't be an insufferable prick to them. It will only confirm their suspicion of the world lying to them. Just give civil arguments and eventually they won't be able to defend their point anymore. They'll start shouting at you and that's the sweet taste of victory", 'Gee, thanks, that's, um, quite the website. I went to the main page and gagged. Antivax and "COVID 19 won't hurt you" interspersed with worst winter ever. \n\nIt's sad to see the chemtrails claim another victim.', 'Climate change? Cover the earth in 800M Wales and the problem is solved.', "Isn't global warming from CO2 a Chinese myth, and truly the sun is getting hotter and that's the reason why the globe is warming.", 'I NEED TO KNOW WHAT IS THIS', 'Really it's just capitalism shutting down for a bit', 'You can get near-real-time data from NASA, [https://earthdata.nasa.gov/earth-observation-data/near-real-time/hazards-and-disasters/air-quality](https://earthdata.nasa.gov/earth-observation-data/near-real-time/hazards-and-disasters/air-quality)', 'There is a good app: https://apps.apple.com/de/app/airvisual-air-quality-forecast/id1048912974?l=en', 'World made by hand?', "I picked up a book at the op-shop recently, the blurb sounded like this but I haven't read it yet; the title is 'End of Days'", '1) None of your claims have any figures. Using words like "huge" when there's no reference frame at all doesn't mean much of anything.\n\n2) None of your claims have sources.\n\n3) You're unfairly comparing the fuel cost of nuclear power to the construction cost of solar and wind. All the concrete and steel still needs to be shipped. The technicians and operators need to drive to the plant.\n\n4) I don't know if you even know about this, but there isn't a lot of risk with nuclear power, it's the [safest source of energy.](https://www.statista.com/statistics/494425/death-rate-worldwide-by-energy-source/) Think of it like a plane vs a car. Plane crashes are a catastrophe while car crashes are "just" a tragedy. But cars crash so much more often that they far outpace planes.\n\nI'm sorry but after this it just seems like you're addressing nonsense arguments and it's making me cringe to even hear it, I'm tapping out\n\nGeneral improvements:\nSunglasses make the communication less personal, making you seem insincere and untrustworthy.\nEstablishing yourself politically and separating yourself from the "liberals and progressives" alienates you from liberals and progressives.', 'No offense but I don't think anyone's going to watch the whole thing. You need to be quick to the point and have an ordered structure already in your head. If there's nothing visually striking, it really has no business being more than 5 minutes. 
People on the internet have short attention spans.\n\nedit: I’ll go through it anyway', 'True', 'I want to cry.', "Well that's at least kinda funny I guess", "Let's not forget that a green new deal is the only way we can save this planet! 💚🌍🌎🌏", 'Get involved with your local movements and see what they have planned next month. They need your help to plan and to participate!', "This article is 16 years old and we still haven't done $h#*. Shameful. Our leaders should be tried and convicted Nuremberg style for the suffering and death they're responsible for.", 'The report says that Britain will have a "Siberian" climate by 2020. Maybe the report was kept secret because it was badly done.', 'I don’t understand how the world leaders are just going to let them do this. They live on this planet too. All the money in the universe isn’t going to protect them.', "Change is necessary but suggesting mass murder makes you no better. It's taking the same act you are condemning. If you truly want to elicit change then you need to actually BE different.", "Rule 1:\n\nDon't piss into to the wind. Especially a hot dry wind that's getting hotter and dryer all the time. You'll just end up all wet.", 'I don\'t want to get all gloom and doom here, but the author of this article fails to address several "apocalyptic" scenarios, and displays some ignorance about the scenarios he does address. For example in discussing climate change the author completely ignores the negative effects that increased atmospheric CO2 appears to be having (such as on [ocean acidity](https://www.pmel.noaa.gov/co2/story/What+is+Ocean+Acidification%3F) and on [C3 plant health](https://cosmosmagazine.com/biology/long-term-experiment-shows-we-ve-got-it-wrong-on-photosynthesis)), and frames the issue as a matter of temperature. And then he incorrectly suggests the only problems caused by increasing temperatures are consequences like sea level rise, when in fact there are several other consequences, such as (but not limited to) declining insect fertility ([source 1](https://www.bbc.com/news/science-environment-38559336), [source 2](https://www.bbc.com/news/science-environment-46194383)). The author\'s failure to adequately address the issues doesn\'t bode well for their conclusions.', 'So will roaches and that’s what we have to look forward to in the future. Thousand year wars between humanity and versatile radioactive beetles. \n\nThe futures looking bright, innit folks?', 'So, a plague saves the planet?', 'First and foremost, look at where you are getting your information from. YouTube videos made by random people online is not a valid source. I’ve noticed that a lot in climate skeptics is their lack of education; not to offend you but when they see a clickbait article from a publisher online in a social media site, they tend to follow it more and more. But going back to the topic, an overwhelming majority of scientists all agree that climate change is real and it’s a threat to all societies. Even Exxon Mobil scientists were made aware of this issue, years ago but did nothing to raise concerns simply due to corporate interests. Of that slim minority of scientists who do reject climate change, are all backed by corporations to serve in their financial interests. My environmental geography professor has gone over the harmful impact misinformation has done to society, especially on this topic, so I ask that you take a look at what actual accredited scientists say on this matter, while looking at valid sources. 
I would recommend using 'Google Scholar' and simply searching up climate change to get all the statistics, reports, and analysis on climate change. The sources come from independent agencies, universities, and so on and so forth.', '>They often claim that global warming is caused by other factors such as gravitational pull\n\nHow would "gravitational pull" affect mean global temperature?\n\n> the sun\n\nSolar activity can't explain the warming since 1950, because [solar inputs haven't been increasing](https://imgur.com/vf3jTHY) during that time. They've been generally steady or decreasing, while temperature has risen.\n\nSimilarly, [the recent trend in cosmic rays is the opposite of what it would need to be, to cause warming](https://imgur.com/llsqkSB).\n\nEven "skeptic" blogs like [wattsupwiththat](https://wattsupwiththat.com/2019/12/12/deep-solar-minimum-on-the-verge-of-an-historic-milestone/) have acknowledged these recent trends in solar activity and cosmic rays.\n\n>it's just a fluctuation in a large cycle of warming and cooling\n\nCyclical variations in the orbit and tilt of the Earth relative to the Sun are known as the [Milankovitch cycles](https://en.wikipedia.org/wiki/Milankovitch_cycles), which are understood to be an important driver of the glacial/interglacial cycle. But the Milankovitch cycles can't explain the recent warming - they are currently in a phase that should be leading to a slight cooling trend, if anything. [Here's a graph that shows the Milankovitch forcing for the past 20,000 years](https://imgur.com/a/50Sotae). As you can see, it peaked around 10K years ago, corresponding to the warming that took us out of the last glacial period. It has been decreasing for thousands of years since then. (Source data available through [this page](https://biocycle.atmos.colostate.edu/shiny/Milankovitch/).)\n\n>green house gases are only made up of .3% percent Carbon dioxide\n\nCO2 has now been raised from around 270 ppm of the dry atmosphere before we started adding to it, to above 400 ppm (0.04%) now. That sounds like a very small percentage, but keep in mind that [more than 99.9% of the dry atmosphere is composed of gases that are not greenhouse gases](https://en.wikipedia.org/wiki/Atmosphere_of_Earth#Composition). So even as a trace gas, CO2 is the second-biggest contributor to the greenhouse effect, after water vapor, accounting for between [9 and 26% of the total effect](https://en.wikipedia.org/wiki/Greenhouse_gas#Impacts_on_the_overall_greenhouse_effect).\n\n>while water vapor makes up 3%, and is more effective at trapping in heat radiation\n\nYes, water vapor makes up the biggest portion of the total greenhouse effect. [But it acts as a feedback, not a forcing](https://www.yaleclimateconnections.org/2008/02/common-climate-misconceptions-the-water-vapor-feedback-2/). It amplifies cooling and warming signals from other causes, including the increase in CO2.\n\nI recommend starting with [skepticalscience.com](https://skepticalscience.com/) to find well-sourced answers to most of the common "skeptic" arguments.', 
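The ppm arithmetic quoted above checks out; here is a minimal sketch using only the concentrations given in the comment:

```python
pre_industrial_ppm = 270.0   # "around 270 ppm", as quoted in the comment above
current_ppm = 415.0          # "above 400 ppm"

fraction_of_dry_air = current_ppm / 1e6
relative_increase = (current_ppm - pre_industrial_ppm) / pre_industrial_ppm

print(f"CO2 is {fraction_of_dry_air:.4%} of dry air")               # ~0.0415%
print(f"about a {relative_increase:.0%} rise over pre-industrial")  # ~54%
```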
"YouTube, or even a random internet website, isn't a good way to seek information. It's very easy to take diagrams, numbers etc out of context and make them say what you want.\n\nA good way to inform yourself is for example reading the IPCC's reports: [https://www.ipcc.ch/report/ar5/syr/](https://www.ipcc.ch/report/ar5/syr/).\n\nA thing to keep in mind is that a very large part of people talking about global warming don't know what they're talking about.\n\nThe reality is that if you look at all the scientific publications about global warming, a vast majority come to the conclusion that the current global warming is caused by human activity. \n\nThose people are specialists, and know far better than random people on the internet.\n\nAnother thing to note is that global warming isn't even the only issue caused by CO2 emissions.\n\nCO2 dissolves in water and produces carbonic acid. This changes the pH of water and will cause serious problems for underwater life.\n\nThe conclusion is that we have every interest in reducing our CO2 emissions.", 'Try this podcast https://www.criticalfrequency.org/drilled', "The most persuasive climate deniers are good at honestly telling PART of the story about the climate in a way which casts doubt on the argument for global warming, but they don't mention crucial facts which undermine their argument. \n\nFor example, it is easy to post evidence showing that over thousands and millions of years the climate was only affected by the sun, volcanoes, ocean currents etc and not directly by CO2. However, this is deliberately misleading because until recently nobody was burning millions of tons of coal and oil and changing the balance of the atmosphere.", "You shouldn't believe a scientist, or two, or more.\nWhat you should believe is the scientific consensus, i.e. the result of the majority of studies.\nAround 97 to 99% of these studies are unanimous: global warming is real, and caused by humans.\nAlso you shouldn't try to understand exactly how and why, like you wouldn't try to be a specialist in molecular physics, or biology, or mathematics.\nBut you should always trust the scientific consensus.", 'so there have been some good answers already but here is a graph that shows the correlation of CO2 and average temperature. https://herdsoft.com/climate/widget/image.php?width=600&height=400&title=&temp_axis=Temperature%2BAnomaly%2B%28C%29&co2_axis=CO2%2BConcentration%2B%28ppm%29', "You should check your sources, I was also brainwashed by skeptics back when I was in school. Now that I'm in uni and study something related to the field, I can tell that the claims that you often see on youtube and on blogs don't really make sense. You shouldn't get your information from scientists directly, because they are people, and people can lie and be misleading. My recommendation is to see what scientific institutions have to say about it. Try looking at what NASA, NOAA, IPCC, or the US Army Corps of Engineers say, and you'll have a better picture of what the science actually says.\nIf you are looking for a direct debunking of many of the claims made by skeptics I can't recommend the YouTuber potholer54 enough.\n[https://youtu.be/ugwqXKHLrGk](https://youtu.be/ugwqXKHLrGk)", "If you'd like to read a book that presents the evidence, Hansen's *Storms of My Grandchildren* is great. It focuses on basic physics and geological history, instead of just asking you to trust computer models.\n\nThere are chapters on the politics and what Hansen thinks we should do, so if you want you could just focus on a few chapters in the middle that go through the science. 
Just from reading that you'll easily see through a lot of the nonsense posted on youtube.\n\nHansen is the NASA scientist who testified to Congress about climate change back in 1988.", 'Hey guys how about we don't attack this person. \n\nWhy not give him some good books to read or studies to look at?\n\nTry An Inconvenient Truth. \n\nMy only personal opinion is that changing to stop global warming will be harder than maintaining the status quo. Many people wish to maintain the systems currently in place because those very systems make them rich, without regard to the effects they have on the planet, or its peoples.', 'For me back in the 2000s when I was reading scientific papers on climate science and trying to figure out whether the arguments of the skeptics held any water, the most convincing evidence for me that global warming was happening and was caused by greenhouse gasses was the scientific data showing that the upper atmosphere of the earth was dramatically cooling. Here is one article that lays out some of the information:\n\n[http://www.theclimateconsensus.com/content/satellite-data-show-a-cooling-trend-in-the-upper-atmosphere-so-much-for-global-warming-right](http://www.theclimateconsensus.com/content/satellite-data-show-a-cooling-trend-in-the-upper-atmosphere-so-much-for-global-warming-right)\n\nThe upper atmosphere is cooling dramatically. There is no weather up there to cause fluctuations in temperature and the data is very clear. The cooling is causing the upper atmosphere to shrink. NASA noticed that satellites could fly lower without experiencing drag.\n\nWhy would a cooling upper atmosphere be evidence of global warming? Such cooling was a prediction of the scientists who argued that a rising greenhouse effect due to human emissions was warming the earth. Increasing greenhouse gasses trap heat in the lower atmosphere so there is less heat radiating to space and warming the upper atmosphere. In the 2000s, many sincere skeptics abandoned their skeptical stance when they saw the data showing a dramatic cooling of the upper atmosphere.\n\nIf global warming was caused by the sun, you wouldn't see the lower atmosphere warming while the upper atmosphere was cooling.\n\nYou shouldn't get all your information from one side. You lack the scientific training to find the problems with the skeptics' arguments, so you will tend to be persuaded by arguments that seem convincing.\n\nSee this youtube playlist by user Potholer54, science journalist Peter Hadfield:\n\n[https://www.youtube.com/playlist?list=PL82yk73N8eoX-Xobr\_TfHsWPfAIyI7VAP](https://www.youtube.com/playlist?list=PL82yk73N8eoX-Xobr_TfHsWPfAIyI7VAP)\n\nPotholer54 lays out some of the basic evidence for human caused global warming and counters the arguments of the youtube climate skeptics.\n\nAlso check out [https://skepticalscience.com/](https://skepticalscience.com/). The website shows the fallacies in the arguments presented by skeptics. It's a never ending task. As soon as one argument is put to rest, skeptics come up with another "reason" why global warming isn't happening, isn't caused by humans, or isn't dangerous.\n\nSkeptics' arguments change over time because they are not arguing in good faith. 
They are looking to cast doubt on the scientific consensus about global warming rather than seeking to understand the scientific evidence.\n\nDo you really think scientific institutions all over the globe that have proclaimed that the earth is warming dangerously and that the cause are human emissions of greenhouse gasses are part of a vast left-wing conspiracy? It\'s more plausible that the few individuals arguing the on the other side are on the payroll of fossil fuel companies. Many of them are, in fact, as you noted, indirectly receiving payments from industry.\n\nRegarding your concern about whether increasing concentrations of CO2 could cause a an increasing greenhouse effect, consider that a the greenhouse effect due to CO2 is settled science following a paper published in 1969. See the bottom of this post:\n\n[https://skepticalscience.com/basics\\_one.html](https://skepticalscience.com/basics_one.html)\n\nHow does increasing CO2 cause an increased greenhouse effect when water vapor is so much more important? CO2 and water work together to create a greenhouse effect because they have different and complementary absorption spectra. Radiation that would escape through water molecules is blocked by CO2. Water vapor is like a thick crocheted blanket with holes that let some heat escape. CO2 is like a thin blanket on top of the thick blanket that traps the heat that would have otherwise escaped. We\'re getting into the weeds a little, but here\'s a paper about this:\n\n[https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/wea.2072](https://rmets.onlinelibrary.wiley.com/doi/pdf/10.1002/wea.2072)\n\nIncreasing CO2 warms the earth a little. The little extra warmth increases the amount of water vapor in the atmosphere since warm air holds more water than cold. The increase in water vapor causes the earth to warm more. Increasing CO2 causes a positive feedback from water vapor which causes more warming than that due to CO2 alone.\n\nScientists have been arguing about how much the earth would warm for a given increase in CO2 (climate sensitivity). I\'m not going to try to get into the details. In recent years the estimates for climate sensitivity have been increasing as more and more warming is measured. In particular, the oceans are warming dramatically as they absorb most of the heat produced by the increasing greenhouse effect.\n\nTry to understand the basics: the increasing greenhouse effect from gasses produced from the combustion of fossil fuels. And keep in mind that you may lack sufficient science education to understand the more nuanced scientific arguments. Please trust the scientific institutions of the world.\n\nWhat the hell dude. Are you really leaning skeptic? Can\'t you see Greenland is melting? Can\'t you see that the Arctic ice is disappearing, coral reefs are dying, glaciers are melting, Spring is happening earlier, etc., etc., etc.??? Is there any good explanation for all of his other than the heat trapped by greenhouse gasses. Is there???', "Hey, checkout this article.\n\n [https://www.bloomberg.com/graphics/2015-whats-warming-the-world/](https://www.bloomberg.com/graphics/2015-whats-warming-the-world/)\n\nit addresses most of the counterclaims that denialists put out (volcanos, sun, deforestation, etc.) You'll see that most claims are horseshit, put out by paid shills of the petrol gas industry.", 'Propaganda from the left? What could the left possibly want besides cleaner air? Preventing the melting of glaciers? 
Don’t you think oil and coal companies would want to stay in business? So what kind of studies do you think they’re funding?', "I don't think asking for likes/upvotes on reddit is very wise. Make people think your karma hungry. But good luck getting solar panels.", '20 to go', "Maybe you guys should learn outside in the sunlight instead of using electricity to run the lights, or make the ceiling transparent. Hows that for innovation combined with conservation? The solar panels only utilize 20% of light anyway, why not use 100% of it? Maybe you can plant few trees while you're at it.", 'No it won’t', 'I hope you get the solar panels!', "Would help if you didnt upload a video in 240p. Cant even read what the text is saying, music is awful quality, tictoc???? It's a college that you're paying for a d they wa t you to raise money for them????? Private institution.... okayyyyyyyy surrreeeeeeeee...", 'Same here, in Poland. No snow at all. Two years ago was at least cold and I had a chance to sleep in the woods in -15C. Nowadays - barely ever any freezy temperatures. That is so sad... and disturbing.', "Yeah, it's like 62 degrees out today", "I just watched the first minute of this video and was shocked by how stupid it was. Yes, the earth may have been 2 degrees warmer in the past, but, it's the pace or how fast the temperature is changing that is so dramatic and why global warming is so bad. The local plants and animals of local ecology can adapt that fast. When the world was warmer, plants and animals were given thousands of years to slowly adapt to warmer temperatures, it was slowly evolve or go extinct. With global warming, the temperature is changing so rapidly, that most plants and animals can't adapt quickly enough to the new norms. It's the rate of change that is one of the biggest dangers of global warming. not just that the temperature might be a couple or more degrees warmer. Whoever put out this video is an idiot and should be ignored.", "Peer reviewed scientific studies examining the level of consensus among climate scientists that Earth is warming and the primary cause is human activity:\n\n Verheggen 2014 - 91% consensus\n Powell, 2013 - 97% consensus\n John Cook et al., 2013 - 97% consensus\n Farnsworth and Lichter, 2011 - 84% consensus\n Anderegg et al, 2010 - 97% consensus\n Doran, 2009 - 97% consensus\n Bray and von Storch, 2008 - 93.8% consensus\n STATS, 2007 - 95% consensus\n Oreskes, 2004 - 100% consensus\n\n\nThe Cook 2016 meta analysis of consensus studies finds the rate of consensus between 90 - 100% and that agreement with the consensus approaches 100% as expertise in climatology increases.\n\nSO YEAH. I'll take the opinion of the phd experts who dedicate their lives to understanding climate and have been studying this for generations over some Bullshit YouTube video. Thanks.", 'Trump administration\'s NASA website still says its real. No longer updated, but still flying proud representing the scientific consensus that it\'s us, cher. It\'s us indeed.\n\n"Oh say can you see...? " It lifts my spirits TREMENDOUSLY to see the NASA site still up and waving climate truth proudly. Through the rockets red glare, bombs bursting in air, gave proof through the night that the truth of mankind\'s role in global warming still stands unbowed, undefeated.\n\nGlobal warming is an earth cycle that our actions affect. 
That affect is most evident in the incredible rate of accelerating change we are experiencing: "Models predict that Earth will warm between 2 and 6 degrees Celsius in the next century. When global warming has happened at various times in the past two million years, it has taken the planet about 5,000 years to warm 5 degrees. The predicted rate of warming for the next century is at least 20 times faster. This rate of change is extremely unusual." [https://earthobservatory.nasa.gov/features/GlobalWarming/page3.php](https://earthobservatory.nasa.gov/features/GlobalWarming/page3.php)', 'Earth is in a warming cycle right now but normally that would take dozens of years, were accelerating it with our C02 emissions (like really fast)', ' [https://grist.org/climate-energy/current-global-warming-is-just-part-of-a-natural-cycle/](https://grist.org/climate-energy/current-global-warming-is-just-part-of-a-natural-cycle/)', 'The glacial/interglacial cycle is well accepted by climate researchers, and not in dispute. But the recent warming is not part of that cycle.\n\n As our industrial age began, we were already in the relatively warm phase of the ice age cycle. The last glacial period ended about 11,000 years ago, the warming from that shift ended about 8,000 years ago, and basically all of human civilization has developed in a long, relatively stable interglacial period since then (known as the [Holocene](https://en.wikipedia.org/wiki/Holocene)).\n\nBut based on what is known of the causes of the glacial/interglacial cycle, we should not naturally be experiencing rapid warming now as part of that cycle. If anything, [we should be cooling slightly](http://www.climatedata.info/forcing/milankovitch-cycles/files/stacks-image-6a30b42-800x488.png) - although due to the current status of the [Milankovitch cycles](https://en.wikipedia.org/wiki/Milankovitch_cycles) (a primary driver of the glacial/interglacial changes), we’re in a particularly stable interglacial period, and the next full glaciation would likely not be for the next 50,000 years ([Ref 1](http://science.sciencemag.org/content/297/5585/1287), [Ref 2](https://www.nature.com/articles/nature16494)).', "It is 100% earth's cycle, we are in a warming period. However, we are making it go way too fast!", 'The alternative is to use a bidet, which requires pumped water. Pumped water uses electricity, which contributes to global warming.', "Or go on a keto diet, and your stools will be so hard that you don't need to wipe.", 'Very very good point', 'Nah sorry I tried it, still need to use paper.', 'You do need to clean your ass even if you squat. Trust me, I’m Slav living in Japan.', "Temperature is going to get hotter and hotter. If we do nothing, the projections are that first we hit 2 celsius, which is a crisis point for our planet. After that we are heading to 3, then 4, then 5 and 6 Celsius. Whether that takes 1 or a few hundred years, that is civilization ending for most of our planet. \n\nSo yes, bringing the cost down of our solutions and massively ramping them up is cost effective because we wont be able to air condition or eat money in the literal hellscape we are creating. \n\nApart from it being overall hotter, insects,.plants, animals, sea life, forests, coral, and so on are going to get wiped out. The moon is nice to look at but it's the ultimate desert w no life. \n\nAnd there are cheaper way to get out of this mess. \n\nPeople.need.to organize to do so though. \n\n1) There is a mess in getting organized. 
\n2) information that is necessary to make decisions is all over the place but needs to be categorized\n3) plans need to be made, by governmenta, by business, and most importantly by the other brilliant minds of the 7.5 billion or so ppl on this world that each studies and owns a chunk of the problem. \n4) those plans need to be implemented on a mass scale, with the ever continuing drive towards efficiency (using the best solutions more, without expecting 1 silver bullet). \n5) keep refining things and recruiting more and more people in a transnational organization that work on these issues, some free some paid, because there is a ton of work to do and it needs doing now. \n\nFixing these and many other in between steps is what our group LiveCivix is working on. Because whether you are rich, middle class, or poor, it is each our collective responsibility to build this world into a better and continuously livable one. Because all the extra 0's in the world won't make life pleasant living in a bunker (if you can even afford one and then for how long until supplies run out) when things get really bad. \n\nFor solutions as an example, there is a book/org called drawdown that outlines about 100 effective things people could do to fix climate change completely heading towards the goal of drawdown (pulling more carbon out of the atmosphere than we put up.)\n\nAnd a guy named Brian von Herzen who's research project has the potential to drop the cost of carbon removal to the low 2 digits per ton or even maybe make a profit on it with his off shore seaweed platform designs. \n\nThere are likely thousands of these potential solutions around and no one hears about them. \n\nSo if you want to know more,.or commit yourself to a better world, reach out to me as we work on building a standardized system to get people working on projects that fix the problem legally. We can by doing the hard but worthwhile work and getting out of this mental rut of waiting for the inevitable end by doing nothing.", 'Doing nothing is actually the ultimate solution. Polluting industries’ workaholism is the problem.', "The hotter it gets, the more the positive feedbacks kick in, making it get hotter even faster.\n\nThis is a fucking Catastrophe, and asking what would be 'cheaper' is absurd!", 'I wonder if the $50 trillion includes the cost of human suffering? Or if we can put a price tag on biodiversity or quality of life? We must stop global warming by transitioning off fossil fuel. The change will require a carbon tax which will fund transitioning our transportation system to electric cars, electric trains and powering it all with new safe small scale modular nuclear reactors, (SMRs). SMRs are not you grandfathers nuclear power, Chernobyl, Fukushima or a Three Mile Island waiting to happen, but safe and reliable. Think of the difference between a Ford Model T and Tesla Roadster. If we could put a man on the moon in less than ten years we can become carbon neutral in less than 10 years. With proper planning a smooth transition can occur.', 'Well this is certainly an anticlimactic robot uprising.', "I don't doubt this is true, but I just wish these folks would have published first, then got the media attention after. I like to read the report for myself, and then promote/discuss from an informed position.", 'A denialist will say: "But, but, bot are my friends. 
They are the only ones that talk to me while I am browsing the Internet from my grandma\'s basement".', "Maybe using the fossil fuels to produce only plastic, and this would decrease DRASTICALLY the CO2 production, like in 80% or so (sorry if i'm wrong).", "There's just so much evidence. It's troubling that this guy is teaching science. I'd start with this link: https://climate.nasa.gov/evidence/", "Provide data sets. Data doesn't lie, but people aren't perfect either. The more data sets you have to reference the stronger the argument you can make.\n\nIf you're teacher is in fact a science teacher hit them with the ol' Scientific Method route. You'll come off a lot better and show them you understand how science works.", '"Claim 4: The sun or cosmic rays are much more likely the real causes of global warming. After all, Mars is warming up, too.\n\nAstronomical phenomena are obvious natural factors to consider when trying to understand climate, particularly the brightness of the sun and details of Earth\'s orbit because those seem to have been major drivers of the ice ages and other climate changes before the rise of industrial civilization. Climatologists, therefore, do take them into account in their models. But in defiance of the naysayers who want to chalk the recent warming up to natural cycles, there is insufficient evidence that enough extra solar energy is reaching our planet to account for the observed rise in global temperatures.\n\nThe IPCC has noted that between 1750 and 2005, the radiative forcing from the sun increased by 0.12 watt per square meter—less than a tenth of the net forcings from human activities (1.6 W/ m2). The largest uncertainty in that comparison comes from the estimated effects of aerosols in the atmosphere, which can variously shade Earth or warm it. Even granting the maximum uncertainties to these estimates, however, the increase in human influence on climate exceeds that of any solar variation.\n\nMoreover, remember that the effect of CO2 and the other greenhouse gases is to amplify the sun\'s warming. Contrarians looking to pin global warming on the sun can\'t simply point to any trend in solar radiance: they also need to quantify its effect and explain why CO2 does not consequently become an even more powerful driver of climate change. (And is what weakens the greenhouse effect a necessary consequence of the rising solar influence or an ad hoc corollary added to give the desired result?)\n\nContrarians therefore gravitated toward work by Henrik Svensmark of the Technical University of Denmark, who argues that the sun\'s influence on cosmic rays needs to be considered. Cosmic rays entering the atmosphere help to seed the formation of aerosols and clouds that reflect sunlight. In Svensmark\'s theory, the high solar magnetic activity over the past 50 years has shielded Earth from cosmic rays and allowed exceptional heating, but now that the sun is more magnetically quiet again, global warming will reverse. Svensmark claims that, in his model, temperature changes correlate better with cosmic-ray levels and solar magnetic activity than with other greenhouse factors.\n\nSvensmark\'s theory failed to persuade most climatologists, however, because of weaknesses in its evidence. 
In particular, there do not seem to be clear long-term trends in the cosmic-ray influxes or in the clouds that they are supposed to form, and his model does not explain (as greenhouse explanations do) some of the observed patterns in how the world is getting warmer (such as that more of the warming occurs at night). For now, at least, cosmic rays remain a less plausible culprit in climate change.\n\n\n \n\nAnd the apparent warming seen on Mars? It is based on a very small base of measurements, so it may not represent a true trend. Too little is yet known about what governs the Martian climate to be sure, but a period when there was a darker surface might have increased the amount of absorbed sunlight and raised temperatures." \n\nhttps://www.scientificamerican.com/article/7-answers-to-climate-contrarian-nonsense/?fbclid=IwAR2vkrOk0d36Dvl9xPZEfmYqXosawIStSWuJN09x5MXMjrsN0C5SYTtOxJM', 'Solar activity can’t explain the warming since 1950, because [solar inputs haven\'t been increasing](https://imgur.com/vf3jTHY) during that time. They’ve been generally steady or decreasing, while temperature has risen.\n\nSimilarly, [the recent trend in cosmic rays (which is indirectly related to solar activity) is the opposite of what it would need to be, to cause warming](https://imgur.com/llsqkSB).\n\nEven "skeptic" blogs like [wattsupwiththat](https://wattsupwiththat.com/2019/12/12/deep-solar-minimum-on-the-verge-of-an-historic-milestone/) have acknowledged these recent trends in solar activity and cosmic rays.', "I'm late but I would start with the skeptical science website. \n\nhttps://skepticalscience.com/solar-activity-sunspots-global-warming.htm", 'This two articles might help :\n\n[https://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds](https://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds)\n\n[https://jamesclear.com/why-facts-dont-change-minds](https://jamesclear.com/why-facts-dont-change-minds)\n\n​\n\nYale climate communication has a wealth of resources on talking about climate. Most often, spitting out facts dont work. I am an energy scientist and public speaker. After all these years, my biggest lesson : in certain scenarios, I may not be the person who should lecture a denier on climate. It has to come from someone they trust and agree with in some other aspect of their life. I always try to collaborate with a person from the community and make it a discussion instead of a debate.', 'Your "science teacher" is no teacher at all. He shouldn\'t be "teaching" science if he is unable to look at the very real evidence all around us. I say this as a scientist (Ph.D.), having taught at university. By the way I\'m a boomer and know that climate change is real. Might he be an evangelical?? Some of them believe god will save them so they don\'t need to worry about it.', 'What country do you live in? Can you report your teacher to your principal for being unqualified to teach science?', "1- You can learn a lot of thing about global warming on the website of IPCC : [https://www.ipcc.ch/report/ar5/syr/](https://www.ipcc.ch/report/ar5/syr/) . I suggest you to read as much as you can of the synthesis report. It is 167 pages long, but if you put the effort to understand the main diagrams and read the text in orange box, you'll learn a good amount of facts.\n\n2- Ask your professor if he got facts that back up what he is saying. 
A real scientific argument should be proved by number, multiple scientific publications, etc...\n\nFor every argument he says, ask him for proof, for the name of the study, basically, ask him where he found his informations on the subject.\n\nIf he don't know what to answer when you ask this, it will just show that he's not a scientific and basically know nothing about what he is talking.", 'He is not giving away the money, but transferring them to his private fund. Educate yourself on how billionaires benefit financially from such schemes.', 'That’s nice.\n\nNow go pay your taxes too.', 'He has announced it. Do good PR. Remains to be seen if he spends it... while it still matters.', 'I dont even have 7% left of my check when i get paid', 'Thank God for the largesse of insane wealth. What a hero.', "I have - $40.00 I'll take my $3.08 plz", '[deleted]', "The Earth will warm no matter where the concentration occurs. The CO2 is acting like a blanket that's trapping the sun's radiation. Simply put, the heat is being absorbed faster than it can convect through the atmosphere and radiate back away from earth. This is the reason Venus is hotter than Mercury, despite Mercury being closer to the sun.", 'Not sure I understand the question, but here is one thing to keep in mind: greenhouse gasses work by absorbing the infrared emitted by the Earth.', 'Mental gymnastics much.', 'We have to outbreed them and/or educate their children properly', 'You could always retort that climate change is God’s Will and we are nearing the end of Days, then quote the scriptures to prove it is real, e.g. \n\n“For behold, the day is coming, burning like an oven, when all the arrogant and all evildoers will be stubble. The day that is coming shall set them ablaze, says the Lord of hosts, so that it will leave them neither root nor branch.”\n\nMalachi 4:1', "Know that you're not alone in this. I'm in my fifties and I am a mother. I have children who share your love for the planet. I have to believe there are enough of us out there that care as we do. I have to believe that we as a species will work together to heal this beautiful planet. To believe all is lost is not something my mind and my heart can tolerate. So I have to believe in the good of people. There are so many of us who really do care.", 'Lol religion.... backwards as you can get', 'Ask her if she wears a seatbelt. If she says, "yes", then turn the argument back on her:\n\n"Well seatbelts are just the wisdom of man and the Bible calls that foolish. People have been in accidents since before there were cars and we need to trust in God\'s wisdom and not the wisdom of car manufacturers. All that is doing is creating fear in your heart."', 'There is no such thing as god. The Bible literally tells us that slavery is okay and that the world is 6,000 years old', 'I had this same conversation with my grandmother where she claimed “God will protect us.” Or something to that effect, and to that I said “Isn’t climate science and modern green technology that logical protection he is providing us, aren’t those the tools necessary to save ourselves?”', "Use their religion against them. In this case, I would ask her is she is familiar with with the parable of the good steward. If she is a Christian and strongly religious she should know it. Then ask, when Jesus returns will you multiply what was given you or waste God's investment in your stewardship of the Earth? 
I have yet to hear a counter argument.\n\nAnother thing you could do is use the same reasoning for something they care about. Social Security, Medicare, Medicaid, the Military, etc. The first argument works best in my experience.\n\nGood luck.", "Try out some of Katharine Hayhoe's Points on her.\n\nHere's a short video of hers:\n\nhttps://youtu.be/SpjL_otLq6Y\n\nBut she's into science communication, so there's a lot of her out there.", 'I am not a religious person but from my limited understanding, much of religion revolves around prayer. You might need to pray and tell your mom of your prayer so that the blind may see and the deaf may hear and then ask her to pray with you.', 'You can counter by saying that it is mans wisdom which has caused this issue and is detrimental to gods plan. You can stand in the ground that god would want us to save as many innocent people as we can from the problems caused by mans wisdom.', 'What was it, 2 days ago Antarctica got a new highest temperature of 69°F (20.56°C) and previously was 67°F (19.4°C)?', 'Have you tried saying “did you see god say that” and the watch them do backflips trying to justify their stance', 'Well its up to us to make a change', 'Just say the old timey religious phrase "God helps those who help themselves."', 'I will admit that I am an atheist and even I do not care too much.. here are my reasons why...\n\n​\n\n1. The world is always changing\n2. We have had many major extinction events prior, all due to climate change\n3. Don\'t think you\'re so special that this time because humans may be exacerbating the speed of climate change that you\'re any different than any of the previous reasons behind environmental change, mass catastrophe, or extinction events.\n4. Most environmentalists and vegans only seem to care for "cute" animals? Selfish? Why do they worry about Koalas? Go start a cockroach farm. They seem to treat animals, including humans, on a certain scale of what is worth more than the other. Why should I care more about a shark than a raccoon? Is not a life a life either way, regardless of its circumstances? Go take in a homeless man if you\'re so concerned about wellbeing rather than a dog. We are all equal in the sense of life, so I do not get why dog\'s are placed higher than humans when they are equal.\n5. If humans are making the world a worse place, then isn\'t mother nature doing its job to get rid of us? The best way to reduce your carbon footprint is to stop having children, stop flying, stop driving cars, and-just die?\n6. Maybe the next species that arise some 25-100 million years from now will be SUPER-WOKE to maintaining the status quo more than we are. Let humans die out and allow something better to come along? I don\'t hear anyone crying over the loss of the previous 99% of species that have ever lived on this planet for the last 4 billion years and how we should try to revive Pre Cambrian species.\n7. Slowing down our demise only continues to maintain humans as the primate species of control over this planet. Let mother nature do it\'s thang and wipe us out, as she always will.\n8. Maybe go to another planet and bring animals with you. Noah\'s Arc anyone?\n\n​\n\nYes. We are obviously making climate change happen quicker, but if mother nature is the overall concern, maybe we should be wiped out or leave? That is where I end argument with environmentalists. You\'re still an animal. Let the universe run its course. We\'re just a minor hiccup in an endless universe. 
If you\'re so progressive then let the change happen. I am more interested in seeing a planet earth without humans at this point.', "I'm truly sorry your mother can't see it. That has to be painful.", 'Shoulda said fuck God', ' [https://grist.org/climate-energy/current-global-warming-is-just-part-of-a-natural-cycle/](https://grist.org/climate-energy/current-global-warming-is-just-part-of-a-natural-cycle/)', "The cycles are called milankovitch cycles and those are in a cooling phase. There are also solar cycles that can affect global temps, but those too are in a cooling phase. This warming is actually counter cyclical to the earth's natural cycles. Without greenhouse gas emissions the earth would likely be cooling very very slowly. Instead it is warming more rapidly than natural cycles could account for even if they were in warming phases.", 'The xkcd global warming comic', "Save your breath, if they don't believe by now the never will. They are probably Republicans so no use trying to change their opinion on anything at all...", 'The are probably right. IMHO', "Tell them it's not supposed to be happening as quickly as it should be. An ice age lastes over a few million years. Currently, we are exiting it, so we should be getting warmer. But based from the rising temperatures from the last 40 years, it shouldn't be happening this quickly.\n\nAnd we have achieved temperatures higher than the earth's highest natural temperature.", "I should thank my lucky stars every day my parents aren't brainwashed by fox type idiocy.\n\nI feel for you, it must be painful.", 'Thats can be quite stressful but good luck buddy. you got plenty of time left wirh so guess at least you dont have to rush things.\nhttps://skepticalscience.com\nhttps://youtu.be/9M29ns1rUSE', 'Exactly how I feel', '\nIf you or someone you know is contemplating suicide, please reach out. You can find help at a National Suicide Prevention Lifeline\n\nUSA: 18002738255\nUS Crisis textline: 741741 text HOME\n\nUnited Kingdom: 116 123\n\nTrans Lifeline (877-565-8860)\n\nOthers: https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines\n\nhttps://suicidepreventionlifeline.org', 'Existential dread is something we all share. I remember (maybe I was 5 or 6) being acutely aware of impending nuclear death. In fact, in my darkest moments, that is how I predict the tragic death of myself and my loved ones.\n\n There are no assurances in life, other than something is going to kill you. Climate change will impact our lives just as war and famine and disease always has; there is nothing you can do to control the circumstances of your inevitable passing.\n\nSo get on with adapting to your future. Focus on self sufficiency and sustainability. Learn how to build and how to grow. Get strong in body and mind. These are things that you can do now to lessen the impact that these external influences have. They will make you feel better about the future, by changing things you can have a direct influence over.\n\nGood luck brother.', "Im in my late 30s in the US and feel very similar. i watch the fastest growing car segment are SUV's while people talk seemingly rationally about wanting cheaper gas prices and I just dont understand how they arent just panicking inside and we all arent trying to get off gas with every spare resource possible on top of all of the other areas really we need to focus.", "The problem with global warming is going to be that those who rule over us will probably leave us out to hang. 
Rather then shift tax dollars away from them and over to the rebuilding of society, in new locations. New Towns will be required, shifting to new arable land to farm. As well as an urgent need to back off the commercial foraging of wild life without total sustainability. Convenience foods designed for maximum nutrition and not salty, fatty comfort. Get it in everyone minds the fact of living within Limited Resources is a major priority. None of the positive aspects of transitioning to a new climate can be accomplished without the average human becoming proactive, educating themselves, and leaning in to press hard against the greedy and self centered minority of the wealthy. The squeaky wheel gets the grease. It will also require the awareness that human population must be reduced, probably to half the current population. Which can humanely be accomplished by acknowledging the urgent and eternal need to reduce births. Just halting and maintaining current population won't decrease the massive over use of limited resources that remain available to sustain a modern life style. We'll probably massive and grossly expensive tax use to build barriers around the large coastal cities. When in stead, that money needs to be spent on relocation. Relocation will be forced on us and better to spend the money now, then spend it and than spend even more after the rising water forces the issue.", 'PBS, Nova, Polar Extremes, Season 47- Episode 1, is a great film to learn from.', "I'm 15 and parents want me to go to college. Idk if it's even worth it if I'm gonna starve to death in the future.", "Ride it out, where are you going to go?\n\nDying is going to happen no matter if the world was perfect. Fourteen and a half billion years you were dead, when you leave, it could be billions more, for all you know.\n\nGo easy man, you get this time. Don't cheat yourself out of the infinitesimal time you know you have.", 'The Netherlands, no snow this year (or for the past few years...). January was about 3.5c warmer than usual.\n\nThe snowy winters from my childhood are pretty much gone (although I do hate snow now....)', 'Yeah. The same in Central Europe (Czech Republic). It started to snow a little in February, otherwise Christmas on mud is normal in this times here.', 'Today Is 20 C is southern Europe', "New England here... some of the same, but we're also getting feast-and-famine years. Our mild winters (like this one so far) feel like autumn. Our bad winters are unending with record levels of snow over a short period of time.", "I'm in Utah, USA. I feel like the snow comes 2 weeks later every year", "Not a similar experience at all but.. Yesterday, it snowed in Iraq. Yes, that's right. I said IRAQ. It snowed in a desert.", "Live in west-Norway. One year, I think ut was 2013, there was enough snow for is to make a huge snow pile, climb onto the roof of our house, and jump down into the snowpile! It was mye grandpa's idea. He told me he used to do it EVERY YEAR when he was younger! I thought it was a lot of fun and have always wanted to do it again and see whether mye little brother dares to do the same. But there never comes enough snow anymore to make a pile big enough so its safe to jump...", 'Eastern Kentucky USA. Wore a T-shirt today. Why.', "Same here in NYC. Usually we get a few massive blizzards in January. We've had snow maybe three times this winter and it was very light and never stuck more than a couple hours. I haven't worn gloves, a hat, a scarf, or thermal pants this winter. 
I wore shorts and a t shirt out this weekend and it wasn't a problem. Last winter and this winter feel more like the fall that slowly became spring. \n\nInterestingly enough however, the summers have not been that bad either the last couple years. It feels like the weather here is becoming more mild.", 'Same in southern Germany. 20 years ago the snow got up to grown ups knees some winters. And thats in the valley big cities.\n\nThe last couple of years we had almost no snow and the little snow we had was way to late (started snowing November / December back then. Nowadays it starts in February / March and usually melts after one or two days.)', 'The real problem is that there is no quick fix to burning fossil fuels, in fact we humans are burning oil and gas faster than ever, at an accelerating rate and renewables aren’t really making a dent.\n\nWe might as well resign ourselves to the fact that the atmospheric CO2 level has a long way to go before it peaks and learn to live with it.', "I don't think you should minimize plastic pollution like that.\n\nThere is more and more speach about both global warming and plastic pollution, I don't think people in general are more concerned by plastic pollution than by global warming.\n\nI think that a good amount of people that cares about plastic pollution have the minimum of information and cares about global warming too.\n\n(and then there is the persons who doesn't give a single fuck about all of it)\n\nI don't think that the ocean cleanup that exist today is really efficient, but to me it's still a good thing because it raise awareness about a real problem, and will perhaps allow to develop better ocean-cleaning method in the future.\n\nCurrently I believe the best way to reduce plastic pollution is to cut a maximum of plastic consumption.\n\nTo conclude, I think everyone should try their best to cut the more they can their plastic consumption and their carbon footprint.", "The real problem is that with all of these problems, it's too late do anything to fix them.", 'I think I like that you recognize a problem, and it seems to be something a lot of our fellow citizens are incapable of.', "I'm expecting another broken record down there next year, or maybe even sooner than that.", 'The meat industry produces a good chunk of green houses gases, plus it takes up a lot of land. However it is not your place to tell you friend how to live; though the meat industry has quite an impact on the environment, waste produced by corporations makes up around the production of 50% of greenhouse gases. Convince your friend to start voting, and maybe purchasing meat from ethical sources if he can afford to do so.', "Here is an idea, if he doesn't share your views, you shouldn't keep attempting to convince him.", 'instead of trying to convince them to sacrifice such a big part of their life, inform them on other ways they could contribute to the environment like recycling, stopping the use of single use plastics etc?', 'How about you don’t force your views on to him and let him do what he wants', "Compromise on chicken. We can't be asking people to fall into extreme life changes.", 'Just start cooking him really good vegetarian meals.', 'Focus on better farming methods, emission standards, voting for the right candidates, renewable energy alternatives. Animal fat and protein is the most nutritious food on earth, and the reason we evolved into the creatures we are today. 
There are much bigger issues than the tiny fraction of GHG naturally caused by our most valuable food source.', 'Start with Impossible Burgers.', 'From r/vegan\n\n [https://www.reddit.com/r/vegan/wiki/beginnersguide#wiki\\_.2022\\_why\\_go\\_vegan.3F](https://www.reddit.com/r/vegan/wiki/beginnersguide#wiki_.2022_why_go_vegan.3F)', 'I mean... Is it really that important to you that everyone around you 100% agrees with you on everything? There really is no point in trying. If he is going to start eating less meat, it should be because HE wants to and not because you do.', 'There’s bigger problems than your buddy eating meat. You’re literally on a device that is constructed of materials that are extremely harmful to the environment to mine and even worse when disposed of not to mention the energy using to voice this brainless opinion and the spent hydrocarbons your ear holes emit while you angrily explain to your friend that the meat industry is the only cause of global warming that he or she can have an impact on.', "Sorry, this post was written in a hurry. O mean thar my friend, although he is a great guy, believes that it is in his right to eat meat every meal of the day. I believe that, sure, it is, but maybe eat a bit less. No, I'm not a vegan. No I am not forcing my diet on anybody. I am just trying to share my perspective on this with him. \n\nFurthermore he doesn't believe in Climate Change either (lmao) how do I convince him it is real?", "There are 2 ways I can see this. The way the cows are raised, or the factories the process the meat? \n\nSome people (not calling names) think that when cows die, they release methane that causes global warming. This natural methane is true, but it isn't enough.\n\nThe factories the process meant is much worse though. What's even worse is a clothing factory. Pretty much anything that is related to global warming is. You can't control your friend's diet like this.", "Although you can't fully change your friend's mind, you could always advise to him have one day out of the week where your friend doesn't eat meat, and make it a fun challenge. \n\nAlso humans have been eating meat for centuries...", 'You should start eating meat.', "Make them some amazing vegan meals and then give them the veganomicon cookbook and watch Dominion. \nAlso point out that animal agriculture is often underreported for it's greenhouse gas emissions due to methane turning into ... Carbon dioxide. Finally wander over to any of the vegan subreddits and ask this question again.", "Disaster is inevitable.\n\nThe question now isn't how to stop it, but how to live with it for the next tens of thousands of years till our feeble efforts to slow it might take effect..", "It's coming on fast!", 'The ski industry all over the world is in trouble.', "The winter here in Lithuania has been an extraordinarily warm one. The temperature stays above 0°C, while it used to drop to minus 20 at times. Must say we're truly f*cked.", "I can't be the only one who wants to tell these people to shut up. Where were you when we needed you? It's a done deal Dum Dum.", '> recognizing the reality of climate change, but not who’s responsible or what could be done about it—is reflected in today’s media coverage of climate. 
[...]\n>\n> **FAIR compiled every article mentioning “climate change” or “global warming” that appeared in the New York Times, Washington Post, LA Times, USA Today and Wall Street Journal in August 2017, when only 277 articles ran, and August 2019, by which time the number had soared to 751. (These months were picked to ensure that results weren’t skewed by time of year—climate coverage, like the storms and fires that often set it off, waxes and wanes seasonally—and also to precede the rush of coverage that accompanied Thunberg’s visit.)** [...]\n>\n> ***while it turns out that the US media have indeed ramped up their coverage of the climate crisis, they continue to give short shrift to what are arguably the most important factors for determining our future: what specific human practices are responsible for the changing climate, why carbon emissions continue to rise, and what we can and should be doing about it*** [...]\n>\n> 2017: Fires and Floods, But Few Solutions [...]\n>\n> 2019: Costs of Decarbonizing, But Not of Inaction [...]\n>\n> The media’s shift toward acknowledging the reality of climate change is welcome, if three decades too late, given that the IPCC has been sounding essentially the same alarm about a warming planet since 1988 [...] But **the public presentation of the climate crisis remains carefully constrained to focus on the horrors awaiting us, not on what** ***can be done*** **to ward off the worst, or who stands in the way of doing so. When climate coverage leaves that out, it amounts to mourning the Earth without trying to save it.**\n\n\n-------------------------------------------------------------------------------------------------\n\n\n**Cost of not acting on climate change -> https://www.theworldcounts.com/challenges/climate-change/global-warming/cost-of-climate-change**\n>\n> ***An estimate from the World Bank finds that climate inaction could reduce global GDP by at least 5 percent annually while the price of the necessary action is set to 1 percent of global GDP annually.***\n>\n> ***We already use 6.5% of global GDP subsidizing fossil fuels so a 1 % investment should be very possible.***', 'Im confused as to why the "extremist left" doesn\'t commit any ecoterrorism against ExxonMobil!', 'Of course, climate change is, not a concerted effort since the eighties to disenfranchise small farmers and consolidate conglomerate farms.\n\nThey even spin the destruction of humanity to hide the crimes of the oligarchs.', 'You really cannot begin to think about solving global warming if you dont understand the greenhouse effect. I would reccomend looking at the wikipedia page. The best thing we can do to stop the planet from heating up too much is to stop polluting the atmosphere with greenhouse gasses like co2 and methane.', "I hope it's a troll / sarcastic post.", 'You’ve solved it!', 'We might need a cull, Or maybe this new virus might help mother nature out.', "Give this man the noble Peace prize, the congressional metal of honor, and times Man of the year. It's the simple beauty of it. It's like poetry. HUMANITY OWES STAMPYWA A THANK YOU! You, sir will be remembered by countless generations.\n\nWHAT ARE WE WAITING FOR? 
LET'S ROLL UP OUR SLEEVES AND INSTITUTE THE STAMPYWAY!!!!!", 'Fk up mate, go back to tent city', '> Over 55 scientists have signed an open letter rebuking Democratic presidential candidate and former Vice President Joe Biden’s claim that the climate plan rival contender Vermont Senator Bernie Sanders supports, the Green New Deal, isn’t supported by anyone in the scientific field.\n>\n> Sanders has proposed spending $16.3 trillion through 2030 to radically reshape the U.S. economy, including $2.37 trillion to renewable energy and storage, over $2 trillion in grants for low- and middle-income families as well as small businesses to buy electric vehicles, and $964 billion in grants for those groups to electrify gas and propane heating systems. His plan also calls for $526 billion on a smart electric grid and hundreds of billions on replacing diesel trucks and buses and new mass transit and high-speed rail lines.\n>\n> Biden’s plan, while still more sweeping than any prior federal effort to address climate change, calls for $1.7 trillion in new spending and only arrived after immense pressure from environmentalists to detail a concrete approach. Biden has also called for ending fossil fuel subsidies across the G20 [...]\n>\n> Last week, Biden attacked Sanders’ plan, telling reporters in New Hampshire that “there’s not a single solitary scientist that thinks it can work,” adding that he doesn’t think zero emissions by 2030 wasn’t possible (note that Sanders’ plan actually calls for 71 percent cut in domestic emissions by that date). **57 scientists from universities and research institutes responded to Biden’s comments in an open letter in support of Sanders released Tuesday.**\n>\n> ***“The top scientific body on climate change, the United Nations Intergovernmental Panel on Climate Change (IPCC), tells us we must act immediately to bring the world together to stop the catastrophic impacts of climate change,” the scientists wrote. “The Green New Deal you are proposing is not only possible, but it must be done if we want to save the planet for ourselves, our children, grandchildren, and future generations.”***\n>\n> ***“Not only does your Green New Deal follow the IPCC’s timeline for action, but the solutions you are proposing to solve our climate crisis are realistic, necessary, and backed by science,” they added. “We must protect the air we breathe, the water we drink, and the planet we call home.”*** [...]', 'This is why I keep telling my parents that supporting Biden for his “electability” makes zero sense. He’d be “better than Trump”, but so would a moldy old shoe. And neither of those options will begin to handle the issues that face Americans and our world.', 'I will never forget what my college science teacher told the whole class our freshman year. "I know I shouldn\'t be saying this, but dammnit your generation has the right to know this. I work with a lot of climate experts, I\'ve done my own research h into climate change and if your generation dosent do something now, you will live in a world that none of your peers, mentors, government officals, ANYONE has ever lived in before." He lead on to say that "if we dont do something by the year 2019 it will he too late." Here we are in 2020 essentially still doing fuck all about this. The boat has sailed people. We should still try SOMETHING, but it\'s probably already too late. And yes he provided a lot of data points, logs, entries, etc. 
From his peers and the data was already very convincing back in 2016.', "Ok, I'll try and dumb it down if I can.\nSo Bernie is the first person in the federal state to do a move to fix climate change and reduce it by 71%, but scientists say it won't work and is now falsely restating what Bernie says. (Tell me if I got this correct)\nPolitics are dumb. I know why we have some laws, but wtf, he is one of the first to try and make an effort and you turn it down!", 'Im confused as to why the "extremist left" doesn\'t commit any ecoterrorism against ExxonMobil!', 'Save the Bernie posts for r/politics thank you.', 'Ahh fuck', 'So many people are talking about it. None of those in power seem to care about their children’s future though', "We don't need Lifestyle/Consumerist change. This current neoliberal focus on Individualism is a hoax. We need to fight the oil corporations.", "I know it's been 5 days to this post, but look it's been 4 years to the video. Not much sense in sharing a piece on climate from that far back.", 'Hahahaha \n\n# ***We’re fucked***', 'Oh, the irony of characterizing this as an American response, when the family in the video is French.\n\nIt\'s so easy to point fingers and blame "them", but the reality is it\'s "us". We, as a species populating planet Earth, need to recognize the threat. We can\'t be thinking in terms of America this, France that, China didn\'t, Australia did. We need to be solving this as a world community of humans, not as nationalists looking after our particular interests.', 'He left the kid behind', 'As soon as the rich stop buying ocean front homes, and building hotels on ocean front property, and the insurance companies and building departments refuse to build on ocean front....\n\nFaith without works is dead. Nobody really believes this as their actions tell me otherwise.', 'Oof', 'There is no such thing as global warming', 'What a stupid title for this video. America is doing a lot to mitigate climate change.', 'You are not wrong at all. If you follow Guy McPherson, he does talk about this, but hey, he’s just a nut job, right? That is why we are so utterly fucked,', 'Didn‘t check the numbers, but the ice reflects giant amounts of energy. Loosing it as a reflector is much worse than the buffer functionality', "Your calculation looks approximately right if you consider, for example, that all the ice melt on a time of one year, and the next year, the same amount of energy which were used to melt the ice is used to heat up water.\n\nIn reality, this is of course not what is happening, this is much more complex than your approximation.\n\nAs I'm not a specialist of thermodynamics, I will not be able to correctly explain how this works in reality.\n\nAnyway, this phenomenon is basic and I'm very confident that every scientist working on global warming is well aware of it and that GIEC, for example, takes it in consideration for their prediction of temperature rise in the future.", 'Well one could ask why the earth didn’t turn into Venus when the last ice age ended. There are many feedback systems that play a role in global temperature, both positive and negative contributing. Just because ONE of the negative feedback loops is dampened or removed does not mean the climate will spiral out of control. It will simply reach a new equilibrium.', "Why do they think that there's more of a threat of nuclear war now than there was during the Cuban missile crisis? 
Back when there was a Soviet Union, there were some scary times when it seemed like it could happen.", 'Given that it has been set to about 2 minutes to midnight as long as I can remember, midnight isn’t going to come in my lifetime so I am not going to worry about this silly clock that appears to be stuck.', 'Same story.\n\nClimate change has been noticeable in the past few years.\n\nI remember years back, when forecaster would say something along the lines of"it will be quite a warm month, probably most warm in the past x years, the mid temperatures will be at –8C°". I don\'t remember how forecasters talk, but something like this.\n\nNow it\'s +4C° and it\'s raining.', 'Coal and oil were first created in the carboniferous. CO2 was absorbed by the trees, and turned into fossil fuels. If we burn all the fossil fuels, we end up back with the 800ppm CO2 we had in the carboniferous. What did I get wrong?', 'I live in Ontario and it’s the first time I’ve seen green grass in January. The trend is following last year but worse. A week of weather just below 0. Snow then a week of plus weather +1 or +2 and snow melts. \nI really is evident with green grass in Ontario in January. Very scary.', 'Climate deniers: "If global warming is real why is it cold?"\nActivists: "The weather is not the climate you cretins!\'\nAlso activists: "It\'s warm so it must be climate change!"\n\nNothing but hypocrites', 'Man, I want you to know that, we as humans, are coming to an end, we can\'t see a bright outcome for the future. Some do, and those are the people that dont know how tragic global warming has come too. Or they outright deny the existence of it. I mean, the atmosphere that holds our oxygen in, is in grave danger of collapsing. We talk about wanting to plant more trees, and we are, and yes, its helping. But, if we dont have any atmosphere we dont have any oxygen. Now, I myself, am a Christian, I\'m not like those "Boomer" Christian\'s tho, that\'s what I call all those., homophobic, racist, sexist Christian\'s, etc, and personally, I believe that god is going to come back, so that\'s why I\'m, well be it, still worried about losing this planet. But, im not worried about losing my soul, now, I\'m not saying to be a Christian or anything, but try to find something you can have faith in/be hopeful for. I would suggest studying about buddhism and Hinduism, actually, because those are very calming religions.', 'Yes, global warming is real, and some of it is man made. But climate change has happened before we started burning coal and oil.\n\nDid you know that before global warming, we had 100s of years of cold winters called the little ice age? Before that, the weather was even warmer than it is now. There used to be vineyards in northern Britain during Roman times.\n\nSo yes, we should protest about pollution, destruction of the environment and our stupid consumer society. \n\nMore importantly we should do stuff like cycling instead of driving, insulating our houses, and not buying stuff unless you intend to use it until it wears out.\n\nBut please stop talking about the end of the world.', "I know exactly how you feel. I see it too. It's deeply upsetting. I don't have any real words of comfort and I'm not even terribly knowledgeable about climate change. I just know that things are different than they used to be. I suppose the thing that keeps me going is that I still believe that human beings can and will find some answer around this, though a lot of people will perish because of it. 
I believe that nature itself will find a balance again, should this be the end of the human race. It was a good run and a happy accident mostly. An interesting and unique accident. In the grand scheme of things, humanity and the planet have gone through countless changes over thousands, millions, and billions of years. This is one of those changes. We're witnessing history in the making, for whatever that's worth. Still though, I'm sad that winter won't be as cold or snowy as it used to be.", 'As we look through an open window at the future, so many refused to even draw the curtains to look .', 'FadulLuix makes some good points. For example warming could cause release of methane hydrates from tundra would cause another big increase in temperature. Meltwater from greenland could divert the Atlantic current which keeps Europe warm. These would be major catastrophes, lots of animals and people would die. \n\nHowever, extinction is another level, means none of a species survive. So far only extinction only happened because humans hunting, farming, introducing rats and cats to isolated islands etc.\n\nI have seen no scientific study explaining how CO2 warming could lead to extinction.', 'Check it out, this may be what your looking for: \nhttps://www.reekoscience.com/science-experiments/miscellaneous/how-to-create-terrarium-vivarium-self-sustainable-bottle-garden\nD', "Don't worry, extinction won't happen. \n\nWorst case we spend a load of money moving agriculture and wildlife north, fight wars over territory and immigration, have to build sea defences like the dutch.\n\nIt's bad, well worth protesting against and changing your lifestyle, but please don't lose hope", 'r/collapse', '[removed]', 'Focus on what can be done, not what might happen. Focus your efforts to learn more and drive political and local change on this issue. Find a local group that spreads awareness. Etc.\n\nTalk about it.(most important)', "Decide that you're going to be the one to figure out how to efficiently remove and sequester carbon dioxide from the atmosphere.", 'I just picked up a good book called Truth to Power by Al Gore. It’s the sequel to An Inconvenient Truth. Truth to Power is a kind of handbook for what to talk to people about, how you can make change, and how you can talk to policymakers. Reading it gave me a sense of hope, that we can still turn this around. It won’t be easy but we all need to do our share. Your generation will help us change our ways. Sorry for my generations contribution to this mess.', 'If anything is to be done about the on-rushing climate disaster, it will be done because people face up to it, not worry about how to feel OK with it.', 'Totally feel this also, hope to see some inspiring responses in this thread because things feel bleak.', 'I feel worried about the changes of our planet also. It’s scary to think how life would be once we are beyond a tipping point. Heading for extinction.\n\nTo feel better about it, do what you can to help and just trust that everyone else is, or will, too. \n\nEasiest thing to do is learn more about it and then chat about what you know with people. That’ll spread awareness (or improve your own) which is the first steps of making change.\n\nAfter I read up a bit, the most I felt I could do to oppose the fat cats of the world, as a small fish, was to support someone/thing that could.\nThere are charities that fight the big corporations. I read the union of concerned scientists were a good one for that stuff. 
(Also read consistent donations to a specific charity is most efficient)\nI suppose voting for the best (lesser of the evil) government is an easy one also.\n\nOther things to do are, eat less meat, eat more locally sourced foods. If you can afford it, get an electric car.\n\nThat’s my 2 pence.', 'I couldn’t tell you, I feel hopeless as hell all we can hope for is that once the droughts and famines and wars wipe out our foolish existence everyone who makes it through can rebuild and learn to co exist with the planet that’s trying to burn us off', 'Fight the oil companies. Fight them', 'Read this paper and do further research [link](https://www.thegwpf.org/content/uploads/2018/02/Groupthink.pdf)', "Even if the planet is destroyed in the next few decades or so, it's unlikely humanity will die with it.\n\nSomething I think a lot of people either don't know or forget is that saving the planet isn't the only chance of us surviving, the most likely alternative is living on mars (which yes is a decently plausible idea) and it's definitely not ideal but at least we won't all be dead.\nBut I doubt it will come to that, we've made decent strides toward helping the planet (not big enough strides but still) and more people are learning about it, it's being brought up more in politics, people are researching different things to help more, and we're coming closer to a time where the people who don't believe in it, or just don't have a reason to care, will no longer have power.", 'There are other theories to answer the Fermi paradox that are more bright !\n\nThe possibility that the birth of life on a planet with good conditions remains extremely unlikely \n\nThe fact that alien spot is but don’t consider us smart enough to make contact \n\nEtc ...', 'Stop being that pessimistic.\n\nNobody really knows what will happen in the future.\n\nOf course this is a grave subject and global warming + a lot of other things will be a challenge for the human race, but we are not at the point where we should think about what we could left behind us.', "I don't think you quite understand the Fermi Paradox.\n\nThe Fermi Paradox is the fact that life *should* be incredibly abundant in the universe, but for some reason no one out there seems to be talking or making themselves known. We can't find anything.\n\nThere are a huge number of possible solutions to this paradox.The one you are thinking of is the Great Filter solution: the idea that there is some evolutionary step or technological leap that virtually all life cannot cross, and that keeps them from exploring the stars where we might detect traces of them. This is probably the most popular solution to this problem, but even those who believe it are very split on where the Great Filter lies - ahead of us, or before us. It could be either. Incidentally, if we ever find life on another planet in our solar system, the odds are very, very likely that the Great Filter is in front of us, not behind. That would be a bummer.\n\nBut as someone in this thread already stated, there are a number of other possible solutions. The Dark Forest Theory, the Apex Predator Theory, on and on and on. They are all worth looking into (here's the first in a cool series of videos explaining some of them: https://youtu.be/sNhhvQGsMEc).\n\nBut setting the Fermi Paradox aside - I mean, I understand your feelings. It's hard to deal with ignorance, but that has always been true. Continue to do what you know is right, that's all anyone can do.\n\nBesides hope. We need to hope! 
There are massive problems threatening humanity at an existential level, but we don't know the future. We don't know what we are ultimately capable of achieving when our back is in the corner (and boy, it will be soon). Look into carbon scrubbing and Co2 mineralization technology. It exists and it will get better with time. Vertical farms, lab grown meats, the theoretical science of weather manipulation. There is a fucking metric SHIT TON of things we can do and try before we throw in the towel and resign all hope for our planet and our species.\n\nBe one of the people that hope for a better future. We need you.", 'Nobody is getting out of this life alive. All that matters is what you do while you’re here. Your eyes are open. Live your best life and don’t worry about what you can’t control, just make better what you can.', 'Fact is, you’re not going to likely change someone’s mind until an experience changes it for them. So the real question is, what are you physically doing to mitigate climate change? For example, I acknowledge I probably won’t change people’s minds, but I can have a positive impact. I have studied environmental sciences and I currently work at a wastewater plant. It may not be huge in the grand scheme, but it’s what I can currently do with my current limitations.', "The Earth is not dying. Civilization is collapsing. It is hubris to think the Epoch we are causing will stack up to those life has survived.\n\nIf we don't what the far smaller amount of humans that will be around in 100 years to be hunter gathers with the culture and knowledge of civilization rapidly being erased by time... we need to get our shit together.", 'Where did you find this? We need to get this on YT trending', "I haven't watched the video, so I dont know if it mentions this, but not alot of people know that our atmosphere could collapse, within the next few years actually, and almost nobody is aware of that, no atmosphere = no oxygen, we would all die, if anybody wants evidence to back up this just reply to my comment and I'll provide it", "Increasing concentrations of greenhouse gasses are causing more of the sun's energy to be trapped in the climate system.", 'Don’t worry. Nuclear winter will cancel global warming out.', 'Did you just make an argument for why virtue signaling is ok? Why emotion should trump hard evidence? There’s nothing wrong with having a gut feeling but to let that dictate action over rational thought is not something I think anyone will benefit from.', 'Holy moly!!!\n\nThis means the oceans have absorbed the energy equivalent to 3.6 billion Hiroshima atomic bombs (63,000,000,000,000 Joules) in 25 years, lead author Lijing Cheng, an associate professor in oceanography at the IAP, said in\xa0a statement.', 'https://www.huffpost.com/entry/world-oceans-hottest-history_n_5e1d0bd9c5b6640ec3d9a1a4', 'As the article says I think he’s a realist. I just hope the world wakes up before it’s too late. I don’t think you’ll ever bring that many people together for a cause in time to save them selves', 'ECOCIDE', 'https://www.technologyreview.com/s/615035/australias-fires-have-pumped-out-more-emissions-than-100-nations-combined/', "Weird how in Australian media we tend to have the courtesy to use the terminology local to the phenomenon in question, (brushfires, wildfires, as you like...) but nobody overseas bothers to use our term for our disasters. (They're called bushfires here in Australia, in case you were interested. 
We also have cyclones, not typhoons or hurricanes.)\n\n\nEven more depressing is how many disasters we have here to make these issues of language stand out."]
# Store comments in a DataFrame using a dictionary as our input
# This sets the column name as the key of the dictionary, and the list of values as the values in the DataFrame
subreddit_comments_df = pd.DataFrame(data={'comment': subreddit_comments})
subreddit_comments_df_____no_output_____# This is an example of how we split up the comments into individual words.
# This technique will be used again to get the scores of each individual word.
for comment in subreddit_comments_df['comment']: # loop over each comment
    comment_words = comment.split() # split comments into individual words
    for word in comment_words: # loop over individual words in each comment
word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
word = word.replace("\n", "") # remove end of line
print(word)
    break # end the loop after one comment
Part
2
would
probably
go
something
like
this
Ok
so
he's
ill
but
the
important
thing
is
that
it's
not
my
fault
</code>
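A cleaner way to split and clean the words is a regular expression. The sketch below is an optional alternative (not part of the assignment as given): it lowercases each comment and keeps only runs of letters and apostrophes, which also helps matching, since the AFINN word list introduced next is lowercase._____no_output_____
<code>
import re

def tokenize(comment):
    """Return lowercase word tokens from a raw comment, dropping punctuation."""
    return re.findall(r"[a-z']+", comment.lower())

tokenize('Fight the oil companies. Fight them')_____no_output_____
</code>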
### Now we will use the sentiment file called AFINN-en-165.txt. This file contains a sentiment score for 3382 words. More information can be found here: https://github.com/fnielsen/afinn. With the sentiment file we will assign scores to the words within the top comments that are found in the AFINN file._____no_output_____
<code>
# We load the AFINN sentiment table into a Python dictionary
sentimentfile = open("AFINN-en-165.txt", "r") # open sentiment file
scores = {} # an empty dictionary
for line in sentimentfile: # loop over each word / sentiment score
word, score = line.split("\t") # file is tab-delimited
    scores[word] = int(score) # convert the scores to integers
sentimentfile.close()_____no_output_____# print out the first 10 entries of the dictionary
counter = 0
for key, value in scores.items():
print(key, ':', value)
counter += 1
if counter >= 10:
        break
abandon : -2
abandoned : -2
abandons : -2
abducted : -2
abduction : -2
abductions : -2
abhor : -3
abhorred : -3
abhorrent : -3
abhors : -3
# we create a dictionary for storing overall counts of sentiment values
sentiments = {"-5": 0, "-4": 0, "-3": 0, "-2": 0, "-1": 0, "0": 0, "1": 0, "2": 0, "3": 0, "4": 0, "5": 0}
for comment in subreddit_comments_df['comment']: # loop over each comment
    comment_words = comment.split() # split comments into individual words
    for word in comment_words: # loop over individual words in each comment
        word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
        word = word.replace("\n", "") # remove end of line
        if word in scores.keys(): # check if word is in sentiment dictionary
            score = scores[word] # look up the sentiment score for the word
            sentiments[str(score)] += 1 # increment the count for this sentiment value_____no_output_____# Print the scores
for sentiment_value in range(-5, 6):
# this uses string formatting, more on this here: https://realpython.com/python-f-strings/
print(f"{sentiment_value} sentiment:", sentiments[str(sentiment_value)])
# this would be equivalent, but obviously much less compact and elegant
# print("-5 sentiments ", sentiments["-5"])
# print("-4 sentiments ", sentiments["-4"])
# print("-3 sentiments ", sentiments["-3"])
# print("-2 sentiments ", sentiments["-2"])
# print("-1 sentiments ", sentiments["-1"])
# print(" 0 sentiments ", sentiments["0"])
# print(" 1 sentiments ", sentiments["1"])
# print(" 2 sentiments ", sentiments["2"])
# print(" 3 sentiments ", sentiments["3"])
# print(" 4 sentiments ", sentiments["4"])
# print(" 5 sentiments ", sentiments["5"])-5 sentiment: 1
-4 sentiment: 36
-3 sentiment: 180
-2 sentiment: 374
-1 sentiment: 219
0 sentiment: 0
1 sentiment: 367
2 sentiment: 442
3 sentiment: 122
4 sentiment: 9
5 sentiment: 0
# Now let us put the sentiment scores into a dataframe.
comment_sentiment_df = pd.DataFrame(data={'Sentiment_Value': list(sentiments.keys()), 'Counts': list(sentiments.values())})
# the 'Sentiment_Value' column is a string; convert to integer (numeric type)
comment_sentiment_df['Sentiment_Value'] = comment_sentiment_df['Sentiment_Value'].astype('int')
# We normalize the counts so we will be able to compare between two subreddits on the same plot easily
comment_sentiment_df['Normalized_Counts'] = comment_sentiment_df['Counts'] / comment_sentiment_df['Counts'].sum() # Normalize the Count
comment_sentiment_df_____no_output_____
</code>
# Prompt
## We will plot the data so it is easier to visualize.
## In each of the three cells below, plot the Count, Normalized Count, and Normalized Score vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and color_____no_output_____
<code>
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Counts'], color='green') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Count') # add y-label
plt.title('Reddit Global Warming Sentiment Analysis') # add title
plt.show()_____no_output_____comment_sentiment_df['Normalized_Counts'] = comment_sentiment_df['Counts'] / comment_sentiment_df['Counts'].sum() # Normalize the Count
comment_sentiment_df_____no_output_____# Normalized Counts vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Normalized_Counts'], color='gray') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value Plot') # add title
plt.show()_____no_output_____
</code>
# Prompt
### In the cell below, enter a subreddit whose post-comment sentiment you wish to compare with the first, decide how far back to pull posts, and how many posts to pull comments from.
Pick a subreddit that can be compared with your first subreddit in terms of sentiment. You may want to go back up to the first subreddit section and change some parameters. For example, do you want to find top posts, or hot posts? From what time period? How many posts? If you change these settings above (the `number_of_posts` and `time_period` variables) you should re-run the notebook from the beginning._____no_output_____The following code is the same as we did for our first subreddit, just condensed into one code cell._____no_output_____
<code>
subreddit_2 = reddit.subreddit('Futurology').hot(limit=number_of_posts)
# Create an empty list to store the data
subreddit_comments_2 = []
# go through each post in our subreddit and put the comment body and id in our dictionary
for post in tqdm(subreddit_2, total=number_of_posts):
submission = reddit.submission(id=post)
    submission.comments.replace_more(limit=0) # with limit=0 this removes the “load more comments” and “continue this thread” placeholders, so we keep only the fully loaded top-level comments
for top_level_comment in submission.comments:
subreddit_comments_2.append(top_level_comment.body) # add the comment to our list of comments
# Store comments in a DataFrame using a dictionary as our input
# This sets the column name as the key of the dictionary, and the list of values as the values in the DataFrame
subreddit_comments_df_2 = pd.DataFrame(data={'comment': subreddit_comments_2})
# we create a dictionary for storing overall counts of sentiment values
sentiments_2 = {"-5": 0, "-4": 0, "-3": 0, "-2": 0, "-1": 0, "0": 0, "1": 0, "2": 0, "3": 0, "4": 0, "5": 0}
for comment in subreddit_comments_df_2['comment']: # loop over each comment
comment_words = comment.split() # split comments into individual words
for word in comment_words: # loop over individual words in each comment
word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
word = word.replace("\n", "") # remove end of line
if word in scores.keys(): # check if word is in sentiment dictionary
            score = scores[word] # look up the sentiment score for the word
            sentiments_2[str(score)] += 1 # increment the count for this sentiment value
# Now let us put the sentiment scores into a dataframe.
comment_sentiment_df_2 = pd.DataFrame(data={'Sentiment_Value': list(sentiments_2.keys()), 'Counts': list(sentiments_2.values())})
# the 'Sentiment_Value' column is a string; convert to integer (numeric type)
comment_sentiment_df_2['Sentiment_Value'] = comment_sentiment_df_2['Sentiment_Value'].astype('int')
# We normalize the counts so we will be able to compare between two subreddits on the same plot easily
comment_sentiment_df_2['Normalized_Counts'] = comment_sentiment_df_2['Counts'] / comment_sentiment_df_2['Counts'].sum() # Normalize the Count
comment_sentiment_df_2_____no_output_____
</code>
# Prompt
## We will plot the data so it is easier to visualize.
## In each of the three cells below, plot the Count, Normalized Count, and Normalized Score data vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and color_____no_output_____
<code>
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df_2['Sentiment_Value'], comment_sentiment_df_2['Counts'], color='blue') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Counts') # add y-label
plt.title('Futurology Reddit Sentiment Value Analysis') # add title
plt.show()_____no_output_____# Normalized Counts vs Sentiment Value Plot
plt.bar(comment_sentiment_df_2['Sentiment_Value'], comment_sentiment_df_2['Normalized_Counts'], color='black') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value Plot') # add title
plt.show()_____no_output_____
</code>
# Prompt
## Now we will overlay the baseline comment sentiment and the subreddit comment sentiment to help compare.
## In each of the three cells below, overlay plots of the Count, Normalized Count, and Normalized Score data vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and plot color_____no_output_____
<code>
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Counts'], color='green', label='Global Warming') # add first subreddit data and color
# add second subreddit with a slight offset of x-axis; alpha is opacity/transparency
plt.bar(comment_sentiment_df_2['Sentiment_Value'] + 0.2, comment_sentiment_df_2['Counts'], color='brown', label='Confidence in Future', alpha=0.5) # add second subreddit and color
plt.legend() # show the legend
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Count') # add y-label
plt.title('Count vs Sentiment Value') # add title
plt.tight_layout() # tight_layout() automatically adjusts margins to make it look nice
plt.show() # show the plot_____no_output_____# Normalized Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Normalized_Counts'], color='gray', label='Global Warming') # add first subreddit data and color
ax = plt.gca() # gets current axes of the plot for adding another dataset to the plot
# add second subreddit with a slight offset of x-axis
plt.bar(comment_sentiment_df_2['Sentiment_Value'] + 0.2, comment_sentiment_df_2['Normalized_Counts'], color='blue', label='Confidence in Future', alpha=0.5) # add second subreddit and color
plt.legend() # show the legend
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value') # add title
plt.tight_layout() # tight_layout() automatically adjusts margins to make it look nice
plt.show() # show the plot_____no_output_____
</code>
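Before the stretch goal, one compact way to compare the two subreddits numerically is a count-weighted mean sentiment. This is a small sketch using the two dataframes built above:_____no_output_____
<code>
# count-weighted mean sentiment: sum(value * count) / sum(count)
for name, df_s in [('Global Warming', comment_sentiment_df), ('Futurology', comment_sentiment_df_2)]:
    mean_sentiment = (df_s['Sentiment_Value'] * df_s['Counts']).sum() / df_s['Counts'].sum()
    print(f'{name}: mean sentiment = {mean_sentiment:.3f}')_____no_output_____
</code>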
# Stretch goal (bonus-ish)
### Although this is not formally a bonus for points, it is a learning opportunity. You are not required to complete the following part of this notebook for the assignment.
Our sentiment analysis technique above works, but has some shortcomings. The biggest shortcoming is that each word is treated individually. But what if we have a sentence with a negation? For example:
'This is not a bad thing.'
This sentence should be positive overall, but AFINN only has the word 'bad' in the dictionary, and so the sentence gets an overall negative score of -3.
The most accurate sentiment analysis methods use neural networks to capture context as well as semantics. The drawback of NNs is they are computationally expensive to train and run.
An easier method is to use a slightly-improved sentiment analysis technique, such as TextBlob or VADER (https://github.com/cjhutto/vaderSentiment) in Python. Both libraries use a hand-coded algorithm with word scores like AFINN, but also with additions like negation rules (e.g. a word after 'not' has its score reversed).
Other sentiment analysis libraries in Python can be read about here: https://www.iflexion.com/blog/sentiment-analysis-python
### The stretch goal
The stretch goal is to use other sentiment analysis libraries on the Reddit data we collected, and compare the various approaches (AFINN word-by-word, TextBlob, and VADER) using plots and statistics. For the AFINN word-by-word approach, you will need to either sum up the sentiment scores for each comment, or average them. You might also divide them by 5 to get the values between -1 and +1.
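For the AFINN part, here is one possible sketch (assuming the `scores` dictionary loaded earlier): it averages the matched word scores per comment and divides by 5 to rescale into [-1, +1]. On the example sentence below it reproduces the -0.6 mentioned in the next paragraph._____no_output_____
<code>
def afinn_comment_score(comment, scores):
    """Average AFINN score of the matched words in a comment, rescaled to [-1, +1]."""
    words = [w.strip('?:!.,;"!@()#-') for w in comment.split()]
    matched = [scores[w] for w in words if w in scores]
    if not matched: # no scored words at all
        return 0.0
    return sum(matched) / len(matched) / 5 # AFINN scores run from -5 to +5

afinn_comment_score('This is not a bad thing.', scores)_____no_output_____
</code>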
Here is a brief example of getting scores from the 3 methods described above. We can see that while the raw AFINN approach gives a score of -0.6 (if normalized), TextBlob shows 0.35 and VADER shows 0.43._____no_output_____
<code>
!conda install -c conda-forge textblob_____no_output_____!pip install textblob vaderSentiment_____no_output_____sentence = 'This is not a bad thing.'
[(word, scores[word]) for word in sentence.split() if word in scores]_____no_output_____from textblob import TextBlob
tb = TextBlob(sentence)
print(tb.polarity)
print(tb.sentiment_assessments)_____no_output_____from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
analyzer.polarity_scores(sentence)_____no_output_____
</code>
| {
"repository": "ebayandelger/MSDS600",
"path": "Reddit_Sentiment_Assignment.ipynb",
"matched_keywords": [
"evolution",
"biology",
"ecology"
],
"stars": null,
"size": 292028,
"hexsha": "cb0e7315d698209292a9ef5a71153b17634a5d21",
"max_line_length": 175647,
"avg_line_length": 227.7909516381,
"alphanum_fraction": 0.7835926692
} |
# Notebook from trquinn/tangos
Path: docs/Data exploration with python.ipynb
Interactive analysis with python
--------------------------------
Before starting this tutorial, ensure that you have set up _tangos_ [as described here](https://pynbody.github.io/tangos/) and the data sources [as described here](https://pynbody.github.io/tangos/data_exploration.html).
We get started by importing the modules we'll need:_____no_output_____
<code>
%matplotlib inline
import tangos
import pylab as p_____no_output_____
</code>
First let's inspect what simulations are available in our database:_____no_output_____
<code>
tangos.all_simulations()_____no_output_____
</code>
For any of these simulations, we can generate a list of available timesteps as follows:_____no_output_____
<code>
tangos.get_simulation("tutorial_changa").timesteps_____no_output_____
</code>
For any timestep, we can access the halos using `.halos` and a specific halo using standard python 0-based indexing:_____no_output_____
<code>
tangos.get_simulation("tutorial_changa").timesteps[3].halos[3]_____no_output_____
</code>
One can skip straight to getting a specific halo as follows:_____no_output_____
<code>
tangos.get_halo("tutorial_changa/%384/halo_4")_____no_output_____
</code>
Note the use of the SQL wildcard % character which avoids us having to type out the entire path. Whatever way you access it, the resulting object allows you to query what properties have been calculated for that specific halo. We can then access those properties using the normal python square-bracket dictionary syntax._____no_output_____
<code>
halo = tangos.get_halo("tutorial_changa/%960/halo_1")
halo.keys()_____no_output_____halo['Mvir']_____no_output_____p.imshow(halo['uvi_image'])_____no_output_____
</code>
One can also get meta-information about the computed property. It would be nice to know
the physical size of the image we just plotted. We retrieve the underlying property object
and ask it:_____no_output_____
<code>
halo.get_description("uvi_image").plot_extent()_____no_output_____
</code>
This tells us that the image is 15 kpc across. The example properties that come with _tangos_
use _pynbody_'s units system to convert everything to physical kpc, solar masses and km/s. When
you implement your own properties, you can of course store them in whichever units you like._____no_output_____Getting a time sequence of properties
-------------------------------------
Often we would like to see how a property varies over time. _Tangos_ provides convenient ways to extract this information, automatically finding
major progenitors or descendants for a halo. Let's see this illustrated on the SubFind _mass_ property:_____no_output_____
<code>
halo = tangos.get_halo("tutorial_gadget/snapshot_020/halo_10")
# Calculate on major progenitor branch:
Mvir, t = halo.calculate_for_progenitors("mass","t()")
# Now perform plotting:
p.plot(t,1e10*Mvir)
p.xlabel("t/Gyr")
p.ylabel(r"$M/h^{-1} M_{\odot}$")
p.semilogy()_____no_output_____
</code>
In the example above, `calculate_for_progenitors` retrieves properties on the major progenitor branch of the chosen halo. One can ask for as many properties as you like, each one being returned as a numpy array in order. In this particular example the first property is the mass (as reported by subfind) and the second is the time. In fact the second property isn't really stored - if you check `halo.keys()` you won't find `t` in there. It's a simple example of a _live property_ which means it's calculated on-the-fly from other data. The time is actually stored in the TimeStep rather than the Halo database entry, so the `t()` live property simply retrieves it from the appropriate location.
Live properties are a powerful aspect of _tangos_. We'll see more of them momentarily._____no_output_____Histogram properties
--------------------
While the approach above is the main way to get time series of data with _tangos_, sometimes one
wants to be able to use finer time bins than the number of outputs available. For example, star
formation rates or black hole accretion rates often vary on short timescales and the output files
from simulations are sufficient to reconstruct these variations in between snapshots.
_Tangos_ implements `TimeChunkedHistogram` for this purpose. As the name suggests, a _chunk_ of
historical data is stored with each timestep. The full history is then reconstructed by combining
the chunks through the merger tree; this process is customizable. Let's start with the simplest
possible request:_____no_output_____
<code>
halo = tangos.get_halo("tutorial_changa_blackholes/%960/halo_1")
SFR = halo["SFR_histogram"]
# The above is sufficient to retrieve the histogram; however you probably also want to check
# the size of the time bins. The easiest approach is to request a suitable time array to go with
# the SF history:
SFR_property_object = halo.get_objects("SFR_histogram")[0]
SFR_time_bins = SFR_property_object.x_values()
p.plot(SFR_time_bins, SFR)
p.xlabel("Time/Gyr")
p.ylabel("SFR/$M_{\odot}\,yr^{-1}$")_____no_output_____
</code>
The advantage of storing the histogram in chunks is that one can reconstruct it
in different ways. The default is to go along the major progenitor branch, but
one can also sum over all progenitors. The following code shows the fraction of
star formation in the major progenitor:_____no_output_____
<code>
SFR_all = halo.calculate('reassemble(SFR_histogram, "sum")')
p.plot(SFR_time_bins, SFR/SFR_all)
p.xlabel("Time/Gyr")
p.ylabel("Frac. SFR in major progenitor")/Users/app/anaconda/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in true_divide
</code>
_Technical note_: It's worth being aware that the merger information is, of course, quantized to the
output timesteps even though the SFR information is stored in small chunks. This is rarely an issue
but with coarse timesteps (such as those in the tutorial simulations), the quantization can cause
noticeable artefacts – here, the jump to 100% in the major progenitor shortly before _t_ = 3 Gyr
corresponds to the time of the penultimate stored step, after which no mergers are recorded.
For more information, see the [time-histogram properties](https://pynbody.github.io/tangos/histogram_properties.html) page._____no_output_____Let's see another example of a histogram property: the black hole accretion rate _____no_output_____
<code>
BH_accrate = halo.calculate('BH.BH_mdot_histogram')
p.plot(SFR_time_bins, BH_accrate)
p.xlabel("Time/Gyr")
p.ylabel("BH accretion rate/$M_{\odot}\,yr^{-1}$")/Users/app/Science/tangos/tangos/live_calculation/__init__.py:585: RuntimeWarning: More than one relation for target 'BH' has been found. Picking the first.
warnings.warn("More than one relation for target %r has been found. Picking the first."%str(self.locator), RuntimeWarning)
</code>
This works fine, but you may have noticed the warning that more than one black hole
is in the halo of interest. There is more information about the way that links between
objects work in _tangos_, and disambiguating between them, in the "using links" section
below._____no_output_____Getting properties for multiple halos
-------------------------------------
Quite often one wants to collect properties from multiple halos simultaneously. Suppose we want to plot the mass against the vmax for all halos at
a specific snapshot:_____no_output_____
<code>
timestep = tangos.get_timestep("tutorial_gadget/snapshot_019")
mass, vmax = timestep.calculate_all("mass","VMax")
p.plot(mass*1e10,vmax,'k.')
p.loglog()
p.xlabel("$M/h^{-1} M_{\odot}$")
p.ylabel(r"$v_{max}/{\rm km s^{-1}}$")_____no_output_____
</code>
Often when querying multiple halos we still want to know something about their history, and live calculations enable that. Suppose we want to know how much the mass has grown since the previous snapshot:_____no_output_____
<code>
mass, fractional_delta_2 = timestep.calculate_all("mass", "(mass-earlier(2).mass)/mass")
p.hlines(0.0,1e10,1e15, colors="gray")
p.plot(mass*1e10, fractional_delta_2,"r.", alpha=0.2)
p.semilogx()
p.ylim(-0.1,0.9)
p.xlim(1e12,1e15)
p.xlabel("$M/h^{-1} M_{\odot}$")
p.ylabel("Fractional growth in mass")_____no_output_____
</code>
This is a much more ambitious use of the live calculation system. Consider the last property retrieved, which is `(mass-earlier(2).mass)/mass`. This combines algebraic operations with _redirection_: `earlier(2)` finds the major progenitor two steps prior to this one, after which `.mass` retrieves the mass at that earlier timestep. This is another example of a "link", as previously used to retrieve
black hole information above.
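The same pattern generalizes. For example, here is a sketch (assuming the same tutorial database) of the growth over a single snapshot, using `earlier(1)` instead of `earlier(2)`:_____no_output_____
<code>
# same redirection syntax, one snapshot back instead of two
mass, fractional_delta_1 = timestep.calculate_all("mass", "(mass-earlier(1).mass)/mass")
p.plot(mass*1e10, fractional_delta_1, "b.", alpha=0.2)
p.semilogx()
p.xlabel("$M/h^{-1} M_{\odot}$")
p.ylabel("Fractional growth over one snapshot")_____no_output_____
</code>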
_____no_output_____Using Links
-----------
_Tangos_ has a concept of "links" between objects including halos and black holes. For example,
the merger tree information that you have already used indirectly is stored as links.
Returning to our example of black holes above, we used a link named `BH`; however this issued a
warning that the result was technically ambiguous. Let's see that warning again. For clarity,
we will use the link named `BH_central` this time around -- it's an alternative set of links
which only includes black holes associated with the central galaxy (rather than any satellites)._____no_output_____
<code>
halo = tangos.get_halo("tutorial_changa_blackholes/%960/halo_1")
BH_mass = halo.calculate('BH_central.BH_mass')
/Users/app/Science/tangos/tangos/live_calculation/__init__.py:585: RuntimeWarning: More than one relation for target 'BH_central' has been found. Picking the first.
warnings.warn("More than one relation for target %r has been found. Picking the first."%str(self.locator), RuntimeWarning)
</code>
We still get the warning, so there's more than one black hole in the central galaxy.
To avoid such warnings, you can specify more about which link you are referring to. For example,
we can specifically ask for the black hole with the _largest mass_ and _smallest impact parameters_
using the following two queries:_____no_output_____
<code>
BH_max_mass = halo.calculate('link(BH_central, BH_mass, "max")')
BH_closest = halo.calculate('link(BH_central, BH_central_distance, "min")')_____no_output_____
</code>
The `link` live-calculation function returns the halo with either the maximum or minimum value of an
associated property, here the `BH_mass` and `BH_central_distance` properties respectively.
Either approach disambiguates the black holes we mean (in fact, they unsurprisingly lead to
the same disambiguation):_____no_output_____
<code>
BH_max_mass == BH_closest_____no_output_____
</code>
However one doesn't always have to name a link to make use of it. The mere existence of a link
is sometimes enough. An example is the merger tree information already used. Another useful
example is when two simulations have the same initial conditions, as in the `tutorial_changa`
and `tutorial_changa_blackholes` examples; these two simulations differ only in that the latter
has AGN feedback. We can identify halos between simulations using the following syntax:_____no_output_____
<code>
SFR_in_other_sim = halo.calculate("match('tutorial_changa').SFR_histogram")
p.plot(SFR_time_bins, halo['SFR_histogram'],color='r', label="With AGN feedback")
p.plot(SFR_time_bins, SFR_in_other_sim, color='b',label="No AGN feedback")
p.legend(loc="lower right")
p.semilogy()
p.xlabel("t/Gyr")
p.ylabel("SFR/$M_{\odot}\,yr^{-1}$")_____no_output_____
</code>
The `match` syntax simply tries to follow links until it finds a halo in the named
_tangos_ context. One can use it to match halos across entire timesteps too; let's
compare the stellar masses of our objects:_____no_output_____
<code>
timestep = tangos.get_timestep("tutorial_changa/%960")
Mstar_no_AGN, Mstar_AGN = timestep.calculate_all("star_mass_profile[-1]",
"match('tutorial_changa_blackholes').star_mass_profile[-1]")
# note that we use star_mass_profile[-1] to get the last entry of the star_mass_profile array,
# as a means to get the total stellar mass from a profile
p.plot(Mstar_no_AGN, Mstar_AGN, 'k.')
p.plot([1e6,1e11],[1e6,1e11],'k-',alpha=0.3)
p.loglog()
p.xlabel("$M_{\star}/M_{\odot}$ without AGN")
p.ylabel("$M_{\star}/M_{\odot}$ with AGN")_____no_output_____
</code>
| {
"repository": "trquinn/tangos",
"path": "docs/Data exploration with python.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 15,
"size": 278299,
"hexsha": "cb0f76a7250985f464cd5f93689a8b19a4232163",
"max_line_length": 46342,
"avg_line_length": 334.4939903846,
"alphanum_fraction": 0.923395341
} |
# Notebook from DerwenAI/conda_sux
Path: spaCy_tuTorial.ipynb
# An Introduction to Natural Language in Python using spaCy_____no_output_____## Introduction
This tutorial provides a brief introduction to working with natural language (sometimes called "text analytics") in Python, using [spaCy](https://spacy.io/) and related libraries.
Data science teams in industry must work with lots of text, one of the top four categories of data used in machine learning.
Usually that's human-generated text, although not always.
Think about it: how does the "operating system" for business work? Typically, there are contracts (sales contracts, work agreements, partnerships), there are invoices, there are insurance policies, there are regulations and other laws, and so on.
All of those are represented as text.
You may run across a few acronyms: _natural language processing_ (NLP), _natural language understanding_ (NLU), _natural language generation_ (NLG) — which are roughly speaking "read text", "understand meaning", "write text" respectively.
Increasingly these tasks overlap and it becomes difficult to categorize any given feature.
The _spaCy_ framework — along with a wide and growing range of plug-ins and other integrations — provides features for a wide range of natural language tasks.
It's become one of the most widely used natural language libraries in Python for industry use cases, and has quite a large community — and with that, much support for commercialization of research advances as this area continues to evolve rapidly._____no_output_____## Getting Started
Check out the excellent _spaCy_ [installation notes](https://spacy.io/usage) for a "configurator" which generates installation commands based on which platforms and natural languages you need to support.
Some people tend to use `pip` while others use `conda`, and there are instructions for both. For example, to get started with _spaCy_ working with text in English and installed via `conda` on a Linux system:
```
conda install -c conda-forge spacy
python -m spacy download en_core_web_sm
```
BTW, the second line above is a download for language resources (models, etc.) and the `_sm` at the end of the download's name indicates a "small" model. There's also "medium" and "large", albeit those are quite large. Some of the more advanced features depend on the latter, although we won't quite be diving to the bottom of that ocean in this (brief) tutorial.
Now let's load _spaCy_ and run some code:_____no_output_____
<code>
import spacy
nlp = spacy.load("en_core_web_sm")_____no_output_____
</code>
That `nlp` variable is now your gateway to all things _spaCy_ and loaded with the `en_core_web_sm` small model for English.
Next, let's run a small "document" through the natural language parser:_____no_output_____
<code>
text = "The rain in Spain falls mainly on the plain."
doc = nlp(text)
for token in doc:
print(token.text, token.lemma_, token.pos_, token.is_stop)_____no_output_____
</code>
First we created a [doc](https://spacy.io/api/doc) from the text, which is a container for a document and all of its annotations. Then we iterated through the document to see what _spaCy_ had parsed.
Good, but it's a lot of info and a bit difficult to read. Let's reformat the _spaCy_ parse of that sentence as a [pandas](https://pandas.pydata.org/) dataframe:_____no_output_____
<code>
import pandas as pd
cols = ("text", "lemma", "POS", "explain", "stopword")
rows = []
for t in doc:
row = [t.text, t.lemma_, t.pos_, spacy.explain(t.pos_), t.is_stop]
rows.append(row)
df = pd.DataFrame(rows, columns=cols)
df_____no_output_____
</code>
Much more readable!
In this simple case, the entire document is merely one short sentence.
For each word in that sentence _spaCy_ has created a [token](https://spacy.io/api/token), and we accessed fields in each token to show:
- raw text
- [lemma](https://en.wikipedia.org/wiki/Lemma_(morphology)) – a root form of the word
- [part of speech](https://en.wikipedia.org/wiki/Part_of_speech)
- a flag for whether the word is a _stopword_ – i.e., a common word that may be filtered out_____no_output_____Next let's use the [displaCy](https://ines.io/blog/developing-displacy) library to visualize the parse tree for that sentence:_____no_output_____
<code>
from spacy import displacy
displacy.render(doc, style="dep", jupyter=True)_____no_output_____
</code>
Does that bring back memories of grade school? Frankly, for those of us coming from more of a computational linguistics background, that diagram sparks joy.
But let's backup for a moment. How do you handle multiple sentences?
There are features for _sentence boundary detection_ (SBD) – also known as _sentence segmentation_ – based on the builtin/default [sentencizer](https://spacy.io/api/sentencizer):_____no_output_____
<code>
text = "We were all out at the zoo one day, I was doing some acting, walking on the railing of the gorilla exhibit. I fell in. Everyone screamed and Tommy jumped in after me, forgetting that he had blueberries in his front pocket. The gorillas just went wild."
doc = nlp(text)
for sent in doc.sents:
print(">", sent)_____no_output_____
</code>
When _spaCy_ creates a document, it uses a principle of _non-destructive tokenization_ meaning that the tokens, sentences, etc., are simply indexes into a long array. In other words, they don't carve the text stream into little pieces. So each sentence is a [span](https://spacy.io/api/span) with a _start_ and an _end_ index into the document array:_____no_output_____
<code>
for sent in doc.sents:
print(">", sent.start, sent.end)_____no_output_____
</code>
We can index into the document array to pull out the tokens for one sentence:_____no_output_____
<code>
doc[48:54]_____no_output_____
</code>
Or simply index into a specific token, such as the verb `went` in the last sentence:_____no_output_____
<code>
token = doc[51]
print(token.text, token.lemma_, token.pos_)_____no_output_____
</code>
At this point we can parse a document, segment that document into sentences, then look at annotations about the tokens in each sentence. That's a good start._____no_output_____## Acquiring Text
Now that we can parse texts, where do we get texts?
One quick source is to leverage the interwebs.
Of course when we download web pages we'll get HTML, and then need to extract text from them.
[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) is a popular package for that.
First, a little housekeeping:_____no_output_____
<code>
import sys
import warnings
warnings.filterwarnings("ignore")_____no_output_____
</code>
### Character Encoding_____no_output_____The following shows examples of how to use [codecs](https://docs.python.org/3/library/codecs.html) and [normalize unicode](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize). NB: the example text comes from the article "[Metal umlaut](https://en.wikipedia.org/wiki/Metal_umlaut)"._____no_output_____
<code>
x = "Rinôçérôse screams flow not unlike an encyclopædia, \
'TECHNICIÄNS ÖF SPÅCE SHIP EÅRTH THIS IS YÖÜR CÄPTÅIN SPEÄKING YÖÜR ØÅPTÅIN IS DEA̋D' to Spın̈al Tap."
type(x)_____no_output_____
</code>
The variable `x` is a *string* in Python:_____no_output_____
<code>
repr(x)_____no_output_____
</code>
Its translation into [ASCII](http://www.asciitable.com/) is unusable by parsers:_____no_output_____
<code>
ascii(x)_____no_output_____
</code>
Encoding as [UTF-8](http://unicode.org/faq/utf_bom.html) doesn't help much:_____no_output_____
<code>
x.encode('utf8')_____no_output_____
</code>
Ignoring difficult characters is perhaps an even worse strategy:_____no_output_____
<code>
x.encode('ascii', 'ignore')_____no_output_____
</code>
However, one can *normalize* text, then encode…_____no_output_____
<code>
import unicodedata
unicodedata.normalize('NFKD', x).encode('ascii','ignore')_____no_output_____
</code>
Even before this normalization and encoding, you may need to convert some characters explicitly **before** parsing. For example:_____no_output_____
<code>
x = "The sky “above” the port … was the color of ‘cable television’ – tuned to the Weather Channel®"
ascii(x)_____no_output_____
</code>
Consider the results for that line:_____no_output_____
<code>
unicodedata.normalize('NFKD', x).encode('ascii', 'ignore')_____no_output_____
</code>
...which still drops characters that may be important for parsing a sentence.
So a more advanced approach could be:_____no_output_____
<code>
x = x.replace('“', '"').replace('”', '"')
x = x.replace("‘", "'").replace("’", "'")
x = x.replace('…', '...').replace('–', '-')
x = unicodedata.normalize('NFKD', x).encode('ascii', 'ignore').decode('utf-8')
print(x)_____no_output_____
</code>
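Putting the replacements and normalization together, here is a small helper (one possible sketch) that could be reused before parsing any web text:_____no_output_____
<code>
import unicodedata

def clean_text (text):
    """Replace typographic punctuation, then normalize and strip to ASCII-safe text."""
    for before, after in [('“', '"'), ('”', '"'), ('‘', "'"), ('’', "'"), ('…', '...'), ('–', '-')]:
        text = text.replace(before, after)
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8')

clean_text('The sky “above” the port … was the color of ‘cable television’ – tuned to the Weather Channel®')_____no_output_____
</code>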
### Parsing HTML_____no_output_____In the following function `get_text()` we'll parse the HTML to find all of the `<p/>` tags, then extract the text for those:_____no_output_____
<code>
from bs4 import BeautifulSoup
import requests
import traceback
def get_text (url):
buf = []
try:
soup = BeautifulSoup(requests.get(url).text, "html.parser")
for p in soup.find_all("p"):
buf.append(p.get_text())
return "\n".join(buf)
except:
print(traceback.format_exc())
sys.exit(-1)_____no_output_____
</code>
Now let's grab some text from online sources.
We can compare open source licenses hosted on the [Open Source Initiative](https://opensource.org/licenses/) site:_____no_output_____
<code>
lic = {}
lic["mit"] = nlp(get_text("https://opensource.org/licenses/MIT"))
lic["asl"] = nlp(get_text("https://opensource.org/licenses/Apache-2.0"))
lic["bsd"] = nlp(get_text("https://opensource.org/licenses/BSD-3-Clause"))
for sent in lic["bsd"].sents:
print(">", sent)_____no_output_____
</code>
One common use case for natural language work is to compare texts. For example, with those open source licenses we can download their text, parse, then compare [similarity](https://spacy.io/api/doc#similarity) metrics among them:_____no_output_____
<code>
pairs = [
["mit", "asl"],
["asl", "bsd"],
["bsd", "mit"]
]
for a, b in pairs:
print(a, b, lic[a].similarity(lic[b]))_____no_output_____
</code>
That is interesting, since the [BSD](https://opensource.org/licenses/BSD-3-Clause) and [MIT](https://opensource.org/licenses/MIT) licenses appear to be the most similar documents.
In fact they are closely related.
Admittedly, there was some extra text included in each document due to the OSI disclaimer in the footer – but this provides a reasonable approximation for comparing the licenses._____no_output_____## Natural Language Understanding
Now let's dive into some of the _spaCy_ features for NLU.
Given that we have a parse of a document, from a purely grammatical standpoint we can pull the [noun chunks](https://spacy.io/usage/linguistic-features#noun-chunks), i.e., each of the noun phrases:_____no_output_____
<code>
text = "Steve Jobs and Steve Wozniak incorporated Apple Computer on January 3, 1977, in Cupertino, California."
doc = nlp(text)
for chunk in doc.noun_chunks:
print(chunk.text)_____no_output_____
</code>
Not bad. The noun phrases in a sentence generally provide more information content – as a simple filter used to reduce a long document into a more "distilled" representation.
We can take this approach further and identify [named entities](https://spacy.io/usage/linguistic-features#named-entities) within the text, i.e., the proper nouns:_____no_output_____
<code>
for ent in doc.ents:
print(ent.text, ent.label_)_____no_output_____
</code>
The _displaCy_ library provides an excellent way to visualize named entities:_____no_output_____
<code>
displacy.render(doc, style="ent", jupyter=True)_____no_output_____
</code>
If you're working with [knowledge graph](https://www.akbc.ws/) applications and other [linked data](http://linkeddata.org/), your challenge is to construct links between the named entities in a document and other related information for the entities – which is called [entity linking](http://nlpprogress.com/english/entity_linking.html).
Identifying the named entities in a document is the first step in this particular kind of AI work.
For example, given the text above, one might link the `Steve Wozniak` named entity to a [lookup in DBpedia](http://dbpedia.org/page/Steve_Wozniak)._____no_output_____In more general terms, one can also link _lemmas_ to resources that describe their meanings.
For example, in an early section we parsed the sentence `The gorillas just went wild` and were able to show that the lemma for the word `went` is the verb `go`. At this point we can use a venerable project called [WordNet](https://wordnet.princeton.edu/) which provides a lexical database for English – in other words, it's a computable thesaurus.
There's a _spaCy_ integration for WordNet called
[spacy-wordnet](https://github.com/recognai/spacy-wordnet) by [Daniel Vila Suero](https://twitter.com/dvilasuero), an expert in natural language and knowledge graph work.
Then we'll load the WordNet data via NLTK (these things happen):_____no_output_____
<code>
import nltk
nltk.download("wordnet")_____no_output_____
</code>
Note that _spaCy_ runs as a "pipeline" and allows means for customizing parts of the pipeline in use.
That's excellent for supporting really interesting workflow integrations in data science work.
Here we'll add the `WordnetAnnotator` from the _spacy-wordnet_ project:_____no_output_____
<code>
!pip install spacy-wordnet_____no_output_____from spacy_wordnet.wordnet_annotator import WordnetAnnotator
print("before", nlp.pipe_names)
if "WordnetAnnotator" not in nlp.pipe_names:
nlp.add_pipe(WordnetAnnotator(nlp.lang), after="tagger")
print("after", nlp.pipe_names)_____no_output_____
</code>
Within the English language, some words are infamous for having many possible meanings. For example, click through the results online in a [WordNet](http://wordnetweb.princeton.edu/perl/webwn?s=star&sub=Search+WordNet&o2=&o0=1&o8=1&o1=1&o7=&o5=&o9=&o6=&o3=&o4=&h=) search to find the meanings related to the word `withdraw`.
Now let's use _spaCy_ to perform that lookup automatically:_____no_output_____
<code>
token = nlp("withdraw")[0]
token._.wordnet.synsets()_____no_output_____token._.wordnet.lemmas()_____no_output_____token._.wordnet.wordnet_domains()_____no_output_____
</code>
Again, if you're working with knowledge graphs, those "word sense" links from WordNet could be used along with graph algorithms to help identify the meanings for a particular word. That can also be used to develop summaries for larger sections of text through a technique called _summarization_. It's beyond the scope of this tutorial, but an interesting application currently for natural language in industry._____no_output_____Going in the other direction, if you know _a priori_ that a document was about a particular domain or set of topics, then you can constrain the meanings returned from _WordNet_. In the following example, we want to consider NLU results that are within Finance and Banking:_____no_output_____
<code>
domains = ["finance", "banking"]
sentence = nlp("I want to withdraw 5,000 euros.")
enriched_sent = []
for token in sentence:
# get synsets within the desired domains
synsets = token._.wordnet.wordnet_synsets_for_domain(domains)
if synsets:
lemmas_for_synset = []
for s in synsets:
# get synset variants and add to the enriched sentence
lemmas_for_synset.extend(s.lemma_names())
enriched_sent.append("({})".format("|".join(set(lemmas_for_synset))))
else:
enriched_sent.append(token.text)
print(" ".join(enriched_sent))_____no_output_____
</code>
That example may look simple but, if you play with the `domains` list, you'll find that the results have a kind of combinatorial explosion when run without reasonable constraints.
Imagine having a knowledge graph with millions of elements: you'd want to constrain searches where possible to avoid having every query take days/weeks/months/years to compute._____no_output_____Sometimes the problems encountered when trying to understand a text – or better yet when trying to understand a _corpus_ (a dataset with many related texts) – become so complex that you need to visualize it first.
Here's an interactive visualization for understanding texts: [scattertext](https://spacy.io/universe/project/scattertext), a product of the genius of [Jason Kessler](https://twitter.com/jasonkessler).
To install:
```
conda install -c conda-forge scattertext
```
Let's analyze text data from the party conventions during the 2012 US Presidential elections. It may take a minute or two to run, but the results from all that number crunching is worth the wait._____no_output_____
<code>
!pip install scattertext_____no_output_____import scattertext as st
if "merge_entities" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_entities"))
if "merge_noun_chunks" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_noun_chunks"))
convention_df = st.SampleCorpora.ConventionData2012.get_data()
corpus = st.CorpusFromPandas(convention_df,
category_col="party",
text_col="text",
nlp=nlp).build()_____no_output_____
</code>
Once you have the `corpus` ready, generate an interactive visualization in HTML:_____no_output_____
<code>
html = st.produce_scattertext_explorer(
corpus,
category="democrat",
category_name="Democratic",
not_category_name="Republican",
width_in_pixels=1000,
metadata=convention_df["speaker"]
)_____no_output_____
</code>
Now we'll render the HTML – give it a minute or two to load, it's worth the wait..._____no_output_____
<code>
from IPython.display import IFrame
from IPython.core.display import display, HTML
import sys
IN_COLAB = "google.colab" in sys.modules
print(IN_COLAB)_____no_output_____
</code>
**NB: use the following cell on Google Colab:**_____no_output_____
<code>
if IN_COLAB:
display(HTML("<style>.container { width:98% !important; }</style>"))
display(HTML(html))_____no_output_____
</code>
**NB: use the following cell instead on Jupyter in general:**_____no_output_____
<code>
file_name = "foo.html"
with open(file_name, "wb") as f:
f.write(html.encode("utf-8"))
IFrame(src=file_name, width = 1200, height=700)_____no_output_____
</code>
Imagine if you had text from the past three years of customer support for a particular product in your organization. Suppose your team needed to understand how customers have been talking about the product? This _scattertext_ library might come in quite handy! You could cluster (k=2) on _NPS scores_ (a customer evaluation metric) then replace the Democrat/Republican dimension with the top two components from the clustering._____no_output_____## Summary
Five years ago, if you’d asked about open source in Python for natural language, a default answer from many people working in data science would've been [NLTK](https://www.nltk.org/).
That project includes just about everything but the kitchen sink and has components which are relatively academic.
Another popular natural language project is [CoreNLP](https://stanfordnlp.github.io/CoreNLP/) from Stanford.
Also quite academic, albeit powerful, though _CoreNLP_ can be challenging to integrate with other software for production use.
Then a few years ago everything in this natural language corner of the world began to change.
The two principal authors for _spaCy_ -- [Matthew Honnibal](https://twitter.com/honnibal) and [Ines Montani](https://twitter.com/_inesmontani) -- launched the project in 2015 and industry adoption was rapid.
They focused on an _opinionated_ approach (do what's needed, do it well, no more, no less) which provided simple, rapid integration into data science workflows in Python, as well as faster execution and better accuracy than the alternatives.
Based on those priorities, _spaCy_ became sort of the opposite of _NLTK_.
Since 2015, _spaCy_ has consistently focused on being an open source project (i.e., depending on its community for directions, integrations, etc.) and being commercial-grade software (not academic research).
That said, _spaCy_ has been quick to incorporate the SOTA advances in machine learning, effectively becoming a conduit for moving research into industry.
It's important to note that machine learning for natural language got a big boost during the mid-2000's as Google began to win international language translation competitions.
Another big change occurred during 2017-2018 when, following the many successes of _deep learning_, those approaches began to out-perform previous machine learning models.
For example, see the [ELMo](https://arxiv.org/abs/1802.05365) work on _language embedding_ by Allen AI, followed by [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) from Google, and more recently [ERNIE](https://medium.com/syncedreview/baidus-ernie-tops-google-s-bert-in-chinese-nlp-tasks-d6a42b49223d) by Baidu -- in other words, the search engine giants of the world have gifted the rest of us with a Sesame Street repertoire of open source embedded language models based on deep learning, which is now _state of the art_ (SOTA).
Speaking of which, to keep track of SOTA for natural language, keep an eye on [NLP-Progress](http://nlpprogress.com/) and [Papers with Code](https://paperswithcode.com/sota).
The use cases for natural language have shifted dramatically over the past two years, after deep learning techniques came to the fore.
Circa 2014, a natural language tutorial in Python might have shown _word count_ or _keyword search_ or _sentiment detection_ where the target use cases were relatively underwhelming.
Circa 2019 we're talking about analyzing thousands of documents for vendor contracts in an industrial supply chain optimization ... or hundreds of millions of documents for policy holders of an insurance company, or gazillions of documents regarding financial disclosures.
More contemporary natural language work tends to be in NLU, often to support construction of _knowledge graphs,_ and increasingly in NLG where large numbers of similar documents can be summarized at human scale.
The [spaCy Universe](https://spacy.io/universe) is a great place to check for deep-dives into particular use cases, and to see how this field is evolving. Some selections from this "universe" include:
- [Blackstone](https://spacy.io/universe/project/blackstone) – parsing unstructured legal texts
- [Kindred](https://spacy.io/universe/project/kindred) – extracting entities from biomedical texts (e.g., Pharma)
- [mordecai](https://spacy.io/universe/project/mordecai) – parsing geographic information
- [Prodigy](https://spacy.io/universe/project/prodigy) – human-in-the-loop annotation to label datasets
- [spacy-raspberry](https://spacy.io/universe/project/spacy-raspberry) – Raspberry PI image for running _spaCy_ and deep learning on edge devices
- [Rasa NLU](https://spacy.io/universe/project/rasa) – Rasa integration for voice apps
Also, a couple super new items to mention:
- [spacy-pytorch-transformers](https://explosion.ai/blog/spacy-pytorch-transformers) to fine tune (i.e., use _transfer learning_ with) the Sesame Street characters and friends: BERT, GPT-2, XLNet, etc.
- [spaCy IRL 2019](https://irl.spacy.io/2019/) conference – check out videos from the talks!
There's so much more that can be done with _spaCy_ – hopefully this tutorial provides an introduction. We wish you all the best in your natural language work._____no_output_____
| {
"repository": "DerwenAI/conda_sux",
"path": "spaCy_tuTorial.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 22,
"size": 32790,
"hexsha": "cb1112c12ce949ace0dac753dc5c3244fe55b553",
"max_line_length": 574,
"avg_line_length": 35.3721682848,
"alphanum_fraction": 0.6212869777
} |
# Notebook from biqar/Fall-2020-ITCS-8010-MLGraph
Path: assignments/assignment_0/gnp_simulation_experiment.ipynb
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\Chi}{\mathcal{X}}
\newcommand{\R}{\rm I\!R}
\newcommand{\sign}{\text{sign}}
\newcommand{\Tm}{\mathbf{T}}
\newcommand{\Xm}{\mathbf{X}}
\newcommand{\Im}{\mathbf{I}}
\newcommand{\Ym}{\mathbf{Y}}
$
### ITCS8010
# G_np Simulation Experiment
In this experiment I would like to replicate the behaviour of the `Fraction of nodes in largest CC` and the `Fraction of isolated nodes` over `p*log(n)` in the `Erdös-Renyi random graph model`._____no_output_____
<code>
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import collections as collec
%matplotlib inline_____no_output_____# Fraction of nodes in largest CC vs. p*(n-1)
n = 100000
x1 = []
y1 = []
for kave in np.arange(0.5, 3.0, 0.1):
G = nx.fast_gnp_random_graph(n, kave / (n - 1))
largest_cc = max(nx.connected_components(G), key=len)
x1.append(kave)
y1.append(len(largest_cc)/n)
# print(kave)
# print(len(largest_cc)/n)
fig, ax = plt.subplots()
ax.plot(x1, y1)
ax.set(xlabel='p*(n-1)', ylabel='Fraction of nodes in largest CC',
       title='Fraction of nodes in largest CC vs. p*(n-1)')
ax.grid()
# fig.savefig("test.png")
plt.show()_____no_output_____# Fraction of isolated nodes Vs. {p*log(n)}
x2 = []
y2 = []
for kave in np.arange(0.3, 1.5, 0.1):
p = kave / (n - 1)
G = nx.fast_gnp_random_graph(n, p)
isolates = len(list(nx.isolates(G)))
x2.append(p * np.log10(n))
y2.append(isolates/n)
# print(kave)
# print(isolates/n)
fig, ax = plt.subplots()
ax.plot(x2, y2)
ax.set(xlabel='p*log(n)', ylabel='Fraction of isolated nodes',
title='Fraction of isolated nodes Vs. p*log(n)')
ax.grid()
# fig.savefig("test.png")
plt.show()_____no_output_____# Fraction of isolated nodes Vs. {p*log(n)}
x2 = []
y2 = []
for kave in np.arange(0.3, 10, 0.1):
p = kave / (n - 1)
G = nx.fast_gnp_random_graph(n, p)
isolates = len(list(nx.isolates(G)))
x2.append(p * np.log10(n))
y2.append(isolates/n)
# print(kave)
# print(isolates/n)
fig, ax = plt.subplots()
ax.plot(x2, y2)
ax.set(xlabel='p*log(n)', ylabel='Fraction of isolated nodes',
title='Fraction of isolated nodes Vs. p*log(n)')
ax.grid()
# fig.savefig("test.png")
plt.show()_____no_output_____
</code>
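Before interpreting these plots, it helps to overlay the analytic expectation. In G(n,p) a given node is isolated with probability $(1-p)^{n-1} \approx e^{-p(n-1)}$, so that is also the expected fraction of isolated nodes; likewise, the transition in the first plot near p*(n-1) = 1 is the classic giant-component threshold [1]. A sketch of the overlay, reusing `n` from above:_____no_output_____
<code>
# empirical fraction of isolated nodes vs. the analytic value (1-p)^(n-1)
x3, y3_emp, y3_theory = [], [], []
for kave in np.arange(0.3, 10, 0.1):
    p = kave / (n - 1)
    G = nx.fast_gnp_random_graph(n, p)
    x3.append(p * np.log10(n))
    y3_emp.append(len(list(nx.isolates(G))) / n)
    y3_theory.append((1 - p) ** (n - 1))
fig, ax = plt.subplots()
ax.plot(x3, y3_emp, label='simulation')
ax.plot(x3, y3_theory, '--', label='$(1-p)^{n-1}$')
ax.set(xlabel='p*log(n)', ylabel='Fraction of isolated nodes',
       title='Simulation vs. analytic isolated-node fraction')
ax.legend()
ax.grid()
plt.show()_____no_output_____
</code>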
### Observation:
1. The result of the first experiment (i.e. the `fraction of nodes in largest CC` over varying `p*(n-1)`) shows behaviour quite similar to what we observed in the class slide.
2. The second experiment (i.e. plotting the `fraction of isolated nodes` against `p*log(n)`) gives a somewhat different result compared to the one we found in the class slide. When we plot the graph for p*(n-1) within the range 0.3 to 1.5 we don't get the long tail, which we do get when we increase the range of p*(n-1) from 0.3 to 10. Note that in this experiment we still loop over different values of `p*(n-1)`, but plot on a `p*log(n)` scale. I was not sure of the reason behind this behaviour at first, but the analytic overlay sketched above suggests one: the isolated-node fraction decays like $(1-p)^{n-1} \approx e^{-p(n-1)}$, so the "long tail" is simply this exponential decay becoming visible once the range of p*(n-1) is wide enough._____no_output_____## Key Network Properties
Now we would like to use the networkx [[2]](https://networkx.github.io/documentation/stable/) library to observe the values of the key network properties in the Erdös-Renyi random graph._____no_output_____
<code>
# plotting degree distribution
n1 = 180
p1 = 0.11
G = nx.fast_gnp_random_graph(n1, p1)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True) # degree sequence
degreeCount = collec.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
fig, ax = plt.subplots()
plt.bar(deg, cnt, width=0.80, color="b")
plt.title("Degree Histogram")
plt.ylabel("Count")
plt.xlabel("Degree")
ax.set_xticks([d + 0.4 for d in deg])
ax.set_xticklabels(deg)
# draw graph in inset
plt.axes([0.4, 0.4, 0.5, 0.5])
Gcc = G.subgraph(sorted(nx.connected_components(G), key=len, reverse=True)[0])
pos = nx.spring_layout(G)
plt.axis("off")
nx.draw_networkx_nodes(G, pos, node_size=20)
nx.draw_networkx_edges(G, pos, alpha=0.4)
plt.show()
# diameter and path length
dia = nx.diameter(G)
print(dia)
avg_path_len = nx.average_shortest_path_length(G)
print(avg_path_len)_____no_output_____
</code>
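Another key property networkx can report is the clustering coefficient. For G(n,p) its expected value is simply p, because two neighbours of a node are themselves connected with probability p. A short sketch on the same graph as above (n1 = 180, p1 = 0.11):_____no_output_____
<code>
# clustering coefficients: for G(n,p) both should come out close to p1 = 0.11
print(nx.average_clustering(G)) # mean of the local clustering coefficients
print(nx.transitivity(G)) # global clustering (transitivity)_____no_output_____
</code>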
# References
[1] Erdős, Paul, and Alfréd Rényi. 1960. “On the Evolution of Random Graphs.” Bull. Inst. Internat. Statis. 38 (4): 343–47.
[2] NetworkX, “Software for Complex Networks,” https://networkx.github.io/documentation/stable/, 2020, accessed: 2020-10._____no_output_____
| {
"repository": "biqar/Fall-2020-ITCS-8010-MLGraph",
"path": "assignments/assignment_0/gnp_simulation_experiment.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 3,
"size": 126929,
"hexsha": "cb11f0fbb7db96aaf22fc95753f99463572792c6",
"max_line_length": 64656,
"avg_line_length": 445.3649122807,
"alphanum_fraction": 0.9407385231
} |
# Notebook from maximcondon/Project_BabyNames
Path: Project_1_Babynames.ipynb
# Project 1: Babynames_____no_output_____## I. Characterise One File
### 1. Read the data
- Read the file yob2000.txt
- Name the columns
- Print the first 10 entries_____no_output_____
<code>
import pandas as pd
from matplotlib import pyplot as plt_____no_output_____popular_names = pd.read_csv('yob2000.csv',
names = ['Names', 'Sex', 'Birth Count'])_____no_output_____len(popular_names)_____no_output_____popular_names.head(10)_____no_output_____top_1000 = popular_names.sort_values(by = 'Birth Count',
ascending=False).reset_index().drop('index', axis=1)_____no_output_____top_1000.head(10)_____no_output_____
</code>
### 2. Calculate total births
- Calculate the sum of the birth count column in the file yob2000.txt.\_____no_output_____
<code>
top_1000['Birth Count'].sum()_____no_output_____
</code>
### 3. Separate boys / girls
- Calculate separate sums for boys and girls.
- Plot both sums in a bar plot_____no_output_____
<code>
top_1000.groupby('Sex')['Birth Count'].sum()_____no_output_____plot_boys_girls = top_1000.groupby('Sex')['Birth Count'].sum()_____no_output_____plot_boys_girls.plot.bar()
plt.ylabel('Birth Count')
plt.title('Total births of females and males in Year 2000')
plt.show()_____no_output_____
</code>
But there's a greater amount of female names!_____no_output_____
<code>
top_1000['Sex'].value_counts() # counts column values_____no_output_____
</code>
### 4. Frequent names
- Count how many names occur at least 1000 times in the file yob2000.txt._____no_output_____
<code>
top_1000[top_1000['Birth Count'] > 1000].head(10)_____no_output_____top_1000[top_1000['Birth Count'] > 1000]['Birth Count'].count()_____no_output_____
</code>
### 5. Relative amount
- Create a new column containing the percentage of a name on the total births of that year.
- Verify that the sum of percentages is 100%._____no_output_____
<code>
(top_1000['Birth Count']/(top_1000['Birth Count'].sum()) * 100).head()_____no_output_____top_1000['Percentage of total count'] = top_1000['Birth Count']/(top_1000['Birth Count'].sum()) * 100_____no_output_____top_1000.head()_____no_output_____top_1000['Percentage of total count'].sum().round()_____no_output_____
</code>
### 6. Search your name
- Identify and print all lines containing your name in the year 2000._____no_output_____
<code>
top_1000[top_1000['Names'].str.contains('Max')]_____no_output_____
</code>
### 7. Bar plot
- Create a bar plot showing 5 selected names for the year 2000._____no_output_____
<code>
peppermint = top_1000.set_index('Names').loc[['Max', 'Eric','Josh','Daniela','Michael']]_____no_output_____peppermint_____no_output_____peppermint_5 = peppermint.groupby('Names')[['Birth Count']].sum()
peppermint_5_____no_output_____peppermint_5.plot.bar(stacked=True, colormap='Accent')_____no_output_____
</code>
## II. Characterize all files
### 1. Read all names
To read the complete dataset, you need to loop though all file names:
yob1880.txt
yob1881.txt
yob1882.txt
...
Complete the code below by inserting _csv, data````df, names=['name', 'gender', 'count'], y and 2017:
years = range(1880, ____, 10)
data = []
for y in years:
fn = f'yob{_____}.txt'
df = pd.read____(fn, ____)
df['year'] = y
data.append(____)
df = pd.concat(____)
Run the code and check the size of the resulting data frame.
Hint: In addition to some pandas functions, you may need to look up Python format strings._____no_output_____
<code>
years = range(1880, 2017)
data = []
for y in years:
fn = f'yob{y}.txt'
df = pd.read_csv(fn, names =['Names', 'Sex', 'Birth Count'])
df['year'] = y
data.append(df)
usa_names = pd.concat(data)_____no_output_____usa_names.head(10)_____no_output_____len(usa_names)_____no_output_____
</code>
### 2. Plot a time series
- extract all rows containing your name from the variable df
- plot the number of babies having your name and gender over time
- make the plot nicer by adding row/column labels and a title
- change the color and thickness of the line
- save the plot as a high-resolution diagram_____no_output_____
<code>
my_name = usa_names[(usa_names['Names']=='Max')
& (usa_names['Sex'] == 'M')]
my_name.head(10)_____no_output_____my_name = my_name.set_index(['Names', 'Sex', 'year']).stack()
my_name = my_name.unstack((0,1,3))
my_name.head(10)_____no_output_____plt.plot(my_name)_____no_output_____plt.plot(my_name, linewidth=3, color= 'red')
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Popularity of the name Max over time')
plt.savefig('Max_over_time.png', dpi = 300)
plt.show()_____no_output_____
</code>
### 3. Name diversity
- Have the baby names become more diverse over time?
- What assumptions is your calculation based upon?_____no_output_____
<code>
usa_names.head(5)_____no_output_____name_diversity = usa_names.groupby('year')[['Names']].count()_____no_output_____name_diversity.head()_____no_output_____plt.plot(name_diversity)
plt.xlabel('Year')
plt.ylabel('Number of different names')
plt.title('Variation of the number of given names over time')
plt.show()_____no_output_____
</code>
The SSA files that we are extracting our data from are for the 'Top 1000' names, therefore, there are a certain number of unique names (names with a yearly frequency of less than 5) that will not be included in the data.
Our calculation essentially assumes that the number of names that has a frequency of less than 5 in the 1880s up until 2017 has probably increased too, or are at least equal! i.e. The number of names not present in the Top 1000 list does not affect the data enough that we can't conclude that in the present day there is a greater amount of name diversity._____no_output_____### 4. Long names
- add an extra column that contains the length of the name
- print the 10 longest names to the screen.
Hint: If having the name in the index was useful so far, it is not so much useful for this task. With df.reset_index(inplace=True) you can move the index to a regular column._____no_output_____
<code>
usa_names.head()_____no_output_____long_names = list()
for i in usa_names['Names']:
long_names.append(len(i))
usa_names['Length of name'] = long_names
usa_names.head(5)_____no_output_____long_names_10 = usa_names.sort_values(by='Length of name', ascending=False).head(10)
long_names_10_____no_output_____
</code>
## III. Plot Celebrities
### 1. Plotting Madonna
- plot time lines of names of celebrities
- try actors, presidents, princesses, Star Wars & GoT characters, boot camp participants…
Hint: When was the hit single “Like a Prayer” released?_____no_output_____
<code>
usa_names.drop(columns='Length of name').head()_____no_output_____celebrity = usa_names[usa_names['Names'] == 'Madonna']
celebrity = celebrity.drop(columns='Length of name')
celeb_stacked = celebrity.set_index(['Names', 'Sex', 'year']).stack()
madonna = celeb_stacked.unstack((0,1,3))_____no_output_____plt.plot(madonna, linewidth=2.5)
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Popularity of Madonna year on year')
plt.show()_____no_output_____
</code>
### 2. Total births over time
- create a plot that shows the total birth rate in the U.S. over time
- plot the total birth rate for girls/boys separately_____no_output_____
<code>
year_sum = usa_names.groupby('year')['Birth Count'].sum()
year_sum = pd.DataFrame(year_sum)
year_sum.head()_____no_output_____year_sum.plot()
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Total Birth Rate in the USA over time')
plt.show()_____no_output_____usa_names.head()_____no_output_____usa_females_males = usa_names.groupby(['year','Sex'])['Birth Count'].sum().unstack()
usa_females_males.head()_____no_output_____usa_names_males = usa_females_males.groupby('year')['M'].sum()
usa_names_females = usa_females_males.groupby('year')['F'].sum()
plt.plot(year_sum)
plt.plot(usa_names_males)
plt.plot(usa_names_females)
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Total, female, and male birth count year on year')
plt.show()_____no_output_____
</code>
### 3. Normalize
- divide the number of births by the total number of births in each year to obtain the relative frequency
- plot the time series of your name or the celebrity names again.
Hint: To reshape the data for plotting, you may find a combination of df.groupby( ) and df.unstack( ) useful._____no_output_____
<code>
year_sum = usa_names.groupby('year')[['Birth Count']].sum().reset_index()
year_sum.head()_____no_output_____usa_names = usa_names.drop(columns='Length of name')
usa_names.head()_____no_output_____
</code>
#### Now let's merge! Almost always 'left' and you will merge 'on' a point they have in common, eg year!
Can change sufixes too!_____no_output_____
<code>
merged_usa_names = usa_names.merge(year_sum, how='left', on='year',
suffixes=('_name', '_total'))
merged_usa_names.head(10)_____no_output_____merged_usa_names['Name Rel. %'] = merged_usa_names['Birth Count_name']/merged_usa_names['Birth Count_total']*100_____no_output_____merged_usa_names = merged_usa_names.sort_values(by='Name Rel. %', ascending=False)
merged_usa_names.head(10)_____no_output_____my_name = merged_usa_names[(merged_usa_names['Names']=='Max')
& (merged_usa_names['Sex'] == 'M')]
my_name.head()_____no_output_____my_name = my_name.drop(columns=['Birth Count_name','Birth Count_total'])_____no_output_____my_names_stacked = my_name.set_index(['Names','Sex','year']).stack()
my_names_stacked.head()_____no_output_____my_name = my_names_stacked.unstack((0,1,3))
my_name.head()_____no_output_____plt.plot(my_name, linewidth=3, color= 'green')
plt.xlabel('Year')
plt.ylabel('Name Relativity %')
plt.title('Percentage of people named Max relative to the total number of births over time')
plt.show()_____no_output_____
</code>
## II. Letter Statistics
### 1. First letter statistics
- use df.apply(func) to add an extra column that contains the first letter of the name.
- count how many names start with ‘A’.
- plot the relative occurence of initials over time.
- what can you conclude from your observations?
Hint: You may need to iterate over the names with df.iterrows(). A more elegant solution is possible by writing a Python function and using df.apply()_____no_output_____
<code>
merged_usa_names.head()_____no_output_____def initial(name):
return name[0]
merged_usa_names['initial'] = merged_usa_names['Names'].apply(initial)_____no_output_____merged_usa_names.head()_____no_output_____merged_usa_names[merged_usa_names['initial']== 'A']['initial'].count()_____no_output_____first_letter_sum = merged_usa_names.groupby('year')['initial'].value_counts()_____no_output_____first_letter_sum.head()_____no_output_____df = pd.DataFrame(first_letter_sum)
df.head()_____no_output_____df = df.reset_index(0)_____no_output_____df.columns=['year','sum of initials']_____no_output_____df = df.reset_index()
df.head()_____no_output_____merge = merged_usa_names.merge(df, how='left', on=['year', 'initial'])_____no_output_____merge.head()_____no_output_____merge['initial Rel. %'] = merge['sum of initials']/merge['Birth Count_total']*100_____no_output_____merge.head()_____no_output_____merge = merge.sort_values(by='initial Rel. %', ascending=False)
merge.head()_____no_output_____initials = merge.drop(columns=['Birth Count_name', 'Birth Count_total', 'Name Rel. %', 'Sex', 'Names'])_____no_output_____initials.head()_____no_output_____initials = initials.drop_duplicates()_____no_output_____#initials_s = initials.set_index(['sum of initials', 'initial', 'year']).stack()_____no_output_____#initials_s.unstack((0,1, 3))_____no_output_____#plt.plot(initials_s, linewidth=3)
_____no_output_____
</code>
### 2. Last letter statistics
- try the same for the final character
- separate by boys/girls
- what can you conclude from your observations?_____no_output_____
<code>
def last_letter(name):
return name[-1]
merged_usa_names['last letter'] = merged_usa_names['Names'].apply(last_letter)
merged_usa_names.head(5)_____no_output_____
</code>
### 3. e-rich Names
- Find all names that contain the character ‘e’ at least four times._____no_output_____
<code>
usa_names.head()_____no_output_____
</code>
### USE .APPLY to apply a function!
_____no_output_____
<code>
def four_e(input):
check = []
for i in input:
if i == 'e' or i == 'E':
check.append(i)
return len(check)
usa_names['e occurences'] = usa_names['Names'].apply(four_e)
usa_names.head()_____no_output_____many_es = usa_names[usa_names['e occurences'] > 3]
many_es.head()_____no_output_____len(many_es)_____no_output_____
</code>
| {
"repository": "maximcondon/Project_BabyNames",
"path": "Project_1_Babynames.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 336922,
"hexsha": "cb1200e5f226e06d999bb75bc96554aa4c60f417",
"max_line_length": 32684,
"avg_line_length": 64.7677816225,
"alphanum_fraction": 0.7013789542
} |
# Notebook from markjdugger/arcgis-python-api
Path: guide/14-deep-learning/how_deeplabv3_works.ipynb
# How DeepLabV3 Works
## Semantic segmentation_____no_output_____Semantic segmentation, also known as pixel-based classification, is an important task in which we classify each pixel of an image as belonging to a particular class. Our guides [How u-net works](https://developers.arcgis.com/python/guide/how-unet-works/) and [How PSPNet works](https://developers.arcgis.com/python/guide/how-pspnet-works/) give an intuition on how Semantic Segmentation works.
*Note:* To understand contents of this guide, we assume that you have some basic understanding of the convolutional neural networks (CNN) concepts. You can refresh your CNN knowledge by going through this short paper “A guide to convolution arithmetic for deep learning”. _____no_output_____## Challenge with Deep Convolutional Neural Networks (DCNNs)_____no_output_____Fully Convolutional Neural Network (FCN) is a DCNN used for Semantic Segmentation. A challenge with using FCN on images for segmentation tasks is that, input feature map becomes smaller while traversing through the network (bunch of convolutional & pooling layers). This causes loss of information about the images and results in output where: _____no_output_____ - Predictions are of low resolution
- Object boundaries are fuzzy
_____no_output_____<center><img src="../../static/img/dcnn_img.png" height="300" width="800" /></center>
<br>
<center><b>Figure 1.</b> Repeated combination of convolution & pooling layers reduces the spatial resolution of the feature maps as the input traverses through the DCNN.</center>_____no_output_____## History of DeepLab_____no_output_____**DeepLabv1** : Uses Atrous Convolution and Fully Connected Conditional Random Field (CRF) to control the resolution at which image features are computed.
**DeepLabv2** : Uses Atrous Spatial Pyramid Pooling (ASPP) to consider objects at different scales and segment with much improved accuracy.
**DeepLabv3** : Apart from using Atrous Convolution, it uses an improved ASPP module by including batch normalization and image-level features. It gets rid of CRF (Conditional Random Field) as used in V1 and V2.
These improvements help in extracting dense feature maps for long-range contexts. This increases the receptive field exponentially without reducing/losing the spatial dimension and improves performance of segmentation tasks.
_____no_output_____## Atrous Convoltion (Dilated Convolution)_____no_output_____Atrous Convolution is introduced in DeepLab as a tool to adjust/control effective field-of-view of the convolution. It uses a parameter called ‘atrous/dilation rate’ that adjusts field-of-view. It is a simple yet powerful technique to make field of view of filters larger, without impacting computation or number of parameters.
Atrous Convolution is similar to the traditional convolution except the filter is upsampled by inserting zeros between two successive filter values along each spatial dimension. r - 1 zeros are inserted where r is atrous/dilation rate. This is equivalent to creating r − 1 holes between two consecutive filter values in each spatial dimension.
In the diagram below, the filter of size 3 with a dilation rate of 2 is applied to calculate the output. We can visualize filter values are separated by one hole since the dilation rate is 2. If the dilation rate r is 1, it will be Standard convolution._____no_output_____<figure>
<img src="../../static/img/dilated.gif", align="left">
<img src="../../static/img/normal_convolution.gif">
<center>
<figcaption><b>Figure 2</b>. Animation of convolution with <b>dilation=2 (left)</b> and <b>dilation=1(right)</b>. When dilation=1, it is just the standard convolution operation.</figcaption>
</center>
</figure>_____no_output_____<figure>
<img src="data:image/PNG; base64, iVBORw0KGgoAAAANSUhEUgAABa8AAAH+CAYAAACbYPLRAAAABHNCSVQICAgIfAhkiAAAIABJREFUeJzsvWuspWd5n3/d9/Ou4z7NnsEe4xNgm4akiEM4tOCYBCECSipSJQpVFTWR8qHlQ9WDguoqUEVqmlZtVZX0Q6maPzSRWpVGbRGKVEjTxhzKqWlCCTE4DuADxmPPce/Zx7XW+9z3/8P9rDUmYAePk8zQ3JfZ0szee+31vO8sIc1v/eb6ibs7jaOjI0SE0WiEiJAkSZIkSZIkSZIkSZIkSZIkfxK4O3t7e6ytrVFK+aav6zU4U5IkSZIkSZIkSZIkSZIkSZI8IxleJ0mSJEmSJEmSJEmSJEmSJNcd3dU+8D3/8ue5vHsJFcesIqpg0ImAg+PM5gsu7uzQqVAXCwbDgqogUug6QQTMBRXFqZg5okqnHW6OU5FSEClQoSsF8wpaQRzRAUJBHBBANH4hjpu1kzruxO/dMIvzOj3m8TWrFRzEHSmCFmVQBkChDDqKCC4eP1o71EG9IEVx8fhw4mcQ90FVEDWqO0VK+7ojIriAioA77hKPUUFEsGqYGaqKqoCAET9cbfn98b3VDCnSLtnBHHfHcEyMTjvEQUSJmxD323D2Dg/ZP17g7brfee/Pceuttz/X11OSJEmSJEmSJEmSJEmSJMkfC1cdXs9mhxweXEYjg42AGYdlaCyO45Ri7O7uUlAOD53SFToRSqcRuiIggrtBUQSNMBrQAqjgLiiKxgOAHhTcFZGCEoGwiGAugFNEMHdUtQXZhnvFDcwNswivRRSrRhGNQNkK9EBniMKiryiOKO37pZ1RERUqFsG2g7hFsOyCu6Ndice0eybE5yNoj6zdEYQOBUIz7piH70XxuP64JNRp9wqKKEtZeaW2MJz4w1DBMAqCW9wX2tM6guPMj4853D/GXXG3CPCTJEmSJEmSJEmSJEmSJEmuE646vPZl1bg1h2s1RCU+3CNHFRiPR/R1jcODIxRlMe+RThGBosJyF9Jb0BqhK2gRpAi19iiKY7RoGFXFzJDWOPYWLGPRmnb3aHGzbFw7jgERnmMRthfATegGHQIULREIawEFLVHpXoXRLaCXFkCbO+a2TIVRicDZRQCl2upuLTNlVCI8xmnnAMSikW01WtarJjmsbiTRml4+1ltQjsgq9HZvT+jtc22L0z3eHJAW8FMNasXNMXFcFXKgM0mSJEmSJEmSJEmSJEmS64jnEF6zrFzTElbAcItg26GFr8JkMsWqc3R0DO4s+oqIxsPVooXdAlppIax7oe+JtjXLkLe1kttzqPofUoQ0BQdE2O1+ReEh0eDGhaIDjGgcSxePU23BuDhG3xrPGm1posVdWsDrDmZ11Z5eXiqEXqTWHncF7XCvgFNKoVqk1cszRggeYXQ1X+lWRCSuu/0+Au/2hoGDqjZLSuhAzO1K61pau9stdCEi9LVv9y9OqUoE7FFtX50+SZIkSZIkSZIkSZIkSZLkeuGqw2vrHVDcHFFHVaINTVOBEOG0e0UQptM1am8sfEY/n1OtMhh0DIYFsQorDUYTW7i0pjEtPpaoLrdfR0DruEcwuwyE+9rjTdWxDGxd2jlcQLWF1AVXaY3spS5b8NboRgQzVqoSEad6DW+1ltZ+9lCFSATf5lcCaBdDxCKMbt8bIbNEqAwRvhON7WCp94gWersKvLWrHcOtAgVEqDV+fllF58Tji+DLLU6HUgoiUGuoXCDc4qodmLSCdgbYSZIkSZIkSZIkSZIkSZJcP1x1eP3kuYsoM8aj4Uoh4i3ojdC3aTscRIWuCJuba+zsLBDpWMzmuAt1AdIpZaBYawtXd6iOlGgFhywjxhZb2RhFrwTIaPNHRzNb2tChE5oOkYoQw4dORXWA0gYPpWlGXFaWDsxQYmTRxWKgsbmmzYV4kgi8xXUVOIcyJDQd4oJYRMhifmVU0g0l/N7eQnjxaKIbjmgLwcUQL+1AV/QkUmgClfB9R/Ic94h2jZi16y9PUY44Khp/TtIjRaJJbpW6bG0nSZIkSZIkSZIkSZIkSZJcJ1x1eH0079m7fJHn33CK4WCAYKDRBi6ioa1wibI0kduW0rG5ucXOziUGwyGHx4dMRmNUBRY1VBZFWpMZvG8rgyGLbj+r+aE9GtnaWs9hCInwN9rgpQ0VLrvJ1nQZTm89KuUpihFvKXucVHWwOgMSjmtzxy0Sa7cIkdVlFS63PnP8J4qLYyKIyUrfISqgddXkbg8FCrZsPzcFiKCrMvQyL49Wd1lpUKSF5kuFiHsE7d4WHkVW2fVKReLmrNTYyxVIl9VzfTtcvnyZ3/u93+ORRx5hNptx4sQJvvu7v5sXvehFDIfDZ/9iuoYsFgsODw8ZDAaMx2NU9VofKUmSJEmSJEmSJEmSJEkSnkN4XV05mleePH+Jm244xXDQRg2LRGu6tbHNHXFduZwHgyEnTmyzs3OJ8XjK8dEx7jAcKFpClyGyDLB9FdA6oSUpIk9xQDcnNM4q6/XmuPYaTeNlFilN/9Hazo6jWuJLGCKOeY0mNa0FjSPWQmaWPz8UKdiy6d1mJKVpVHz5XKH6EJUrPuplaoy3tnr7uVowiwA9QmZQKVQclzirlGhZuy0D82XWHqG1NWf2sqdetCyz+JX7u1LRZTDuhqrEqOS3GVw/9thjvO997+O//Jf/wte//nUWi0Vzjyuj0YiXv/zlvOMd7+Atb3kL6+vrz/o19adJrZWPfezjvOc9/5LPfe5z/PiP/zjvfOc7ufnmm6/10ZIkSZIkSZIkSZIkSZIk4TmE1yqKe+HweMGZJy9w4w1bDLpwMeNOtdYKLks/NdCC5244ZOvENpcuXmQwGjGbz0JhIQWVNiKo8Vi4ogNZNpItPBdop80nzcpFLWHciGaz92CCFg0tiHkbZgSaw3r1eKIZvgyGPVL0OItLNHJbwI1bazc3E8jSyd2QZbO6/WBvw5WdS5OpFMQdp6eIUmsfVW4M80rRGHdswg9cQ3GiEoOS4m3UUkoE8bIcb2yhOUsFSm3Bd+TmYvF1VUHVMFugpYtA/I9IsD//+c/zD/7BP+ATn/gEg8GA7//+7+cVr3gFW1tbnDlzho997GP87//9v/niF7/I3/27f5ef/umf5uTJk1f78voT5cyZM7z3ve/lP/7H/8iZM2eYzWbs7+9jq+HPJEmSJEmSJEmSJEmSJEmuNVcdXhegEH7qw+MFZ89d4sbnnWQ46JA2tOhCG0CUVQPZWqA7GIw4sb3N3u4uRYX5/JhxN44BQXVwxVXjZ7mzlEYvc3AtMRYJ0dK21vRexsni2srWgtflr72ZqStuFtJqFdQFkdCMuBiqfsUpbbLSezgVbW3r5ZjiFTXJFe+0Nzd3tLEtBhcRihR6M6QUnH7loF6ereXk0QBvmg8Lx0gL95UOjyFIb4F8GcCynQ3hzq7tPojiuhykbKOPVdqIY4cWwykt83768PqJJ57gn//zf84nPvEJXvrSl/Lud7+bV7/61UwmE1SVWit/+2//bf7dv/t3vPe97
+W9730vd9xxB29961uZTqdX+xL7E2FnZ4ef//mf58Mf/jA/8AM/wPr6Oh/84Aev9bGSJEmSJEmSJEmSJEmSJPlDXHV4jRtuTqcxKjg/Ms6fvcQNN2wz7iIgrQiORRt66RKuoKUgAoPhkLX1DfYuX6YMBhwfzxgNhgyGA/reKUWW64wtRHYMMHF6M5bW6rp0XddoJ8coYSTBZksPtrUBSW+DjjEQiYX2o6jH0CJgtbWe3ehNI9TWGFo0DNECldZ6rkCExhFkg2PUalBapG0RLM+9Igi2WESLu8RjlVB/OIZK6EwimG+3etngrj09rT0uJQJ774FCEQmvt1vTomjowr1Qa4+WK35vt9oC7nBpm9k3aEy+4Y/ZnY985CN86lOf4qabbuJnf/Znueeee74plJ5Op7zjHe/gySef5Atf+AJHR0fUWldfP3PmDB/4wAf4yEc+wkMPPYS7c/vtt/PDP/zDvP3tb+fmm29GNcYz/8f/+B/8xm/8Bm94wxvY2NjgV3/1V/nkJz/J3t4et912G3/lr/wVfvzHf5y1tTV+9Vd/lc9//vO89a1v5fu+7/u+6VwPPPAAH/jABxgOh/zET/wE0+mU5z//+fzCL/wCP/ADP8CHPvQhfu3Xfu1ZvfSTJEmSJEmSJEmSJEmSJPmT5+qd1xhmFcyakwL2j3u4cInnndhk1JWVh3oZnAogRaOp7E7XFcp0ioiwv7eLGRzP5lRzhqMhRo3ucRtKFJq3ujWql0OJikfY6x66EdVVk9gBLJrMpZRoTVvE4CIaOhJR3LUFuN5CaENVCYV3C2G1Q3D6lX4k9CHuYNaDdLAM7NsZRTQa1O15tYRneyWkboeU1X8FswjDV7qV1qoGKO3aBIlmuzdnt3soRUQoSvtCLD0WkRidxCmF+Jy2tjceTfOnKV7v7e3xsY99jEuXLvHTP/3TvPKVr/yWbWoRYXt7m1/4hV+g73vG4zGj0QiAL3/5y7zrXe/i4x//OC95yUt429vehqrymc98hn/6T/8pv/3bv83P/dzPcddddwHwla98hf/6X/8rX/3qV3nyySe55ZZb+OEf/mGefPJJ7rvvPv7RP/pHqCpvf/vb2d/f50Mf+hD7+/u85CUv4fbbb1+dycz4jd/4Df79v//3vPnNb0ZEOHXqFD/zMz9DKYWu6yjLP48kSZIkSZIkSZIkSZIkSa4rrjq8XqpAxMA1wlkz4fLhHKu73HByi+GoIC1IdSeCZZq+Q4VaK+YwHI2Z9JX9ehnU6avBfE6H0JUSMpDmu3YxaCoR1QjCzSuqGueJp6Oaoy2QFdX2+WhmLw0jK1f1MrQGXKRF4kK1iopQtOlKzKi29H0boqtHAVc82ebRNi+imDuKgocipK89RWPYUZo3O/zZvvwfV5Lk+Ly230dgrld0I+ahT/Hazi8tmFeqG+KyCrQlbv/qvql6nL+3pwxJfjNnz57loYceYjKZ8NrXvpbNzc1nfE384aHG4+NjfvmXf5lPfOIT/NAP/RA/8zM/w+23346IcObMGe69915+8zd/k9e+9rX81E/91Ornz2YzPvaxj/HOd76Tv/pX/yqnTp2i1sr73/9+/sW/+Bf8z//5P3nzm9/Mq1/9ak6dOsX/+T//h6997WvcfPPNdF28rC9dusRnP/tZ5vM5r3nNa9je3kZVV+F7Oq6TJEmSJEmSJEmSJEmS5PpFr/aBpSkyTCXcy9KhUnAX9o/nnLu4w7zWNrZoIBYOaLfWVI5QVRWKwvr6OpPpGt1wSG+VvhrWx/e1aHj1sZwXXH4txhGNWnuqGdUMc6fS1B7uVKvUWulrxbCnXHl83cyoVqENMq6a0Xol+DVogXUP9O0GarSrVSKob+OJqiE1EdfVIKQIocZo54rstKBLJ7gIuESo3VQe7oI4aLv86kZ1p7Zrd7d40pVPXONU7TwGuDgm1u7j8nq0NcNlFY5/Ky5evMjly5dZX1/nxhtvXAXD3y6PPvoon/70p5lMJvzIj/wIL3rRi1hfX2dtbY077riDv/SX/hKj0YiPfvSj7OzsfMNjv+u7vos3v/nN3HLLLaytrbG5ucnLX/5ytre3eeKJJ9jb2+PFL34xL33pSzl37hz3338/h4eHq8d/6Utf4ktf+hK33347L3vZy647/3aSJEmSJEmSJEmSJEmSJE/PVYfXVnsQCbe0FtxisFAczISD4xlPnLvA0WweDWeNcNVaqzl008aVUDoC7OF4zGA0ovbhZbYaAa4vfSHuoQFpbWmzFsrGWiFIDBPGwGMMGbrHaKTTzuBONVt9zXwRA4pqLE3dzccRITnh1XZtH3IlMF42uqNFXltI347alCoRDguipTW9rzR+w0+99GdfmZxczT8uxxSl3V9RVul9C96X124Q91dbKF00NC2l6VYQtPm+VQsqGtoMffqXwWw2o+97hsMhw+HwGYcdvxVf+9rXeOKJJzh9+jS33347o+Fo9TVV5a677uLEiRM89thjHB0drb4mItx5552cOnUK1Stqj7W1NUajEYvFgr7v2dzc5Pu+7/sYDAZ85jOfWQXgZsanP/1pzp49y+te9zpuueWWVIQkSZIkSZIkSZIkSZIkyXcQVx1eV5r3ugWo7kalR7yi4lSEg+MF5y5eZtb30ZI2w20ZzbbM1dvvJULXjc0NRsMRg+GQRb9gvjD63qkV3Jahd8UkWt2q2oJeae7qpsGwnloXONZy6ObMXjakmydbHDotVxzZTSeiGo3kZSPb3GMQUUGki7Y54N6jrW0dtWwDq7j1REs8zqIe/mn3RfNShwsbX4bhEZjHGGOzeS9T8BZGy5VIO55PWxjtdeXZri0wF3EEp7ThSlRbM1xxEXp3aq1UM6S5ur8Vw+GQruuYz+fM5/OnHXZ8OnZ2djg6OmJjY4PpdBoN86ewtbXFZDLh4ODgGwYeASaTCV3XfYOPWyTuxbJRLyK85jWv4dZbb+Xzn/88jz76KIvFgrNnz/Jbv/VbjMdjXve613HixIlnde4kSZIkSZIkSZIkSZIkSa4tVx1eSwt8vTbtBo6UGBE0s6aRVvYOZpw9t8ts1rfmc8Wkx0s0jW2p7PAIX4sI6xsblK5jOBzR155aDTOL0UOaYsOX5erWPvYIx1Wl6TMKKh3QnNBiII64RSP6Sk+ZWlsbfBmka4Tg0QyXcEyb4TU809FgBudK6CxewAQ3RSgoZRXRO7pSd4hKa3z7SkeyGmtsKa20awzNSNwj0daiXr1ZEOONy4HJ+FAGg/CMl6ZoMatR9HbBHKqBieAarnIRxZ+hTH3q1Ck2Nja4fPkyTzzxBIvF4lm9TpaBd9d14Sj/Q4R/W6m1Pqtg/Knfe8cdd/DKV76S8+fP83u/93scHh7ypS99iQcffJDv+q7v4ru/+7sZj8bP6txJkiRJkiRJkiRJkiRJklxbrjq8djcEKEUjSFSar9kpQEdBWpB7dFw5e2GP+TwUF+Z1KfRYOaJpo4I4dKWwubmJCZSuYz6fRYjthpljrYEdLWxvY5G+auMCuDm19tHSbs7raGXHAGIoOJbxNfR9be1w6BeGVYhAOwYVVUpcojlm
fQTpspx2XLbIl45roXoErKIFBCq1DSsuVSO+UoIIbYxx1f72lZIkvj9GIN1BilJU2vgiiDS3tmh7fIi3a9Oj4IJqiRHH6vSLyuHhMXv7R/Q1gnx5hvT6hhtu4EUvehGz2YzPfOYzXL58+RlfFxcuXOCLX/wiBwcHQGg+uq7j+Pj4Wwbfs9mMxWLBaDT6luH2t8N0OuWee+5hNBrz2c9+lgsXLvDJT36SnZ0d7r77bm688cZvanwnSZIkSZIkSZIkSZIkSXJ98+zW956C4/SLni4qyDFmGJ3kGGnEoi3cwtyj48q5S5d53qkNRl3X5M7NS10kgurWpBYVuuGAEydOcvHieQbdgEXf4+4Muy4C83aOZevYlikyEJG0tcAyvmDV0RIBNNb0IOItAG7jha4gvgqGMW+t57LsTzf/dPuxduWXjq30HhGgX3F0O45IC84RfKkDeer9bLqSuCiNhrZbDDw6Ky2K0HQpT3nfQbRro5BNKtJ2G82jVd7Pj+n7BTgU7RiNhqxNJ5y/uE/voT55OjY3N3nDG97Afffdx3/7b/+NH/zBH+RNb3oT4/E3N5lnsxnve9/7+JVf+RV+9Ed/lL/1t/4Wp0+fZnNzk/Pnz3Px4kVqrd8w+vjEE0+wu7vLzTffzHA4/KNedt8SEeGVr3wld9zxIu6//34+97nP8dnPfpatrS1e/epXs7GxcVU/99ly4cIF7rvvPp588slnrVdJ/vgZj8fcfffd3HXXXQwGg2t9nCRJkiRJkiRJkiRJkuRZctXhtXgoH5BwSC9D3Eq0sK1F2Oo0d7RzeHjM4/MZJzc32dxco8TDqdVAwUVQ1abrCN/y9omT7O5eRIG+X9Cp0i8cHURIjkXwKqrgSjxUELXWRI7msRn0tac0d7SqRntbWkPaZRWI05zYiKBEUzwCcodam0JaV6oOs3BOx8+s4ZdG6KtREEqB5U2K8LtEyxvoiraknGiuEyG2eLS+azW0tCVHDPPaPNcgFKr7yrXtZvS1YtXorYIIRQqDwYD19TVGgxGldKiAubKze8hi/vTBNcR53/rWt/Lf//t/58Mf/jD/8B/+Q9ydN73pTUwmk9X37e/v8/73v59f+qVfou97XvnKV7K2tsadd97Jn/tzf46PfvSjfO5zn+OlL30pW1tbABwfH/O//tf/Ynd3l7e97W1sbm5e7cuR2267jde85jX8p//0n/jwhz/MAw88wF/8i3+RO++8808tuHz3u9/NBz/4wVXrPLm2iAhveMMb+Mf/+B/zspe97FofJ0mSJEmSJEmSJEmSJHmWXHV4jfGUgUAPpYaHxiNC7aa+aEGwSgwuzmbG1584i/A8tjan0c5eeaCjDb2adHRhOBqyubXFzu4lpBJh66jD+kpRoZTShvtCZVKroKUADjVa3ObLTrNhBiqKewvelw1wCX9083+0QNtb+3nZKC+hSxFvDuloYUtrQXttgXnTcIgsndShDnGMKnHWGGMM5Ymuov8WcGtToIiDxucFjyq18g1akkXfU+scqYvomHcDxsMxa4Mh3WjIoHSohItcXML9Tfiw4x2I0hzgT89NN93Evffey87ODp/61Kf463/9r/P617+eV73qVZw8eZKzZ8/y8Y9/nP/7f/8vw+GQe++9l+///u9nMpkwHo95+9vfzu/+7u/yb/7Nv2E4HPKWt7yFWisf/OAH+c//+T9zyy238EM/9ENsb29f9ctxOBxy991386EPfYhf+7Vfw925++67OXXq1ErHAvA7v/M7/Mqv/Apf//rXAXj44Yc5f/489913H48//jjj8ZjpdMpP/dRP8frXv/4bAvo/igcffJBz586t3nxJrj0PPPAAu7u71/oYSZIkSZIkSZIkSZIkyVVw9c3rEk1lW4bXvhR0EOoPI0JejSw3dBpEm7nreOSxxzm5tc7zn3+abiC4VUw0wuxl2BhbiIzGYzb9BAeX96mLObXv6boWGLdxQ7eKm6M6xKxvQ4gtUF6e2Y0YcGy6EF/9hDa+GA+y6hSJz6k81cPctCHuVzQeTXWiS2H3KnoXiihFtWlDYKn10JXPWsI5rXpldNKjtR6B99IMruFlcaH2xqJW+h4Eo2jl8PJFhl3Hie2TrG+dQMowJCdtxBIp+HLcsZ1OZfnhV3zjT4Oq8rKXvYx//a//Nb/8y7/MBz7wAT7ykY/w67/+6ytNSimFu+++m3e84x3cc889bG1trZQub33rWzk+PuYXf/EXede73sW73/1u3J2+73n5y1/O3/t7f4/Xvua1DIfD5xT8vvzlL+euu+7iq1/9Ki95yUt4xStewdra2jd8z9mzZ7nvvvt48MEHY2jUjForDz30EI8++igQqpQ3vvGNvPa1r31Wz78cnbz11hfwd/7Ou3nhC++86mtJnhv/5J+8i9/+7U8/6yHQJEmSJEmSJEmSJEmS5PrhqsPrGE6MUFYQ3JZuZ2lhqK+UGuFv9lVAenR0SNHCpctHbG3O2di4otQwi7B5GWBrKbhXxuMJmHC4v0tfF9S6HEh0zCMoj2A5msVm0iLkCNiXjejlSGS12lrbhnubXCzaGt/RatZ2BvMWbLefJ0Xa4yJoVVFUlx7tp6i3iWZ13Icr5e7QgjjetCOgOP0Vl7U5FEJ10vd47fEKitINOsaDMeONCeI9jz32FQ4unWM6WWOnP+Lw8nm64RqDyRi8cvbMWU7deAtbN9wUz9VC9HhOiRHJbyPbK6Xwwhe+kHvvvZe/8Tf+Br//+7/Pww8/zHw+5+TJk7zkJS/hlltuYWNjg8Fg8A1t58lkwo/+6I/yxje+kfvvv59HHnkEVeWuu+7ixS9+Mdvb2yu1h6rykz/5k/zYj/0Yo9GI9fX1bzjHK17xCj784Q8DETKXUlZfO336NO9///uZzWZ0Xcfm5uY3+LUB3vjGN/Kbv/mb9P3T61JEhI2NjWfVun4qt9/+Il71qr/AnXe+5Koenzx3vud7XsYDD3zhWh8jSZIkSZIkSZIkSZIkeQ5c/WCjWbisXagWKo1la1hZajx81SgOhYgg7tDPGa6dAOnY3z9mbTJCBrQwOMYYS1SZY1SRaG1PplOs9hweHWDmVFs6pw0R0EILzUuMSFp8zolMW1tL2L2CClbrKvw0c9RYBd7OFQf2cqRRWgtblnoRj8DcqBHkqzb3drOCEEoRd2/htoeCxIHl9TnUNuqoUrDq9H1lUReIKMNuwHA0YjQaMxwOkKLU2ZzLF89y4dwZDg532dw8wembbuXw8nm0HsPcmS/2qXXOtDNsfoB4bTqVCOhdSrthlVL0qYn706KqrK2tMZ1OOX36NK9//evj8xKhuqo+7WNHoxGnT5/m1KlT1FoBoesKpZRvCLoBptMp0+n0W/6cwWDAqVOnnvZ8J06ceMZrGI1GjEajZ/ye50ophVK6bwrOkz89Sum+6XWVJEmSJEmSJEmSJEmSfGdx9emaRNtaRFbOakWxahEQEwG0moS2WSSGDa2iIhRVKsr+bMFg54Dtk2t0A20N7hY8K02XQWtNK+ubW5g4x0dHmEGFVaM47ADSxhQVoSkDPPQVbqH1WLqobdm
MZhm6ruJqltrudqmIxhNUvzIEiSiosMzIQpctTZfhrcVtaKTqsasIbZQyLs3MqP2CWhdgQikDRqMx65Mpo26I6jLcbW8IAAf7lzh/9lFms0NUnKoFHw6oonjv4PN4vlrpcGx+RJ0dodJFaI9jxHUsz/Ls/uiFrnv24ayIMBgM/tQGFJMkSZIkSZIkSZIkSZIk+c7l6pvXbcwQA7MeKdL814T2QsBrBMSiytKGcXR0xGgwjBFFhd6dS3uHVO85dWqTQaertnLbJ2Rpa3YcM1ibrmF9T7/oqbUPh7MKBUCEWiuqhniNINkEl2UjWvDWwEZC6RGBc9ttBEQsBh0BPLrKte/RUlqjPALtppFuiXTTjbTRRST8NDGBAAAgAElEQVQGFt0l/Nty5WzzeQ1vtUXbejoesbG+wWAwQjUao2V1xhicVBVE4w2C+eyIvi7a+YyD3Yt89WCPcVfYnI7CY93C/ojIe+ZHe3TDKVKGlALq0sQh8QZAaoGTJEmSJEmSJEmSJEmSJLmeuPrmtVcEw4jlw+oGJnSi0cpuXWwH+hrOaHNj0S9YG280AXQ4os2dvcMZ0u2xvbXBYNDs0q5tAjEc2jgtJC9sbW1z+dIOc4sBxzIqEfI6IBFyL+PbSNIFKJh4NLBVUFesX44tyiqDFiI4tqXDGwEpmEkMODpoR7hIWtvbq9Kb0RWN31s7v8GiX9BbT62VrisMByM2p5sMhkNK6SiyVJNIa6jzFG94NMVFPLQrVaA6YhGOK0C/oPYz+sEQH3WYKNUrYk3FMp8xnx8yxULt0TQuKtqGJX3VHk+SJPmjmM1m/NZv/RZf/vKXWSwW1/o433FMp1Ne9apXcdddd6VeKEmSJEmSJEmSJEmegav+W7M0SbJZjUFDFdBwYYMQ+autWtHuDrXSqWIIag6lqTVQqhuX944QUbY31xgMCmCh13AoRUPdwbLoXFjf3GRn9xJmzqKvdKW0VrVFcxmjt6YFcYgWsuLiSCWUH4SzW1WXGmuW+42iSx92XHNRXY07Wq04tQXrBSP83vO+xxfRRneUTjoGwyHra+sMuo5O4zpECnH1V8Jkk3Bjx1k05B4eoTUt5C9FqTjz2qNFKTqAsog3A8SpVpHaWuoucV53+mr05lhvmMXPXawULcsQPkmS5I/mP/yH/8C/+lf/iocffjj+vz15VogIb3rTm/jZn/1ZXvWqV13r4yRJkiRJkiRJkiTJdctVh9eRd0b4KpGs4qLRfpbWWBZvVg0HjPnsiPFw0ILWCJnNIgwuWrAKl3cPEYftrXUGnYSBRAsmgBtFCqWFyjro2Nja4vLuZar1uCvVQVFqrUihqTyiA27e4zWCcG0ptcUSY+hEPEYnm4AELLzQJhHwSvNlR1ANQhdhvVdmi0UbXRwyHo1YX1tjMBpTymDluxagQ6BEIK++7HuHr0RV8KhLx/BlG3/UpiShPW9faxs9VFyETgaYOl3XsTDBRJAyohuOGYzGDCcTBuN1Fq5goNKF8bsUvDdKt/SaJEmS/NE89NBDPPbYY+zu7l7ro3zH8vu///ucPXv2Wh8jSZIkSZIkSZIkSa5rnkPzmgiGHVSiPW3CahBwacEwr0BraNfKYLKGlC6GFoGBlghr3Sgto907OKIbFLY2xpSuw9tyoopSzVdBMALD0YitEyfY3d3FrbZRw/h+a41AbcVA1QjXl27r+GRcg9lSEdI2Iptve6mvBscFqjsLq9RFH9oOEYbDIesbm4zGk1W4XrQZpZs6W0QQv1IAV9XWWGwDkhoTkssmdkw0Er7w+AGIhHO7LhZ02qGimCtFhwwGA8bTKZPpOoPpGsPJGqojtBQQRVSxdsFRHldUFdUWwCdJknybmNmqcf03/+a9vO5133+NT/Sdwy/90i/y0Y/+OrXW+FcvSZIkSZIkSZIkSZI8Lc9psLHWnmXK6k6ErgrmFuGvCm6GA4vFHAjthpvhIm380KkeIfCy+Vz7yoVLewBsrU1QbUG0djiOUFYubDenKx2bG5vs7uzQLxbgRjdYtrvbuCQGpuG3dujdUC14BSTcJCJNcdL+M4/AuS7i18e2ABG6wYDpdI3JcExXSmhAtLmuawxZWhNoew3th7qu/NlxDxyRdq62Zikazyct0NfmDocIv809ProR4xPPY226wXiyzmg0pYzHlEGHo5i266QpWgT6FpKIGFZDE1L7RQxn5r/6T5LkKrjhhtN87/f+Be65503X+ijfMdx336/zmc98/FofI0mSJEmSJEmSJEm+I7h6bUjIM8CtjRMuw+rwNhvelB+hFDk+OmZtugGUCEst2tkmhmnTadACWgTr4ez5faTC+nSIlNB1oM3P7IJLeKrDC62sra+zf3kXWxi2MHCN5jFAdJoxiwY1CGYtGG6t59ByN51I80IDdN2IYTdkY33CcDSMNrj76nqfYv6gdHrFm4230ci4P2YWg4/LEUa1Nkzpy+XKCOWLr1Qh0mJ62r2e95Xn3XwrZjAaTxmUMeYSQ5TLK5UI0EXCyG1m7O7s8OBXvsJNN93EdDKNFnvviC/feUiSJHl2qCpdN6DrBtf6KN8xlFJW/8onSZIkSZIkSZIkSZJn5urDa3Gq1wg/ieZxlXqlhd30IcXDbQ2OluZwXoa7TTkiIphVcI/BRVFwobpwYWcf1zWmaxM6AXFfNbSVaDi7Cq5CNxow3Vxnb2eH4+NjSt8xmoybcdtRFDcPrzSy0jzX6rjXaCebMOiGTMZrDMcjhsMhpXS4OaUUWloeqpPW1g4VSPzM6jGuGJqQuC9mjkqMWrpEoO0e/+xeJM4lrSUuuhxbbEZx1ZW7RIxolXcds0WluoSyxQwxD72KOmpONYtwXxVwuuGAO+68k1qNyXSduujpF4bXGud9Di+iJEmSJEmSJEmSJEmSJEmSP26ekzYEom1dbam/EERaMu1LMbUzmx0xGg2ghbI4tJ1GxA2qI1IjxG4tZXVB3CPAvrRPdefExqQF3TEK6Q5d11FrO5Mqo9EI39hg3+HgYB8XYTgaggoL68Nv7WDVsFqjLV2GjKdrbE7GDEcjVKCTCLrRGJ6UQhtTbA7qFveuvNVxMaCyatX58r6IhCaFaInr0vgdUmtc4h5CqELEQYsgGi31VUlPmpKlDWSKKL1VRBxxa7oWjyDbaOeIoLwUpevKqiFZPT5o4XZ2r5MkSZIkSZIkSZIkSZIkuZ646vBaTSCKygiKWbSGm8EZMUOKAJX54pjpdG01eLgMfiNbDQ2IEfoOEXCzGDFs3+oGO5cuozjraxOKtrY1UHujaAEzlgaM0XgSehAzDo8PQYQigtU+Po8yHk/Y2NhiOBpRSodqAcJ5HUOT4b2OtnO7NndUvelSmiekuboFwdwQM7QsQ2eNS5DWqG4N9NCdOJ100VKXCLiF5b6jgAnmFZU27CjhyjaPO2zuuPVA6EgQu3Ik4rkOZvscHB2yMV2nIMwOj+mrUboB7uEhHw4GbcDxal8JfzI89thjvOc97+Gmm27ir/21v8bp06ev9ZGSJEmSJEmSJEmSJEmSJPlT5KrDa7G2c4i38FaxWsPn6YYIqDn9Yk
6RrrmXlwYMR1xbAGyoNN2GRyCtTcVhLVBVwCpc3D3AHTY3pvE9ZlSvmGk0sK0CMVY4naxBbxiAK5PJlMGgYzAYIFLQUiglRhZlqfloTXF3o+9rG03s0K4NKJZCeKybKqQpTJAI7c0rahpBuEdL2qhQwRcdBwdHXLq0w+Xdy3SdcOutN7N1YpOu61ApqIRiZGkmEYm2t6qumtHujlmPmVGtogiGsZgv2NvfZzGfs7V1gqOjQx5/4gyL2vOCW25lOp4wX8wRF2o1ymBA1xXa2wZX+zL4E+P4+JjPf/7z7DQFTJIkSZIkSZIkSZIkSZIkf7a4em2IPMWVbEYpEsONFuOJ7o73lbroGXRDVDoMWVo4YujxKcFpEV2KR+Kx5qDaQtxQkCx6Z2f3CEEiwC6tvW1GX735oa2NRxY2NrbY3DyBA0XLSjmy1JkstRrijmBUM6otMOujjS3KYBiDj31fERVKKe18rZKNNMc3iJRVC3s15KiFRx75GmeeOM+F8xeZz2aYOX0/5/HHH+c1r301J7dPImIrI4lr3ANFUVVqG190Cy2IuVM9BjNrO8eFCxf4+pkzDLqOyXTK8fGM+fEsrrk6fY0QvM57Blqo8zlVYDAchh/7OvOGuDt931NrXSlqkiRJkiRJkiRJkiRJkiT5s8NVh9fVe8z71kI2equodigFEwF1rDfmszmj6doVfYbX0G+0QNqWihAMikagXX01RKht8BALl3TvzqWdPUSc6WRE6QS3StcVzGobhYxQWbuCtrDX3CmiTflhhDk6WtqCcHR8xOHRIU5lNBqwWPSMR2MQ4+jwmK7rKBq+6L5GK7uZrWky6qaxdmptKpTqPPjAQ3z5y19l2HVMBwO2piPm8wUXLy04c+Yc589fYnN9gzIa4kv3tFkMLerScR3+bHOn1oq1cLz2FdXwqojFOKYteqhGp0pXOhbzOX3f0/cL3CsqMF/M2dw6QcWpCC52xaudJEmSJEmSJEmSJEmSJElyHXDV4bWJY7T1Q/Hmr65NAeL0tsD6Od2gIEXie6kxbKiCyXLoMFq2jq0atkUEdY+BRACP8UIBXKJF/OSTF8CNw4ND+nnP+vo648mY593wPNbXJ2hHjC3WK75q88pgMGBRF0hrXptV+r5ydHRItZ7RZIiLMCgDShng1bG6QLquuaVbs1pCEyIuKK1lLTGi6O7MZj33f/GLPP61xzm1tc2JjS0GRag2Z29/j/lkzIXdA849eYEX3HIaG3UxdNlQM/o2yqgltCVmFiG2VWpv1NJjzRNeLa7PzfFqKw/5ohqLRaVIz/HhEX3tQQvjfoG5od0ICUv51b4U/lh47LHHeOKJJ1gsFrjD449/nb29PS5cuMDnPvc5Hn/8cUAYDDruuOMOtre3o2mfJEmSJEmSJEmSJEmSJMn/kzwH57XjXpsDRBAXDKH3BVIKVGN+fMxkNAnvtDmq0Zx2YIBG0Org6hRtPWZfjiVaNLTNQRT10Ig4FauVc09eYP/iLgWYrk04PjiiG3Q8+ujX2N7e5gUvup3nndpGiiO+VHxE2KkSOhKzaEjXWun7aDv3ix6RjoF2SFN2xPdWrA0kurcwvp3VMARpDm04ODriC7/7RS6cO8+JjTVOba0zHnZM19eYzedoURY97B3Mmc3mLHpjZG20URRBcYsBS5fWvG7nrTVc11Y9Gum+aEqRSr+ooMrCKrPFEYvFMX0/x72nKyPW1qZUj8HKeV/xWhlq6FGuZXRtZnzgAx/gff/f+zh/4TwQfyb7+/vcf//9fPKTn1wF1SdPnuSf/bN/xg/+4A8ymUyu4amTJEmS64n5bM5XvvoVnnjiidRNPQMiwq233soLXvAChsPhtT5OkiRJkiRJkiTJM3LV4bVSKFKoXkN1oV20sN3wGg1scaOoIh5/WTIHmts61Bs0jYdQaw1XNkoYMyTUIbGMiFmlSLS0a99zuH9IcWFrfcp0OmY8HlK6wv7REZd3dvndz32BW269mVtvu5nxsIuAF6N6T1cKWEVRqlUWi57qPdM1YWNzndmR4LUiOIvFAlBECmbh4nZ85Yhejjc6YMTQ4/1ffIDzZ89xYn2LUyc2mAyHrE/XkFLwbsBMC9Ybi/ksvOEaGhDDUaUpSbQNW0qE1eZYrZjXGIJsChE02uB9P6fWnsFgQO3n1MUCt0oRwWrF3VAVVAqzeR9jkO0i4s2Ca/sX/e/5nu/hR/7yj7C/v4+7s7e3x3333cfW1havfvWrWVtbA2B9fZ3bbruNUso1PW+SJEly/VBr5T2/+B7e+973sre3d62Pc91z44038vf//t/nJ3/yJ6/1UZIkSZIkSZIkSZ6Rqw+vPZQcqgUXpa9GIZQhokK/WFC6QgwjAiK4R7NY2+e8ibAVgRZuq3oE1GYMlqoONL6/DRYeHR9ji54T0yknN9cpKoxHI7rhgPW1NXb39zl77gJfe+Qxzp67wF0vfhHbJ7bQoiiGGQwGA/rFAl2GuKpUKseLQ6p1uHUcHlfcAINaDbEYeKSNSkIzfbRf973zyCOPsXthl1NbJ1gbTShaGE/GDEZDuuEQ4Rg3QduAZNyHaFSLRuMaPLQmCFhcu2KhY+l7jGV43a/82H2tEdBXxxYVMSKc90q/WNDXBSpKrdAverqua7oWgdXHtUFVedOb3sQ999yzuq9f+cpXePjhh7n99tt55zvfyW233dbulzAej+m6q37pJkmSJP+PsbOzw4MPPsgjjzySretvg0uXLvEHf/AHHB0d5b9iSpIkSZIkSZLkuubqBxsJnQYVpIRe2ixCabPK4fERG+MRTsFcVu1rwVEktBhyxRXt1dGiWG+oOuIOHg3nMEZEwOpeOTo6xqtx8uQJ1taH1MWC3ipYYTQc0qmyfWKTSzv7HB8cc//vfok77nghp2+6kdF4QF8XcU73OAOOMmB24BztH3F81OMmHB8fsjaZcsOpG3Fvmg7i3OG/vtK87mvPmScusLuzz4nNLTanU4oIo8mI9a1NppM1jmfH9G1ccm1tytbWFtPpWqhTWjAvGvfHaN4QKmg8RzULN7hFqO5eo+nutDcGCot5+LDjjkWj261itce1Y7Ho6ecLBENLaW8cXFttCMBoNGI0Gq1+v76+TimFwWDA+vo6W1tb1/B0SZIkyfVMaLUq7s7Jkzfwlre8jZe+9JXX+ljXHZ/61H3cd99HODw8iDfDza71kZIkSZIkSZIkSZ6R5xBeV0yMQoFaWwjcolwXtMJABvRtDFAAaUErKKJNFSISjWZpbWZx3I0iGvoMf4qqY9l6BhBYW5swXhvS9wNmi8rhbMHewRGbG6GY8E1lZ3cfzPjql79KrT0333yaMgh/cleG9Is549GAYdcxmwl7BwuOD40TJ7bZ2tpiOBxQuq41lOME2sYjkWiImzk7l3f5+mOPc/rkadbHIzp1uk4ZjccIwvHhES4wLANGwwEiEeZvbW1RSokWtLfGtSqYUUVYzVpKKE5iuLE1wmvFCW1IXSyYz4/bn45RzZjNZywWC6yPhnZvC/B4G2Axn4Eqw0nN4cMkSZLk/xn+/J9/GT/2Yz/B937va6/1U
a47Tp++iUcffYgvfOF3rvVRkiRJkiRJkiRJvi2uXhsiJcYOl4OBLtBc1XVxzHCg9DiiBqqISwwliuPeg2mMPTp0lNBdqIGGNxsFr62ZDeCGlEJfHatOCTH2KvQdlo5uVKgD5+DwmL7v6QaFra0pe/t7SOl4+OGvseiNm55/A+vrY8x6uuGAYTdhbTql9pX9vQN29/bQrlA6oaji7vTWIzhd6dja2qRfLJgdH3EwP+Tg8JgzZ84xGQ3Z2hgz6QaUohzOwmlda6VoYbHoOTw65OhwH68LCqDWc3w0oxt0qApFJQYuRdBO4/pdon1uBq4IBSGa2nihtzbqaDF8aV5RcSaDIdPhkKLCbDbDbUFd1PBrO5TBiCLS3lpIkiRJku98um7AZDJhPJ5e66Ncd4zHE7pucK2PkSRJkiRJkiRJ8m1z1eG1ewwNChGwihSq9Zg4x0fHTNfW8JZPY00ZYkYp0lzXjovgAr31qMavHY8wXDQGICUaz6LStBrQlcLMI8Su1Tk+noELZjCbzzk6mjHoBiAR8I6GI45nM4ooj3z1YQ7397nthbcymYw4ub3NcDhia+skmxubLBYLHvyDBziaHcXzRpRMmEuE6WTKqVM3UCicO3eOo8OeOp/xvBM3sjadMO6E6WCEu7O1fZLhYBTO6cWc3hbUfsHh4QGHhzMODg6wNqRYNNrntLBc3KMx3e5xrTR1SfwzX6s9uLBYOBcv7SAKz9veYNDFcKabs7WxAWJoEWo/x6vFmwAa99apbVzz+vuL7Ate8AL+7b/9t4xGI2666aZrfZwkSZIkSZIkSZIkSZIkSf6UufrVu5a1uhkiglldWqmRorgIqh1QsOpICU+2m9HE2Jgq2jzYy+YwGqN8eDiptakzaDm2uDEeDjhwEFemkzVmszmLxYLhaIR2BRXlYP+Ivvah2pBQZVjfsz5d49zZ8+wfHLKxuc4LX2CcOnmSE1vOaDjl1Mk1Ll26xIWL57Faw+Xtxnw+Q1QYlI6jvX0m03U2N04wGa9zcnufUgqXdncYDwesjUbMZzNUCydP3shiseDocJ8L559kNpshWuiGQ8brE8pAQwNSFS9CrYailKZQMTPMe9zDq71YLDCr4NAvFjz88Nc4Op5x8003MBl3oAXMERdKiUa1hyAbQXHCTa6dxJsLRa/lVuPTMhwOufPOOwFSa5IkSZIkSZIkSZIkSZIkfwa56vBaANxRjcAVFBWYHx8zGg4RUdxjRMmghc+CqGLueKtkW/U2GijhuvYCHo9xkRhJ9AjIVQRVoeuaYqMoijIej+mGHV3pGI1G2IZztDHn0qWL7B3ssaiVyXhMv6iU0nH69M3sH+7Tz53paJ3pZIP19Q20U6QoL3jhHZzcPslifhyjiOLs7+0xX8wZj8YcHhwxOz5mMl1je3ub7ZOblK5ww/ENHB/P6ERxc6aTdUaTKZcuXeTS5V32jo4ooyEyX3B4dEg3KKtr6a2PaxUBA3XH3DD31Ye3QN9dEHcWszmz2YyuKxRvFhZvbXGhqUWsDVMqLiBeEEIdIgq9GwO5DtNrMrROkiRJkiRJkiRJkiRJkj/LPAdtiLUEW9CiYEIh2tPdsENUERTDUS0RRHr7uoKJYDh0gjlNJQLVK6WUVYgrEuOOtPAWlNFgxKAb4FbDtFFKeKTNkct7uMNwOGFr+wTr25scHO7TlY7trVPUakzX1lnbWGPn4iW2tra5+fk3s7G+gdUF+/s7WK3s7l7g6PCAagvW19YjmJ8d41bpSkdfe46PD5hOp2xubTNZm7Jdhsxn8zboWLAKs+Njum6IGdx66ws4/+QZjvaPMXeGkxF97ZnN56FgERALrYqpxihjG7I0a/10UeT/Z+/Mo6Sozv7/uVXV60zPMDPADAwIDjLsOyKIrIogishBRaNoosYsuEZfk/hqfiaub4zGxLhGfdWgQRREjOyb7NuwhX0fGGBg1p7ptbb7+6O6W0cWDRHBN/05h8M51dVdt2uevnXvc7/3+0g7WQLTUahjIYSCItxYikzcK6eQIwIUFGTSw0U4iwW2BAUVgbMowDmawE6TJk2aNGnSpEmTJk2aNGnSpEmTJs1/JqdfsFFKsGxkKukpsUwLR96rJdyirYSXtcS0bRQhHXWwhZP4ljaqULARCWtr4aiGpY1pW4CKsJ1ErVAECAXbshFSIlUI1tfj8mhYwsYwTfR4HGGCNG2iLp2MQCY5eXm4NBehUIhoXKdT565YpkVeXg51OY3wejyggC0tIuF69HiEiooKDh44SF1dHR63m0BWFo0aZSOlpPzIUdyKStP8AmKxKMeMcjSXi5y8XDS3j0BWDoqqEYvpGHEdoYJh+CgqKqK2upJqIfD7fTRqlE3MtjlUdhRvkQuXS0UIJ+ktBNiWhSWlk6JWFIRIVK6UTnFGS9pIBTS3h5gZxcDCI2ynwKNwkv5Yzr2zbYkUNpYlEMJCYCVU2o4yW6aqYqZJkyZNmjRp0qQ5k0gpicfjxOPxs92UM4aiqHi9Hlyuc6+uSpo0adKkSZMmTZrvF6edvLYSVhZKQn0NYBhxXG7nI20pHKuKROJUJLyrRUJBbQkboQjHBgMc9W/S+9pK2lwkMrHYWBag2CiKgo2Ny+ehvLoSwzLw+TyOt7VUQJH4A5lk5+Ti9QewBXTt3gG3x8WxigpUTSMnLw+/34vmVjH0GLoRpzZoUF15jMOHyqivD2EZBrFonPq6CJYlsEywLJNjxyrwuj2guLFME19mBsHaIE3yDbz+AB6vD920cHtcuD0a7piKZVqY8TjB2lqaNT+PzEAumbkhTAHB+mpUVYNEocqEa7hzT1QneS2lRAjpLBgkSkgKoaBikenzEawIYpoGlu1GKGrDZLSUyMQRVbOQUoIJUjqKblUoTlFNmU5gp0mTJk2aNGnSnEnCoTAT35vIiy++SE1NzdluzhlDURT69OnD/fffzyWXXHK2m5MmTZo0adKkSZPme8zpe14nktGWLVESNf90UyfD60VRNScXKgHhGFxYtoWiOIlYbIFUIVWFEekUcpS2o662HXWxUEh4ZzuFGx1Fso0Q4M/MoL42SDgawzBMAlkB3B4Xhh7H5fNiSUl+s2ZkNcrFsiz8GQFa+f0YuoFLUzEtAykkmkuhvj6IHo9x5MgRDh4sI1wfIi83F78/QCQcIxYzse0Quq5TH4oRd9to7nrcmorbl4Ghm8SjcWKeCEKoGJaF2+3BSBR5zMzKpLpSp2l+M+rraslpnI96tJyqYBUZmRmoquLcBqEgpaMwlwnfamTCU0U4SWscsxVsHN9wj8dNPKY76upEElqIxH1OJK6FIpDSQooYSIEUGhIFUByFdtKCJE2aNGnSpEmTJs0Z48DBAyxdupRt27Zh2/bZbs4ZRdd1+vbtm05ep0mTJk2aNGnSpPm3+LcKNiqKklL1mpaF6taQUmLZjopXYie8lm2EtJFWsmijk4+1LYkmQFVEqhihEDYoJHybJVYicY1tOx7N0vF9dns8uN0ewuEw+PxEIzGat8ijWfPmHD58mGg0xtGjR4jEoxQ0a0YsFkZg49ZcCGlgGTZIi2gk
TCRUTyQUobqyinA4Rn04SmZGHLemoWoqdfX1eN1eTNskrlvolo4I1uFxa6guDU1TELslgexsmuYX4M/MIlwfxKVpeL1eDNsgKzuAqVu4vB6OHT0K2ESjIRTFRkFzbFESqmoU2/n+QmJGYoRr6sjIzkZ4NGRiAcDGSVK7NBVdN7EtmUpcO8JrCYn77Jhnk/jbGAhpA24sW8EyTYTiSr4pTZr/aGzbJhKJoOu6swCE08+pqorX68XlciV2SDQkEonw/vvvs3btWkaPHs3AgQPJyMg4Y+2MRCJMmjSJ1atXM2rUKAYPHvwvXS8Wi6HHdTxeD263+4TfKc3Zx7ZtotEo8Xg8FY9CiFQ8nuxvF41G+eSTT1i8eDFDhgxh+PDhZGVlnbF2RqNRPv30UxYtWsSgQYMYMWIE2dnZX/u+mpoatm/bzoGDB1AUhaKiIoqLiwkEAmesrWnSnG0MwyAWi2HbNjk5eXTq1I3CwvPOdrO+VXbt2sa6davQdZ1oNHq2m5MmTZo0adKkSZPme87pJ6+lYwGCIrClTSwew+/1IBQtYQOCkzlVBEjh2H1I6Xhg2xIlofgVtuO7LIRGooRjQjXMF6psSUKZ7RyTtgWqisfvpaouhNvlJa4blJWVEY1FadmiJZYr+doAACAASURBVP6MDOrDYWzbJFhbhd/nQUiJ4vUSj+uggG7omKaOaRrE9TiGbhCNRBGKQiQSwe3SiMUMTMumPhx2VMxCwbYl0XgciU11TRWRcD1VFRXk5DQiWFtNRiCbrKws3C6NY7qOPyOAx5OB5nYR1WOgSAwzTqbfg5SOFYpMqqST3x8LRQrsuMmRHfto0bYIf9McREKBLWwbYSWsRGwwTMspfGnZaGqiOCYSaUts6STDbWFj2TqqkDjSdw2hOIsH6eRVmjSwefNmHn/8cVavXu14+EPCQ16gaRoFBQX079+fG264gS5duuDxeAAnGbFlyxYWLlxI9+7dMU3zjLbTMAy2bt3KwoUL6dq1K4ZhfKP3hcNhPv30U9566y0OHTrEXXfdxbhx48jNzT2j7U1zeuzcuZNnn32W+fPnYxqJmPpSPDZp0oS+ffty3XXX0bt3b3w+HwCmabJjx04WLlzIeeedh67rZ7Sdpmmyc+dOFi1aRGFh4dfGYygUSsXh5s2bU+3zer306tWLn/70pwwZMiT1fdKcPWzbJhaLEY/HUyrh5AKKx+PB4/GccPwgpWTDho18+OFk/H4/119/PcXFxWesnfF4nHnz5jFz5kx69OjBmDFjvlG/FovF2Lt3Lzt27CAajZKfn0/Hjh0pKCg4Y+Oi5EIUQN++A7jvvkc477zzz8i1zhYzZ07jyJFDRCL1/9bnSCmJxWKpZD98EX9utxuPx+MUhP8Kuq6zbNkypk6dSrt27Rg3bhxNmjT5t9pyKnRdZ8WKFUyZMoULLriAG2+88V++npQSXdeJxWJomobP5zvhd0vz3WFZFtFoFMMwGsSfpmmp/u9EBINB5syZw7x58xg8eDDXXnvtGfV+r6urY+7cucyZM4eBAwdy/fXXn/x6EgzTIBwOY1nWCU9RFAWfz4fX6z1jbU7z9RiGQSQSSc0phBDHCWpORDgc5vPPP2fq1Kn069ePW8bfgst95uIvEomwePFiPvzwQ/r27cutt96K2+3+2vdZlkVFRQUbNmygpqaG/Px8unXrRl5e3hlra5pvjmlahMOhBvGXfP76fL6Txl9dXR3vvfceq1atYvz48Vx66aVntJ2hUIiJEyeyYsUKxo8fz2WXXXbScy3LIhQKnXKenvyOgUAg/Qw+Bzn9go2qQBECy3a8rTUbtC+kvtg49hd8yb5CUQTOs1+iIJA2TsFHIZG2iaKIhABYc7ywRcIDW+BYXyTMLgQCqeu4XS5MS1IbrKeJx41X04iEI5QfKcebkUHTgnyysrLRFAVVFSBt4rqOZZlE6uqxZcJ325JoLg2hOv4nqnDUzJGIY0mimzaay0M0GsHn9WHqOnFdx+NSCYfD2JaJqinU1deDUFAVDT0SIh4L4/VnYhkmZElsFOKxGKqikNsom2g0iG6YCJeTRE74giCwcepTxhEZChk+D/H6EP7GjVA0DWmZCBQkKhYGigDTVjCQuIR07rFQUmrsZCJb01QQBvGYgRGXSAxo4tzTNGnSOAPFyspK6urquOCCC8jNzUVRFCzLora2lr1797J161aWL1/Oo48+ypAhQ/B6vUgpMU0TXde/m23gktT1Tjb5aHC6lGzevJlXX32VmTNncvToUVwuF/X19f/nt61/nzEMg+rqampra2ndujX5+fnOQrBtEwwGOXDgAG+//TYrVqzgwQce5KpRV5GRkeHssrFMDMP4RvHxbWBZVioev5yc+yrxeJypU6fypz/9icOHD9OnTx969OiBYRgsXbqUZcuWoes6GRkZDBgwID1wPMuUlpby4osv8vHHH2PoiUWJRE5X0zTy8vLo3bs3Y8eOpX///g12gFRWVrBy5UoCgQDDhg07o+00TZP9+/fz+eef4/V6GTly5CnPNwyDVatW8frrr7N48WJCoRC2baNpGi1btuTGG2/kpptuolmzZme03R6Pl+zsRmRn55zR63zXZGRmommnPcVIcejQId544w3effdd9HhiES4Rf6qqkpOTQ/fu3RkzZgxDhw5N7dqwLIuDBw+yePFi4vE411xzzb/dllNhWRZlZWV8/vnnRCIRxowZ8y9/RmlpKa+++hqffDKNSy+9lF/+8pe0bNnyDLQ2zTdBSsn27dv5/e9/z4IFC5CJXcXJ+HO73eTn59O3b1/Gjr2Wvn0vSsW8ruvs2bOHRYsW0bx58zM+ztJ1nb1797Jw4ULy8/OxLRtOkquM63EWLVrEY489xsEDB094Tl7jPH7wgx9w//33f6MkZJpvH8MwWL16Nb/+9a/Zu2evczARe0IIvF4vLVu2ZMiQIYy7fhzF7b5YHDZNkwMHDrBgwQKysrKwbAvXyQLiW+DL18vMzHTi71RIqKyq4r33JvLGG29QXl6OZVm43W7atm3LhAkTGDt2bLrY71nEsiy2b9/GPffcw84dO52DX1rP93g8NG/enAEDBnDTTTfRqVOn1GuGYbB582bmz5/P4MGDz3hbk9ebN28egwYNOul5tm2za9cuJkyYwI7tO056nqIqtGnThpdffpkOHTqciSan+Tc47ZFlTW0V8XgM1eXGskzcHg1LSsf4Qghs21EUC6mALRGaimlZCdWYREgbIVRsi4QHCdjii+KEKNJRdyeULzLxmlP0EUzLQNdjaG4FI25SU1dLTI/RtGkTbGzcLgUjFsNwu3H7/WiKgiWls4KZ8KJWpCAajeHz+7AslUBmBpkZfuK6gSqgPhTCtGxsFAzbxrIs4nocy7aRtsQ0TKRloaoGti0RCDQBVjxKTDdBCIxYnOqKo4RDQVweH26PH5/PTTTq/PBlolCjZeuJ7645N0NaSBmnuqoCZJz62loCRnOEZmFLG8uU2JbAskBRNGJ6mIhu49JUpKVh6ja2ZWIm7A9saeHWLBBhLEtBWuD2+JHyXxtQSSkJh8MNbBX
AWaVPrtSfzFrhTBMKhXjzzTfZsmULN990M30u6nOcaiBpC2HbNhkZGaiq+p23M825T9OmTbn//vsZPHhwauCefOg9++yzLF68mM8++4wOHTrQunXrs9vYb8D27dt56qmnWL9+PaNGjaKsrIw1a9ac7Wal+YY0btyYO++8k6uvvjrVp9m2zf79+3nxxReZOXMmM2bOoEvXLnTs2PEst/bU7N+/nzlz5lBRUcGdd97JrbfemlIobtu2jWeffZZVq1axdu1aunXrRqNGjc5yi/+zMQyDmpoaampqKCwspEWLFqkFlPr6esrKyvj73//OypUrmTBhAjfeeGMqgShtiWEYCdXimbcm+/ICyqmQUrJ27Vqef/55li5dSvv27bnkkkvIzMxk06ZNLFu2jNdeew1N07jtttvOqOVOmlOTXDiurq4mPz+fVq1aoaoqtm0TDoc5dOgQU6dOZdWqVdx+++3ccccdqT7Dtm10Xcc0zVMuqH1b2LadiPd//Xr19fUsWLCADz+czJEjR6ipqTnjO7jSfD3JBeRgMEhRUVFqMcuyrNQC8tatW1m1ahX33nsvY8eORdMSFpqJ/ui7iL8vX8+yrISh54lJLn4fOXIERVUoLi4+LkmYnZ1Nfn5+elfuWURKSTwe59ixY4QjYTp16pR6tpqmSXV1NZs2bWLjxo2sWLGC3/72t1x44YWp93+X8Qdf6W+/xoq0praGiRP/xnPPPYfP50vtVNm4cSOff/45zz//PE2bNmXo0KFnvN1pTo5hGFRUVFAbrKVr164pO8Dkc3nbtu3885//ZPHixfzP//wP/fv3Bzirgq6vu57X66Vt27YnXJRL5rg2b95MTU1NWjxzjnLayWtNsaiuPYbXk4mqaQQCGVgILCkdWwoEFoBtoaoCy5KoCY9sO+HJLLATxQVx1MEy6bxsOQcSQaMgEF96XQjIzMzApalEIzEixBzRsqqgmyambWPZJhKTuB5BcwkMS2DbBrF4DDNuoMfjCQsQZ7CpqiqappKVFaA2GAQpsXRJXDewhYIGqIkJm5TSEZUjsEwTpAskaB43msdNJBrCiMecb2hlkJUdQFXApYFblQhV0rhxLppL4ejRo4TCIZLFK3XTQDdMbMvC5ZZYpkU4UodtuonWhdAUP4ZhYugmumESj+v4M/wYdpy4DaGQSbhO4tPceDQXmqo5awOqBykNorEIAoHH48bl8TgFNZVvnsCtra3l4YcfZu7cucSisS9eSIxvXC4XrVq14sorr+SGG26gsLDwO/vxG4bBhg0bWLJkCUOHDj1uArtv3z5eeeUVZsyYQYsWLXjqqafo1q1bOoGd5jiS24Xy8vIabAvNyclh+PDhbN68mQMHDlBXV3fKQaGUki1btjBlyhQWL17MkSNHcLvdFBcXM2bMGIYNG0ZeXl6DCUJVVRWffvopn376Kbt370bTNLp27crNN99Mv3798Pv9p2z7kSNHmDZtGvv27WPYsGH069cP0zTp3Lkz48aNo1u3brz00kusW7fu379Rab4TFEUhMzOTvLy8BlYayXjcuHEjhw4dorq6+mvjcdeuXUybNo0FCxZQVlaGojgKg1GjRnHFFVccZ5dQW1vL7NmzmTZtGtu3bwegU6dOjBs3jkGDBpOVdWp/6oqKCmbMmMGWLVu4+OKLyc/P5/zzzycvL4+RI0emklEA7du3p0uXLpSUlDgTtnA4nbw+R2jUqBE33XQTN998c6oPsm2bQ4cO8eabb/Lhhx8ya9YsevbsSe/evc9ya09NbW0tixcvZsOGDQwfPpz77ruP4uJihBDU1NTw7rvv8tprr7F8+XKGDh1K165dz3aT/+MJBAJcc801/PxnPycj01H3Syk5evQo7733Hm+//TazZ8+mT58+DBw48Cy39l/Dsiw2btzItGnTiEQi6T7vHKSgoIA777yT6667LvV8tCyL3bt389prr/HZZ58xffp0+vbtS6tWrc5ya0+NbdvE43FUVWXIkCE88sgjx9WoUBTllLYUab47VFWlqKiIxx57jO7du6eOx+NxNm3axDPPPENJSQlTpkyhS5cu57zVS9L28P3336dp06Y88cQTXHzxxaiqyrFjx3jnnXeYPn06W7duZeDAgd/KDp40p48QgmbNmvHII4/Qp0+f1HHDMNi9ezePP/44q1atZuLEifTs2fOctvtTFIWWLVvyzDPPnHBxOBKJ8Pbbb7Nz505Gjx79vRCn/Sdy2j1Ck7wA1dUZVFeF0FQPhsfr2F+AYxNi49hwqGDbEiXhxywTymwEjhczNkJxDiXS2Y7iOpEIVxCADdLxGFOEgqaq+P0ubNPAti2wQagCPaajZINtGaiqQNMEiiKJ6xHHPtsGadooQsXvzcS2JXX19YTCIRQFMjIyURUFv99HPK6jqBqGFcewdBTDQhECn9f5UQrpJNUVl5MgdrncZHr8mDGb+roolmmjKQoZPh+BQC4uj4Kux6kNVyOlhcfrxeNygS2R0iZuWNTW1RGKxrFtcLs0VM0mw+chMz+bytJa9u3ahbdxDqpLc1TtioIlQTfieP0uDMuiPhwmr1EhimnjUhTH/9qpBYltgduViaIquFxuNJcLRVH5V4o12rZNdXU1x44dS21jV1UnjCzL5NixY2zcuJFNmzaxatUqHnvsMTp06PCdJIildFReX115i8VifPLJJ7zyyits3ryZuro63G73cerxNGm+Dk3TUjsLvF7vKQdVUkoWLVrEs88+y9atW+ncuTMjR44kHA6zatUqHn74YXbs2MGPf/xjCgsLASgrK+PZZ59l1qxZtG7dmksvvZRgMMjKlStZv349Dz30EKNGjTrpNauqqnjllVf48MMPGThwIIWFhfj9ftq3b0+rVq3weDzocT29YPN/hGQ8KoqCx+M55URTSsmqVat4/vnnWb16NcXFxQwbdjmGobN27Voef/xxtmzZws9//nPatGkDQHl5OS+//DIfffQR+fn5DBw4kGg0ypo1a3j44Yf5+c9/zg033HDS30FNTQ3vvPMO77zzDt26daNFixZ07tyZjh07IqXE7/cfF4vODiSBy+VKx+k5hKIoZGRkkJeX18AaJDc3l2HDhrFmzRrKy8upqKj42udqMBhk4cKFTJs2jS1bthCJRMjLy6N///6MHTuWbt26NYjl+vp6Fi1axJQpU9i8eTOWZVFcXMzYsWO5/PLLvzbZV11dzbx581i7di09evSgd6/eFBQUMHLkSC699FI6duyYmnD5/X66d+9O8+bNqayspLq6+t+4a2m+LZydfX5y83IbFHRNxt/KlSupqKigvLz8axfwDh8+zMyZM/nss8/Yu3cvtm3TokULLrvsMkaPHk2bNm0aLOCFw2FWrFjB5MmT2bBhA/F4nKKiIkaPHs2oUaO+1p+1rq6OxYsXs3jxYtq3b89VV11F06ZNU68fPHiQTz75hGPHjjFkyJDUImGacwdVVcnMzKRx48YNYiMzM5MrrriCFStWUF5eTmVl5SmT11JKKisrmT17NtOnT2fXrl3ouk6zZs0YPHgwY8eOpX379g2uUV1dzZw5c5gyZQ
q7d+1GURW6dOnCddddx2WXXXZS323ngnD06FFmzJzBtm3b6Nu3LwMuGUAsFkMIQaNGjWjatGm6SPI5jsvlolGjRjRu3PiLg9LZwX3llVeybds2Dh48SDgcPmXyOrng/OGHHzJnzpyUgKF169aMGDGCa6+9tkHfBM6u5vnz5zNp0qRU39S+fXvGjh3LVVdd9bXJ8pqaGmbOnElJSQk9e/Zk6NChrFq1ivLycm6++WYGDx7c4Pn7wAMP8LOf/Sy9O/ocQlEUsrOzG8YfTv83duxY1q5dS2lpKaFQ6JTJa6cezw7effddli9fTmVlJV6vlw4dOjB27FhGjhzZ4P1SSqqrq5k8eTKffvopBw8exOPx0KtXL2655Rb69et3SoGklDK1IFJRUcHIkSMZMmTICceMpmmye/dupk2bRufOnbn11ltP3bemOWucvm1ITTV1dUH0uElNXR3VNXU0adaUDL8PoQpUAaZlOd7NQiATyWpFFWCrKAIsaQK2U7xR2kgJQlGQCcsRt0x4YCsCVVMRqkqGz09Ghg9pG1REo04hRwRYgkBWJhleDxk+D8K2iEfCYDr2HXZC3e3W3FiGSX1Mp6KimoqKSupCtQn7DIWc3GxH1SGdH6thGsTjJggFt8uFxyOwbQuvx4WiCqStoGpuMjMyEUIjpgtCph9F9RJo3JSKcC0BXSE/4OdoeTkHD5ZhmnGaNG1Ks+aF+LxuamoMampq2bZ7D7jdeD1esgKZoOpEtQjSZaFkeNBNC4/mJiOQgVAFprQxLQuvz+MU3lBUjLiF6TPJ9HjwuDRURSBtGyEUkC7cLs3x+hYuNMVJgn/d9p4T4fV6+eEPf8iYMWO+2CKc2La2bt06nnzySRYsWECXLl2YMGHCcR3ed4WUkkmTJvH888+Tk5PDXXfdxZQpU9IPxDSnRVlZGStWrCAWi9GzR0+aNGly0m2Vhw8f5r333mPLli388Ic/5OabbqZpflOklKxevZonn3ySjz76iK5du3LFFVegqioff/wxc+fOZcCAAakkomVZTJkyhT//+c9MmjSJdu3aUVRUdNz16urqeOutt5g8eTLdu3fn9ttvp23btilLn2Qy6EwX70vz3VFeXs7q1aupra1lxIgRFBYWnjQejx49ykcffcSaNWsYM2YMd9xxBy1atEBKycaNG/nDH/7AZ599lipW53a7mTNnDp999hldu3bl7rvvplOnTti2zaxZs/jTn/7ElClT6NChQwM1UJJQKMQHH3zAxIkTOf/887n99ttTRU5PNtkpLS1ly5Yt+Hw+iouL0wrE7wHJwnnJ/7/OH7WiooK33nqLt99+OzUJyc3NZffu3UyePJk1a9Zw3333MXz4cFwuF5WVlbzzzju8/fbbBAIB+vbti23blJSU8P/+3/9j79693H777SfdkVJXV8dHH33Eq6++SqtWrbjqqqto1boVzQubM3r0aLweLx7vFxOUL/9+TlWQLc25gaIoqfhzuVyn/HtJKdmxYwcvvfQS//jHP2jWrBkDBgzA5XLxz3/+kz//+c+UlJTwwAMP0KtXLwBqa2qZ/OFkXn75ZTRN48LeF6K5NDZs2MBTTz3Fzp07ueeee45TriYJh8PMmDGDP/7xj2RlZTF06NAGhUTr6+tZuHAhS5cupX///nTq1Indu3d/uzcpzRkjWbg2GX+n6v+klBw4cIBXX32VSZMmkZeXl9pNt23bNl5//XVWr17Nr371Ky6++GKkhLKyg7z++utMmjSJwsJCBg8ZTDgcZvXq1WzYsIEDBw5w2223nfSaFZUVTHxvIn/961/p0aMHt956K6qmEolEkFKesuhamnMc8cXzNylgOJWgxrIstm3bxhNPPMHSpUtp27YtI0aMwDAM1q1bx9NPP01JSQm//e1vadGiBfDF8/qvf/0rTZo0YfDgwei6zqpVq/jv//5v9u7dyz333HPSawaDQT766COee+45LrjgAsaPH4+u65SUlODz+ejbt2+DZKWiKGRlZaWtur4nJPu/bxJ/yaLGDz30EGVlZVx00UVccskl1NTUsGzZMlauXMnWrVt58MEH8fl8qYWWxx57jNmzZ6cEYFVVVcybN49Vq1bx29/+ltGjR5/welJKqqqq+MMf/sDf//53rrrqqga+3F89t7q6mrfeeovq6mp++ctfnvM7aP6TOX3ldbPmSFUgLYFpOspo4dLQ3C5URaU+HCcUiYN0vOmEEE5iWoKUFrblWG842msbKaSjxJYJVTYWNqAoKm6XC82l4XZrZGZ4yPR7qKkOY+ommurCUExUTcW2TRRN4Har2KZO3DIx40bih6WBsDHiEA/VUxE2iFoauP2oSgRTN4jpMQ4fOYrP53MKDkgbIcHQDRRVw1Y1rESyXFUUYvEIfm8GGT4/Ho8fT3Yh5aVHqa2tIBYPY0qbwqL21ETiBPw28bhOTbDO2Y5VF8alldMkv4BYOEo8ppObk4OhKCiKi1jcwu32YisuVI+frEKBV8tE83pRNEfRriDRpCQrI5NwZR0exY3bp2DFYwiPN3XPFUVBSgG2jaq5ENJGSoEENFU7LVsPIURqG/tXHzKXXXYZ+/fv5+mnn2bZsmWMGzcuZY0gpWT9+vVMnjyZFStWUFFRgdfrpWPHjlx33XUMGTKkwedJKdmwYQOTJ09m+fLlX3v+V5FSIqXk5ptvZsSIEYRCIWbPnv2dFTFL8/2kurqa9957j6VLl6b8NZP+cuXl5YwZM4ZrxlzTYBL6Vf75z3+yceNGWrVqxWWXDaOoTVFq0aRfv37069eP999/n5KSEvr27YtpmqxcuRIpJYMHD6Zdu3YpheOll17K9u3bqaqqIhqNHrfdKRqN8sknn/C3v/2NDh06cNdddx2nXkzz/SUYDDJ16lS2bNmSisfa2lq2bt3KwYMHufTSSxk3bhwFBQUn/YydO3eyfv16mjZtymWXXUa7du1SA81evXrRv39/tm7dyvr16xkyZAg+n481q9cQDofp378/nTt3TiVoLrnkEnbu3Mnu3buJxWLHxWM8FmfOnDn87//+LwUFBfz85z+nX79+p0wslZeXM2nSJEpKShg+fDh9+/ZNJw6/B1RWVqZU15dffjnnn39+6ln/VQzD4PPPP2fKlCnk5+dz7733prYFJ1X67777LtOmTaO4uJiioiKWLFnClClTaNWqFffddx+9e/dGSsmSJUv44x//yKeffkqnTp1OWKQnEonwj3/8gzfffJPc3Fxuv/12+vTpc8ok+8GDB/n88885duwY/fv3T09gznFqa2tZt24dpaWl9O7dO2X/ciKCwSBz585l1qxZ9O7dmwcffJB27dohhGDv3r28+uqrzJkzh2nTptGmTRsCmQFK1pUwadIkcnJy+MUvfsGASwYAULKuhOeff55Zs2bRuXNnrr766uOuF4vFWLBgAS+99BKKonDHHXcwaNCgVL9rWRabNm3ik08+oaCggGuvvZaKisq0z/D3hKRtzfLly4lGo3Tq1OmUBTZDoRCLFy9m6tSptGvXjv/+7/+mS5cuCCE4fPgwb731FpMmTUqJFDweD0uWLOHjj
z+ma9euPPLIIxQVFRGPx1m4cCFPPfUUUz6aQs+ePU8oaKitrWXatGm8+eabtGnThnvvvZd27dpRU1NDNBoFSC0ErlmzhmAwSNOmTenduzfdunVLF2o8x0kWiV20aBE+n48LL7yQzMzMk55fXV3N1KlTWbx4McOGDePRRx+lSZMmqXo+zzzzDPPnz6dLly7cfffdqYKRSQHCU089RXFxMYZhsGbNGh555BE+/PBDLrroInr06HHc9cLhMLNmzeKFF16gSZMm/Nd//RcdOnRg586dlJWVkZWVhaIovPDCC8yfP5/KykqaNGnC0KFDuf7662nevPmZvH1p/k1s2+bYsWPMnj0bRVG4+OKLT5qPSfaVf/nLX9i/fz/33HMPP/7xj/F6vei6wYYN67n//vt5//336d+/P0OHDiUajTJt2jRmzpzJiBEjeOyxx8jOziYajTJjxgweffRRXnvtNXr37o3fd7x4oa6ujtdee43333+fSy65hIceeuikQkpd11m8eDFz5sxh8ODBXH755WmR4znMaSevM/w+sgKZqIrq+CYL4RRBFKBqbhQFLMPENCRSEYnktOPrrKgCLJkQVSsogI1jaC1tiaKAsAUooGpOQjwzw0ejrAz8Xg/B2lrqa+uIR2IgbbKy/GRnZ5GdlUFGhgfTNIgZBm6XB1WVGJaFx+1B0xQM3aC6tg4luwV+v4tI3MTjy0CRYVRNJRyNUl8fwbIlqqphWBaWbWPZhqPE1nVcmoZAxbIthCJwe93kNG6G8DcnZlagKjqWXommNKdRTja1FeXsP3CYTH+A3Lx8hCLQVEEoFKOguYvmhS2oC4dokpNNfVxHKB5CwTCGGcar+fH7G6FlKGiKBghUVQFFAgrSBq/PjW1bqIqKx62hSg23S8WlqSlPcenYkCMSRiwA5VA6WgAAIABJREFUtrQwTBOhfLsDFLfbTVFRERkZGdTW1hIOh1MFPGfMmMHTTz9NWVkZvXv3pm/fvqlVt+XLl3P33Xdz6623kpubi5SSGTNm8Mwzz3Dw4EF69ep13Pl33XUXP/zhD0+aRFQUheuvvz6lMNi4ceO3+l3T/N+kpqaG6dOnN3h4WZaFEIIuXbo4E9tA4KQLP7Zts3v3bqqrqxk4cCAFBfkNPisjI4M2bdrg9XrZv38/dXV1BINBDh06RCAQoLCwsIE6tVWrVvzmN7/BsiwyMjJSEw9wEkJTpkzhnXfe4bzzzuOee+6hd+/e6cTf/yGCwSCzZ89m/vz5qWNJa6R27drRtm1bsrOzEzZQx2PbNqWlpRw9epT27dtTWFjYQCHh9/s5//zzCQQCHDx4kJqaGurr6zlYdhCv10uLFi0aWEUUFhbyi1/8AsMw8Pv9GIaRes0wDGbNnsXf/vY3AoEAEyZMYODAgafcWppUo02ePJkLL7yQW265haKionQS5xyivr6emTNncujQITRNSxVs3L59O/v27aNPnz6MHz+e884776SfUV1dzbp166isrOTGG2+kf//+KcuFQCDA4MGDWbZsGZs2bWLHjh3k5OSwbt06qqqqGDFiBD179kyp8S+88EKuvPJKSkpKiMfjx+0oicfjzJ07l9dffx2v18vPfvYzhg4delwc6rrOmjVrWLhwIfv372fbtm1UVFQwfPhwbrnlFvLz87/lO5nmdAiHwyxatJBwOITL5UJKSSgUYteuXezatYtOnTpx++23c8EFF5z0Mw4dOsTq1avRNI2hQ4fSo0ePVIKuQ4cODBkyhOXLl7Nhwwb27dtH69at2bBhA2VlZVx11VX07duXRjlO/HXr1o1Ro0axYMECdF0nHo83uJZhGCxfvpy//OUvxONxJkyYwJVXXtlAZVhWVsb06dOpqKjgJz/5CV27dmXhwoVn4O41xLZttm3bxrRp0zh8+PAZv97p4Ha76dGjB1dfffU5sQOnqqqKqVOnpmwTbNumpqaGrVu3cvjwYYYNG8Ztt912UgU+OAt9K1asQNd1hgwZQp8+fVLjNJ/Px6WXXsq8efPYuHEju3btolmzZqxcuZJ4PM6gQYPo3LlzKvYvuugiRo8eze7du6mtrT1usTAUCjF37lxeeuklWrRowUMPPUSvXr3QNA3LsohGo1RWVjJx4kTee+896uvrsW0bVVVp0qQJo0aN4t577z1lf57mu8G2bcrKynjllVdSzyPTdGw6N23aRCgU4pZbbmHs2LEnTbjZts2RI0dYtGgR2dnZjB07lqKiotQcJmlntHr1apYtW8aNN96IEILly5dTW1vLuHHj6N69e0oQ07NnT8aOHcuaNWuorq4+rkheLBZjxYoV/P73vycrK4tHH300ZfEQCoUIBoNUVVXxwgsvEAqF6Ny5M4WFhWzYsIHf//73rFq1iscff/yU/Xma74akgvmNN97gs88+A5z5cFVVFRs3buTYsWOMHz+e8ePHnzT+DMNg+/btrF27lqKiIq677roG9jQ9e/Zk+PDhvPXWW8ybN48hQ4ZQXV3N7NmzcblcjBkzhpYtWyKEIBAIMHToUJYuXYqu61RXV+MvbJi8DoVCTJw4kddff51evXrxu9/9jtatW59wzp5MrL/77rt4PB5+eOsPz4lnTpqTc9rJa0URKAooqkQ4BtAIoaIKFU3VKN1/iNqaCAVN83F73BiW6ViHkKrKCBJs6RRyFFKgkHAJkRJFCDRVw+3xkJnpo1F2JgG/F7CJRaPU1NSAkrAKycggJycbj0vDMp2ijLYtcbk8ZGRkkpvXGJfLhaGb2FJQXhGkQ/s+7Nq+FY9LxRQSG0G7jj2I6TF2796JacaxpekUaEy02bIspGUhhUIsHsef4UJzuYjFo4TCOs2a5ZEZyKL8WBwXkJ2RiTRNJ4FbdZALLmhJ46bNCIXqsOJRDMukpiZIUdH5xOJxTNNE1FSiSwt3jodo1EARMuFpKlAUgUBN+F0nkhdSoGlupFRRhBev24tP8yU6ECdjLQQgHB9RaUtUoWILkAnvcUVTv/UkQW1tLbqu4/f78fl8CCEoLS3ljTfeoLS0lAceeIAxY8aQk5ODZVksWbKERx99lHfffZcePXpw8cUXc/jwEd58803279/P/fffz9ixYxuc/5vf/CZ1frLC7Yn4ctIlTZpvQvPmzbn99tvp0aNHKskXjUTZu28v8+fP509/+hMbNmzggQceoGPHjse9P1mJWdd1cnJyjkuYKIpCo0aN8Hg81NbWEo/HqaurIxwOp34zX37IqqraYFKULJZq2zZz586lrKyM+vp6fvzjH9OzZ89zvmBLmn+N/Pz8VLIvmWyJxWKUlpayaNEi3njjDTZs2MB9993XoNp8Etu2CQaDRKNRsrOzj7NYUBSFQCALn89HXV0dkUgEIQT19fV4PJ7jvKlVVW2gsEgqr23bZtmyZcyaNYtjx44xYcKEUxYZlVKyadMmXnzxRebNm8egQYOYMGEC3bt3TxfpOccIhUIsWrSIZcuWpY4lJ6znn38+xcXF5ObmnnInV3V1NQcOHMDtdnP++ec3mCAoikKzZs0oLCyktLSUyspKqqqqKC09gKqqtGzZsoEva35+Pj/5
yU/QdT21xTSJaZosWbKE999/H9M0mTBhAsOHDz9hHCYLPb/zzjscPnwYv99Pv379GDZsGK1bt06rb84RIpEIK1asoKSkJHUsWUC9RYsWtGvXjry8vJP+vZLel/v37yc3N5cLLriggbLU4/FQWFhI06ZNOXbsGIcPHyYnJ4fS0lJs26Zly5YNnsG5ubmMHz+e6667Dq/X22AMbVkmJSWOYruqqoo777yTMWPGNFBFJn9PS5Ys4ZJLLmH48OHf2YLzkSNHmDhxIi+//PIJi1adCwghUurjcePGne3mpBIpX11AVlWVtm3b0r5dewKBwCnnUjU1NezZs4esrKzUd0vicrkoKCigRYsW7Nu3j9LSUjweD7t27SIzM5M2bdqkEodCCFq2bMmDDz6YWkAOhUKpz4pGoyxatIg///nP5OTk8OCDD9KvX7/U+y3LIhaLEY/Hady4MaNGjaJ///4oisLKlSv5+OOP+eCDD3C73fzqV786ZUI+zZlHSpnamfbl56tlWbjdbnr37p0Sw5wM0zQpLy/n0KFDFBQUUFxc3OCzvF4vrVu3Jjs7m0OHDlFeXk5mZibbtm3D7XbTrl27Bjs5mzRpwt133008Hsfv92MaX/QjyQXhp59+GiEEv/rVrxg8eDButxvDMDBNk1gsRjAYJDs7mz/84Q+0Oq8VCNi/fz/PPfccCxcu5G9/+xsPP/xwWohzlpFSUlNTw+TJkxvETFKU2K1bN4qLi0/pdW0YBnv27CEYDNK/f/9UnackyR31iqKwZ8+elKp7z5495OTk0LZt21Tfmiy6+Nxzz2HbNoFAgHAonGprJBLh448/5oUXXqB9+/Y8+eSTXHDBBScdm8bjcZYtW8a6desYOXIkvS/snRbOnOOc9uxQogAqAkfd6yiwQZUqhiGprYuQGcihtraWzEw//oxMbFNHkCjIhFOkESlACqQE2xFmO+ptRUEIcKsqfrcbv8eDLSVVFVUcO1qJS3OTm9uErOwAbpcLbAtpW8QNk3hMxzANpGkjpEpWVi5en4tGOU3xet1UVtfSOL+Q8mNHqK3YhTTiuISgcX4B+8sO4vL40HUdRUpHCS4UJ+kO1IfDICWNchqhuRVsS1BfV0dlZSWNW0kCjRoTDOQQqo1QURkibO6m/NA+crKcH4Lb5UYRgrhp4na7iUQjSGwKCvKpPFpOOFKPkDaGAX4lCyEElgWa6kr8mASWtBG2wLYFthQoQkUVbrA03IoPl+J2lNlCIoRTMBMpsbBTymtH7Q4SO/Hv2+PAgQNMnz6d6upqrr766tSEtqSkhC1bttCxY0eGDRtGy5YtU53JoIGDuOiii5g+fTolJSV069aNkpK1bNmyhQ4dTnD+IOf8Tz75hLVr19KtW7dv8Ruk+U/H6/XSuXNnBg0ahMftDJxsaWOaJiNGjOCJJ55gwYIFdO7cmYKCguMSbZZlYRgGUkpU9cSLQ8njlmlh2zaWZaUGA9/0wRkOh/n888/xeDwYhsHcuXPp06cPXbt2PS07oDTnJh6Ph3bt2jFo0KBU0eAvx+Mf//jHVAGy884777ito7ZtN4jHE8WGqjoWU8k4FEJ8Yfn1DeMxmWDyeDxYlsWiRYu4+OKL6dev33FJJcuyWLp0KS+88AIbN25M+XAXFxen7W7OQfLy8rjmmmu4/PLLU5PJeDxOWVkZS5YsYdKkSakFlIEDBzk7xL5CJBKhvr4et9tNVlbWcf2mz+cjEAhgGAbRaDRxfh2aph1XvMlZcPkimR0OO5MX27ZZt24d69evp7S0lPHjxzNkyJCTbqf2er2MGjWKDh06cPToUTZu3MjChQv5zW9+w+7du7ntttvOWs2ONF+QnZ3N8OHDGTNmTCpJo+s6R44cYfny5Xz66aesX7+ee+65l5Ejrzju/bZtEw6HCYVCNGnS5LjtzUII/P4MAoEAFRUV1NXVEY1GCQaDqWJ9X47XZAHTpDjiyzYM27Zto7y8nG3btjF69GhGjhzZIFZt22bTpk1MmzaNgoICrr/++tT2/e+CcDjM0aNHqaurS3x35ZwaL0gpsW2LyspKysvLz3ZzAGe30Q033MCQIUNStkixWIx9+/axZMkSXnn1FUrWlfDQQw+d0ELBtm0ikQjBYBCv13ucsk8Igc/nIzs7m1gslrL2CAaDuN3u4/qvry4gJ5PXSUuHZAHJn/70pwwYMKDBQk1ubi433XQTAwYMoHnz5g0Sn5dccgkdOnTgiSeeYNGiRYwYMeKElkxpvjsURaFt27bcddddKSWybduEQiF27NjB/Pnz+d3vfse6dev49a9/fcLdQpZlUV9fTzQaJTMz84TxlJmZid/nT8WppmlUV1fjcrmOK+iZfP4mjyf7Esuy2Lp1K0899RR79uzhRz/6EcOGDWsQf8k6PLm5uYwePZouXbqknu1er5drr72W5cuXs3LlSo4cOULr1q2/tXuZ5l9HUZTUbssOHToAiSRxOMKevXuYO3cuTz75JKtWreKJJ544od2LbdtUVVVhWdYJBV2qqpKTk4OmadTV1aV29oXDYXJyco4THiTPTxLGGf8ZhsHSpUvZtGkT0WiUH/3oR3Ts2PGUi9rBYJBp06ahaRpXX311WvD4PeD0pU3CRtEkCBshEolooaBIqDhahUv14Pf6kB6NWDRCOHSM/Gb56KaOLS0cx2YFoSS8ER2XECfZKiVgo6gKqgo+jxuXy0UwWMvRQ+UIS9I4N5fMrCw8HjcgMU2JbuhYllOcUFNd2BLqaoPst0rxZ2bQsiU0zc+lW88uVEeqaNK0CUfLGmFE6jDtEJs3riUUM7AsidvtwjB0DMN0FNcSDAxMy0ICRlUlup6FR/Pg94K0Yhwp20FuXnO0dhdSfjAXW1ocLtuN1yUpyM/H0KNE6uqIG3HHy1tz4VJVopEQjZvmU9iikHA8jtuGYL2OtHR0U6e6Okhudi4ejwcpbGeAqzgWIJZlg5S4VA1TtxzvceH4WSPBkhYCBUUIJMKxDLET/wuQyNNaYdJ1nRkzZlBaWpp6KNm2nfK/3LFjB+3ateOaaxxf4OSAvq6ujg4dOpCXl9dgsBwIBGjbti2qqrJ7927C4TDbt29PnN+exo0bNzw/M0BxcTGqqrJnzx6i0egpV/3SpPlXEEKkCvC43F8k0jweD23btqVLly4sXbqUbdu2UVtbe1xyQ9O0lBorFosd57GenPhYloXX50XTtFThnHg8nko0ft1vU1EUBg0axI033sjs2bNZsmQJkydPpnHjxqmCK2n+b3CyeCwqKqJLly7MmzePHTt2UFlZecKJidfrRVVV4oldPl9GSpmKu4yMDFwuF5qm4fF4CAaD6Lr+jePxoosu4gc/+AHr169nxowZTJo0iYKCggbbP6UtWbZsGc8//zw7d+7kjjvu4KabbqJFixZppes5itvtpk2bNgwcOJAMvzO4l8jUAsrrr7/OBx98wCeffEJRUdEJvaJN08Q0zdTkVdAwnhRFQVXVRPLKTi3qJc+Hrx+rxGIxSkpK8Hq92LbN8uXLGThwIMOGDTthbKmqSosWLWjWrBmWZXHFFVdw0UUX8cI
cxb+Q9Y7V6yp99Hx6lSOk5VsfW1vyU6sQC7O56gf4Tu9hrOtNUSDPpJK1hM5tR7z9cuNprspBbcTlv9ARqP7+HA+l/QeHwzrsgkRoa6aD9VxsCZdtzeePLnfhOz1X0j94q4QbRwiL7uRk5VbiM47qe17gjv/O6hK/4W6vRGMoruYtZtP8ZktpNb/BCtdYfoaW9i2xt/R3RSIU5PIqGQnzOna+hurZ54Ozm7mNzih1DVifs/k9XFlPlP0NlUTlfLSba89mPiUqdjtkXQ391Ie2MZY8PDJGRMJXvGA59pibkvZXjtcDjIyclh69atDA4O8u66d5k7by5paWnne0zX1NTwH//xH2zfvp0ZM2Zw+PBhBgcHaWhsYGRkhIiIidDV5/MxZ84cysvL6e3t5dlnnyUlJYWZM2diMEwEnPX19fzsZz+juroah8MxaV1sRVFYsmQJ8+fPZ82aNZSWlvKLX/yCv/u7vyM/Lx+9YeJQhUIhtm7dyi9/+UuOHTuGw+EgLS0Ni+Xm7B3r9qZSMO8Rjmz+DdWH1tB4fCuOiASiE6eQnL2EuLTZ111PWdXpMVk91wyOx0f6CQXGMTijJspdXNAjXVX1mK0eVL2OcyMqBMaHGR/pwz82SumOZzhx9J2Les6Hw0HOtNXiHxthbLiX8Bd4EDh3VBorvvEfBANjE6PPGq7eW0GIG0lvMBOfNgd3VBq9nXX0tFczPNBBOBzCbPMQEZNFREwmdlc8BpPtpnwbRXy+qaqOrGn3kZA+l1Bw/OoTKwpmqwebM+b8P+kNZuJSZ+GMTCJ3VhVnTlczPtKH0eIkIjabyNgc7O44eeNFXJGiqBOl1pZ9D5PVRV3ZB7TWldF+6jiqqhIKhdDCYTwxieTPfZisafdhsrrOvvY+hH9siHAozMjgICODg1dcjslqITblNNoVyoyJLy+d3kR86iyKb/0RpTueoaX2CAM97aiqjlAohN6gJzV/IUWLnyY6sUhCaPGRaFqYsZFeulprGRu+vLb08EA/wwMTjcR6gwGbM4ZwKIAWDjM+2k8oGETTNPq7O4COSZehKApaeGKQZFCwOqPJn/MIOr2RqgNvUnVgNXVlH6Dq9YRDIYL+cYxmK1PmP0z+3EexOm6+DmPik9PQCAXHJ573w2H8Y+OcOd18xel1+omSM+FQAIPBQlLOEubf8/eU7niGjqYaBs60o9Pr0bQwoWAIk9VGwbwHmbLgyYtKfyhnx0WZfftPMBht1B3bQkPFTnQGPeHQxPhrscl5TF/2PXwZ8+Wa+gWloRH0D+MfGz5bZ3qYseGGK06vNxiISWoFJhpS4tNms/iBf+bo1t/S1lDBYO829Ho9mqYRDAQwmEzkzb6XqUu/e/6NPJjIx2KTZ7DogX/m0IZ/53RjBYM9HSg6HaFgAEVRSZ+ylOJbf3TRfJ+FL2V4rdfrueOOO1i/fj3l5eVs2LCBnp4eZsyYgc1mo6WlhZKSEpqbm7n//vspLi6mqamJoaEhdu3axf/6X/+LgoICnnjiCdxuNw8++CB79+7l0KFD7N+/n+9973sUFxcTFxdHX18fhw8fpqGhgZUrV3LixAkOHjw46XpFR0fzgx/8gPb2dvbs2cO6deuoqalh5syZ+Hw+/H4/tbW1lJWV0djYiE6n45FHHmHp0qWX9fS+WeiNZjKn3ktEbDZN1dtoqd1Dd1stnU2V1Ja8hy9jFtOWfo+o+DxQrt6KoyjK2R5CVw+kQqEAmqah6vQoisql+dVEK+eH/6gRRtPCKKqK2eY5+wrFxetisXsnXpWMSj3fSvVFpDeY8URnfNarIcQV6Q0mnBGJ2F2xxKVMJxQMoKFNDFBmtKDqjBJaiz8fRcFij/xEgynqDSZcEUnYXbEkpM0hHA6hqDoMRgs6nZHLfrSEuIRObyTaV4jj9p+QM/OrnGmvZrCnlXDIj8niwuVNJSI2C4fHh8ninLgXUhWSc2/BG5836YC3l1JVPRZ7JEbpdS0mYTQ7SMpajCcqg+7TlZxpryEwPozFHklkbA4RcdnYnTHXLNkAgKLgic5g8f3/m4B/GItdel1/mRmMVtIL7yTaV0Q4fHmHrwspiorR4sRq96IBs279GwrmPspkg9JfMiMGg+V8eTlV1eGKTKJo0dOk5t/GmdPV9HXXE/SPoDdacUYk4Y3PwxmReM03gMXnl6KoOCOTufWR3xAYH7qu6SfeFPGComCyuMmYchcxSVM5c7qGno4axkf60ekNODw+ohIKcUYmYbV7L8sazv2uL7j3n8ib/XU6W8sZH+7FaHYQEZuNNz4PuztexqH6AlNVHdFJU/nKD1ddR2dJBeXsM8k5BpONlNxbiIzPpaf9BGdOVzI23Iui6rG744lOnIIrMhmrI/p8r/9z9EYLSdmL8USn09VSzpnT1QQCo1jskUT7phARk4nNGfuZ52Bf3BTuGmbMmMFPfvITfvrTn3L8+HF27drF0aNHJwZr8PtxuVw8/vgTPPXUkzgcDnbs2EF7ezvd3d28+eabHDlylPvvvx+Px0NhYSH/+I//yM9+9jP27t1LeXk5dXV1GI1GgsEgDoeDxx9/nK985Sv89Kc/RVEmTrZLM1idTsf06dP513/9V5555hnWrFlDRUUFtbW1GI1GNE1jdHSUQCBAVlYWTzzxBF/72teIj4+fdLDJm4XJ4iQ2eToR0ZnkFj/EyGAXpxsPU314FQ0V2zAYbcy+4ydYHTHX/jC4VnY98dqhqkyEWuHQxP3LBfMEA2MTtbLP3tfodEZ0ehMGo4nMqfeQnLMYneHyWuMKCkaL84qjVQshbhxVZ8Ao5XvE55WiXHMMAiGuRtUZsLlisTi8RPkKJ94E0DQUVY/eMHFuXRSwKApmq1teNxafGr3Rgjs6HUeEj8SshWjhMKpOj95g+chlj/QG82feo0vcHBRFxWybKC35UTk8Phwe38dbrqrDYovAbHUTEZtFKDA2UTZUVdHrTeiNFgmtvwT0BjMRMZkfa95zZY8iojNxRSaTmLVgoiycoqAzmDAYLFctuaDqDDgjfNhcscSmzEQLB1FUHXqD5aJBccUXlYLRZCfqE9TU1xsteKLScXoSSUifSzgUnDj/9EYMRutVz79znRgdHh9J2YsJa2F0qgGDyXrTlMy9YeG1TtXxt3/7tzz55JOEw2Gio6NxOC4uF7Fy5UqmTZtGMBjE6/Xidk9+g33LLbewYcMGgsEgkZGRRERM/uO2ePFi3n/vfQLBAB6P56LpLBYLd999N/n5+ezdu5fq6mqGhoawWq1kZGQwc+ZM0tPT8Xq9APzDP/wDM2fOpKqqilAoRGZmJi7nRLFyk8nE4sWLSUtL49ChQ5SVlXHmzBmMRiNpaWnMnj2b7OxsYKKus6ZpqKqK0Xh5D0Gj0UhRURH/9E//xKOPPkpJSQknT55kYGAAnU6H1+slPz+fwsJCkpKScLlcN3VwDROvfylM9Go22zw4I5OIiM3G5U1h95p/prP5GP6xwesPr6/BbHWjN1jwjw1OvB6rhVHP9uoOBEYZGewkHPqwJd9otmN1eNE0jXAogMXuxWS5uBC9pmnSm1MI
IYQQNxVV1WM02UEGthOfAUVR0BvMEqyILwxFUTEYrdLDVXx8n6iDgoJOZ0AnDc3iE9DpjR977Jyb+Tf9xvW8VsDn8+HzXbk1NCIi4opB9IXcbvcVg+0LuVwuXIVXHg3TbrdTUFBASkoKo6OjhEIhVFXFYrFgt9vR6T5smUhPT+exxx5jZGQEmAisna4PX6U0m83nB29cvnw5weBEK8e5z9Lr9TQ3N9PX10coFJpYtysEz3q9nri4OKKioigoKGBsbOz85xkMBqxWKxaL5XMRWtccWUVtyVqypt9HasFtGE0Tg7aYLE4c7gQMRguaFp4YahyY6CKtnR3U8hqvfF2B3Z2AzRlFT0cdfd0NeONzUc+OGN3f3ciZ9hME/R++imEw2ojyTaGpZg+t9ftJyVuO0ew8H1b3nznF0W2/xWC0UTDvUVyRydLyLoQQQgghhBBCCCHEn9mXtmzIOaqq4nQ6cTqvXtNPVdXzgfOFxsfHef3119m2bRtdXV08/fTTrFixYtIa1JWVlbS0tBAKhUhNTSU6Ohq9/sqHQK/XT7rMzwtFUTGaHfR0nOTI1t8SCIySlLUYk8XFUF8bVYdeZ7C3nbTC5RjNDhQmekED9HaeZHigE0XVEf6IgwU5PAnEp8/iTNsJKve/gtniIjqpiMHeVsr3PM9wX8dERn62I7Wq05OSt5ym6u20nTxI6Y7/InfWwzgjE+nvbqRi70s0HN9C5tQ70emMElwLIYQQQgghhBBCCHEDfOnD609Kr9fT09PD1q1b6ejoQNM0UlJSyM/PvyiYbmho4IUXXqCtrQ2TycTSpUuJjIz8wpeiSEify5SFT1Cx52X2rfsph42/Qj07cmkwME5i1lwK5j2Gxe5FUVXiUoupLXmX+vItnG44SlLOQooWfvsjLVNvMJFb/BCDva00Ht/Oltd/gsFkQQuHiYjLwpc1j/ryLYRDwbM9vMETlUbxrX9DybbfcbLsAxqOb0GnNxIKjBMKBkjKXkDurK/L6NJCCCGEEEIIIYQQQtwgiqadr9fA6OgoiqJgMpm+8KHqp6myspIf//jHbN++Hb1eT1FREUuXLiUjIwNVVTl16hTbtm2jpKSEwcFBbr/9dv7lX/6F3Nzcq/a8/qIYH+ljoLeF7tNVDPa2EA76MVndRMRk4Y5Kw+6KO1+TJzA+THvTUTqaSgmHAkT7phCXWszYSB/+sUHsrjjMNs9FvZ9HBjsZHujEbPVgc0aj6gyEwyGG+k7T3VZBT0ct4VAAV2QyUQmFGMx2xoZ7MVmc2Fyx6M4WoA8F/QwPdNDTcYLejlr8YwMYzU4i43JwR6Vjc8Z87NpBQgghhBBCCCGEEEKIi2maxuDgIDab7aISzudIeP0pCAQCHDlyhJ///Ods3LiR0dFRnE4nZrMZRVEYGxtjcHAQg8HAypUr+eEPf8i0adMwm2/OQuh/DpqmEQqMEQyOg6adHY3cPOnIpaFQgKB/BE3T0BvM6PQf/3wMBccJBsbOfpYJnd58zc8KhQIEA6No4RDq2RF+b5YRVoUQQgghhBBCCCGE+KKQ8PoG8fv9tLW1cfToUfbt20ddXR0DAwPAxACTmZmZzJs3j8LCQuLj4zEapQevEEIIIYQQQgghhBDiy0vC6xtsdHSUwcFBRkdHCQaDwERdbKvVitPpxGg0yr4VQgghhBBCCCGEEEJ86V0rvP7iF1y+wSwWCxaL5bNeDSGEEEIIIYQQQgghhPhcU689iRBCCCGEEEIIIYQQQghxY0l4LYQQQgghhBBCCCGEEOKmI+G1EEIIIYQQQgghhBBCiJvORTWvFUXBP+4nFArJoIJCCCGEEEIIIYQQQggh/mw0TSMcDl8xi74ovDYajRJaCyGEEEIIIYQQQgghhLghDAYDqjp5gRBF0zTtwn+45K9CCCGEEEIIIYQQQgghxJ/NdfW8vtqEQgghhBBCCCGEEEIIIcSNIgM2CiGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOvrPegWEEEIIMSE43EpV/SBubzy+OCfKdc+pMdBcSX2/geiEFOI9xk9tnbTgMG0NVRwrr6axpZOhsTAGq4tYXxp5BQWkJ0VhoYOSw40YPIlk5fgwX/+KTyowcJIt765ld7XKnDtXsmRWBnbdp7M9N0pg9AwNVUfZs6cOV84sFi6ZTpThs16rLzhtiJqDm3hv/RH0qUtYuXIJaZHXd6urhcc501bL0X17aA0kMmPBCqYkyQETX1JaiOG+01SXHeBwTS/e9AXcuzznz/vgqI3RUrOfdas20OOYyb0P3kN+rHwHhRBCCCHhtRBCCHET0Ohp3MPad/cTjJnNrcvTPkJwDaBgcpho3bKGjdt83HLnrUxN9/BJ8t7QWAelu9/nzTfWsvtoPcOqG19yCr5YNwatloNbV/G7rlGskbG4DIO090bzwHf+kqQsH+ZPFDSHaT22ndWv/jfvHBqnOewlKSWFKXGfj1uW4FAT+7eu44033+fAsRO093q5/69imL5wGlGkXxe/AAAgAElEQVSGT5jqi6vy955g9wev88zvt2Aq7MWTkEnK8uRrvGYYpK1mH+++/iJrNh/mZHMPqfOfIjb/FqYk3aAVF+JmofnpaCxj8ztvsH77QcqqmxmzZnH/U1ncvTznz7rowFALx3a9ye9/+zoj8bUYYwvIeTD7E/2OCSGEEOKL4fPxJCiEEEJ8YYVoK1/Hi3/ag5K0gJXzphDr+ui9zYzORObeuoKR1W/z/pogoXvuYmaG+2PUB9MYOl3Cqhd+z/Ovb6Ku38PsFQ/xtfuXkZ/qxWrSo2ghxobPUHN4I6+/+hbbDtYwbJ7N0qExgtpHXuAlVByR0XgjXNhsYbxeD07r56fKmc4STf6cu/jKQAenThzlyGmVkbEA2ifeL+JadBY3kV4vbocNc4QXb6TjOhqBdET6Crj1nttobqzh4L4KPAOjBENywMSXkKLHE5vF0vu/gcGs0Vi6n1PDMYyNh/hzfyN0JjtubyzRbivdzkjiYj/O75cQQgghvogkvBZCCCE+Mxpnajfx3LNrOBO5mIdvXUxWvAvDx3hiV1QTEXG5LL93iI4/vMY7bymYvnEvU5JsH6kX92jnEV753S/43fPraVPy+Pp3/4qnHl5OdoIHi/HCPnCppKSkkZEWh/PffsXbB/wEQ0HCH33VL+NOW8z3/99MHhqEiNh4YhyfnwhD0ZnxRCdTkJ9FYowbVfk09oi4HjqzjyUP/k+
yFj6NaosiPt512bkfGmvhWEUrIYOP6UUJqCiYbB6Sc3PJTE/Apb/5e8eHxtqoqGxmTEmgeJrvzxjwaQy2VXKiqR+TbxYFPnls+OJTMVpcxKfkU5BXSEqshWOtn+4ShrvqOFl3ilDMQqanfthQqxq8FC39Nr9+7W78Bjfxid6P+AaSEEIIIb6oPj9Pg0IIIcQXjL+vgndeeZlDHVHMWbiIgiTnxwquz1MMuBOmsWJJJt2l77H6vX0094eue/ZwoIXNb73Iy3/6gJpuJ0vuf5RHHr6TgmTvJcE1gIrZ7iVn9gM8+ch9zM7yEAqGuP6lXZnO5CIhLZeiolwSY1wYP4d3K6peh06vk/DlRlKMuKOSyJtSRE56PE7LJedseJCKXWt5+aXVlJ4avLgnqapDp9ejKjf5EdMGqdq3jpdfeJOjDQN/1t6wgcFGdr3/J156YwenB6QR5stFh05vwGj4dIt2BEdaObLtDZ59fgPN/ZecU4oemzuOrMKpFOSkEGGTgiFCCCGEmPA5fBwUQgghvgC0IUo2rWbNxlMkFBUztciH5VPo9amoFlJnLqDAN8aud1ax41Ado9eVcGm0lW3lvfe3UNLYT3TOIpbfsoiCRCf6q9wt6EweCpfdzpLZuUQYQ4Sl2sJ5N3kM+iUToLl8M6+/8AIbDjYxGvg8nqhBWiu28cbzz/P+/lN/1m3Qgj2U7nib5//4JmWnBj+FckDic+lTvIhpwX6qD63nud+9xMGTfQSkPUQIIYQQ10ne/xNCCCE+AyOnD7Nxw3Zqx3zcmZ+Fzz15netwoJ+GylJKSo/T0NaDHxu+rBnMnzeV1DjnpINZGZ1ZTC1IZe2W9XywaS5FuclMSTBddX20UBsHd+7kUGkjI2EXi4pnU5iXguU67hQsEVO4/wkbfqsH9yQrpIWGaT1ZzuFDJZw41Y1fsRCTlMvM2cXkpEZPugz/UAfVZUfp0nzkFOST4P4wQQ+MdFNbcYj6XjfZBVPxWTsoObCbQ+WtaDYfM+YtYGpeEg7j5cmLFhqm5WQFJUdKONk2jNERS3pOAYX5GcR5bR9rcDD/cAfVpQc4cKSKzkGVuKxiCpxD1wjyNYbPnKKqvITS4/V0D4Tx+PKZM6+YnNQoTJc1GGgM9zRRdayEsuN1dA2E8fjymDNv1iTTa4z2NVN+pIoxRyJ5BYkMN5Swb98R6jvHccbmMnf+HPIzYiYZXFNjpKfp/Hp19YdwJ+QyZ94sctOiL1uv8aF2qkqOM2CIITs/gf6a/ezaV4kWVcSiRfPIirdOsu0BOk9VU1lZS/fw2QRLMRKTnEN2uof+5mqqT3Ywfj7cUrB5U8jNyyM52kz/6ZNUVVbS2htAZ/aSm5tHTnr0xNqHRmhvrKSiboiolAKKsrwo4REayzby3K9/xSvvlzPsNlB2YBOrtUo8sZnk5ucTZ790HUP0tVdzdP8+SqraCBijKChewNyZOUROUoNdC4/ScaqKIweOUN3QwUjISJQvi6nFxRRmJWA7+/XWAn001lZxvLqVsTAoeifJmbnk5SZiGu+i/kQVlSc7CWiAYiImKYu8giwizWM0lW/h+V//Jy+uK2XQoXHs4GZW66pxx6STm1+IL+JaZ2+Q/s4GKkqOcKy6iZ7BANaIZIqK5zJjSjqus5eI4GgnJVtf41e/fIaNZS3EajXs3bya4WonCWk5FExJw6ED/3A3teUVdPqdZE9NZ7zxMDt3lTHqyGbBksUUJJ3bqWGGe1qoLD3AkWN1dA8GMbviyCksZua0HGLdxol9E+yn+WQ15ZXNjIZB0TnwpWdTUJCC2d9DY+1xKk50nN03RqISMsifkovXeu57rjHa30Z16QEOHaunZ0TF68skMy2FxPhIYhPjuf7y+RpjAx2crDzK0WMnae8dxxaZTMG0mUwrSMM5yeU0ONZHU20pBw4eo/F0P2G9E19GAcXF08lK9lz20BUaH6D5ZBlVjSNEZ86jIGGU2rK97DpQw7ASSd70ucwuziPSHOTM6XqOlx2nc/jDxNdojyY9Zwp5KW4ID9N+qoayknqGwqAaXaSm5zEtP+H8MRjpb6e6dD+Hy2rp7PdjcsSSWTCD4hkFJHiM17VX/IOd1FWXUnlqote/wRZFWvYUCtI8jPa1UXu8hJOnR9EA1eCceIOmIBGDv4fKfW/zm3/7T9YebMCZ5uPgtrehwU6UL4OiGTm49aCFx+k9fZJjlS3ovVOYPy3u8tI/4wO01JVx4EAp9W19BFU7CWl5zJxVTG5qxGX7WQv76e9soKysljFLOgvnJjPYXMGubXs52RUmNnMqc+fOISveKg2OQgghxE1KwmshhBDihgtSe2g3B0trsCUsJCk+DvMkYeVA6yHefP6/2VxtZNay25m/rJC20vW8+cd/ZPW7t/GXf/UYS6f7MF36xK1YycjKID4iyP6duylbPpfs+MzLp7vAeNcJyitqae4aRWdKIzMrmfhYy3U9zCuqlaScHDRUdJeEqIOnS1jz6ktsLB3ClzuTgrQU+lqOseW193jhWQ+LH3iSx756y0TIGRqgoeIgWzduZNvuQ1TV9VJ414/4q8QcEtwK3Q2H2PL+O6zbfIDK2haiZzzCA8vq6S5Zy/o9FbR2DzIW0BGXdQtP/+j7PHTnNC5sEwgMNrDpred4a+tpEgrnkJ/opnrfOlY//wuGVA/xCXFEumxEps7l/nvuYG5B9DW2PEh79XZee+E1DrcZyZkxi/R0HR1V7/Hs8SPsLe8gTMRlc2nhASp3r+GttQfpU6OIj9LRWXeYdW+9xIsvz+Lhb3+HB+8qJvpsKDcx/VpWvXuAHiWKhCgdnXVHWLfqJV54uZiHv/VdHlpZTJSxnxOle9n0/gfs2F9CTaOeBfffRs7BAba/9R4ldV2M+EPoTQ6Scpfx6He+w0N3zSDS/OFyqvau4601+ziDF1+0ga76I7y36iVefHkmDz31Hb5+92yiTUM0Vh1m+6aNbNt9kIraILPvf4AZVaNse+VVdlV1YYpZQEhzkPXw7En2mx6zKURDyfs899p2WkcjmHP7Q3z7L2Zht9oIWcI0HlnD62v30e6PYOay+3jkiam4bAYUFMw2C+NdFax5cTPB9Hv5QXYhfe3VHNq5kQ2bd3K47ATjkQv49vfjKczyEh7qpLmphZbuUcJhP4Gxftqaaim39BI/7iAhLfei8FrThqja/yZ7D73GpqOn6BsaZmRUw+ObxcPf+T7f/uZSYi0fBqbDXdVsfvtl1uxqxZNezLSsVEa6qtjz7m946VkjxXd8kye+eQ9FKXYUvRV3hIvx9jW8/OpaKnsTue9bP+CvsxKJ1VtwuFzQv521q9aytwbm3/80P/Jl4LR30dLcRHPXuW0Y4HRzLeX2PmJHrMSn5l81vA77z1C2YzUvvLKNsahi7rprEYXBBra++w7/3+urWPi17/Ltx24lyakw0NVGU1MDPWMhQv4gIwOdNNZWoOuLJGCwYtK3cXzbBrbtPsCx6lHyb7mTxa02dr/4HFuOtaOPnMHAmJ2C7ywm7O+hYu9aXv7TFrrUZGbPySPF3UPFgY38+xv/jTNzBY
88+Sh3LkjHorPijHATPLOOP720ivLuOO745vf429wUfHoTDpcb3dBu1qxazc7KEMV3PMHfJGbhteqBMGcaD7D6lT9xuDOGFfesYIZjkGO71vOfLz9H2tJH+NHfPHBd4bUWHKD26EbeeH0DTf54pk7PxhfTwaHt/80Lf/gjc+5+nKe/9RUK488m2FqAjvqDvPPKy+w8ESaneA7p6W46ag6y9tk3+eMfUrnz60/yyFeX4HMqDHTUsm/TKt5ev4vS440YfPNYea+fko61vPbuAZp7BhgaCePxzeCBx3/AD761BLvVQqCvhlXPvsqBhhGi0ubw1Sf+hlnus4GrYsRqMzF0ej/Pv3GE6Nnf5IdTJ7574UAfNYc/4JWX1tEUSGT2vALSPAMcP7iN37z1HObkRXzjqae4d2nWNfeP3mzD4zHRvWE9L7y6g9GoYh7+7v9DXpoHg8mKy21jYP9W3nz9Xer9qdzx9b8iPc+HobeLpvpausdCBMdDjA330HSyAsugm1TNgS/FwvFDG/lg0w72Hz5OnyGD+578e+ZOi/uwQVEL0N1UyruvvsCmY6NkzppPdrqH7pOHWf/Cav74hwRu/dqTPP7wCpJdKqGxM9SWbOGNN99jz6FyTo96mbPiKwS7BnnjuVUcqTvD4MAQijWO2Su+wY9+8n3mpZqvfYIIIYQQ4oaT8FoIIYS4wbRQK+Vlx6lvGiVlZgKREc7L6nhpgWY2v/kc//38BqJW/A9mzJ3HjDQL+SkqdWVHeHbNy7yTlUtachxZ3ktDKwWXL5G4aDe9G0qoqKln2YIMEixXjqKHTzfR2t5JX0DD7I0jNjoSl/n6+6Hp9Jf3HB/tPMKffv9LXtoyzJIHn+Qb980lzmUgOLqIoozX+fWvX+Slf//f9A+O872n7iYnMoRicBHpUBnoaKCyupfYeSMEQhPb5IzNZt6yOZQf3suWujraRt9G9S9j6aKv8k9f+RHB07t48Q/Ps27vWtakF5Kfm8WCLNvZHdrLgfWv8Mwf1mOe9Sgr7r2bKTE6ZufYGer8Bc+/s4vOoaXcujyPjJQ4XPZr9UTUOHNyGy/89tdsbozn/kef4O5FGTjNCiNz8tjwShdH9h8krF0SXmuj1Ox+i+de2Y81ZwWPrZxNrF3PWF8xr/3m33n2tVX8wW/E6XLz4IoszIxSs3sVz72yD3P2ch6/e87Z6Wfx+u9+yTOvruYPfhNOl4sHl8Vjc0URaQ3T01JD5Ykgo+tgZOlClj/xDzzqGKf24HpWr9lI6c43GNPMuDwevro0HRNjnNj7Ns+/shtD+nIev3cucWeX89Z//Qf/9crbPPM7A063m4eWxqIz2jAro3Q0VFBeraJs34ZZP5vCZXeh6d7nULufgHalJEzB7s1k5vRCNq7fwP7qLor1DryxkViMeoxJ01iycCYH9+zk8MEB5rniychKxGWduG012aOIinBijYglKqeIbJ+NcF8Pdo8DXbCXuspKSM9lZGyiArveGsuUBXdyR9NJaspLqLOkMGfp/Tx2RwZmixOn68Lb4TCtVXvY5bYwd9Zj/MsTKej6S1n1wh957f3NvLcugylFU7m7OAIFGO+p4r1Xfs1/rWqg4NbHeOyRZSR5TIT9i5ie4+OPv3uGVb/7V3r6hvnhDx5hRrIFV1QaU/IzibaNsb60le7eIYKahmqw4k3IZvacaZTu+YA1m07S0dXPeEhDb4kmf84d3NXcQO2xo1QbEpm1+D6eujsLk9mBy3W1W/oQzeXb+dOzv+WDk8k88v35LJpVhFnJQtffTNn+X/HO6hgycvN55JZEHNEZLLh9Jc2NdZQeaiUqcQor7nuSJVkWzCYYHWzHog/R03Scsko/YyYnbtdcMhbdDcb32VU7jj+sooX6OLbjdX7176/SFbmEp773JPOzIzAoAeYX5xL/yh945uVn+LeuHkb8f81Dt6ThjEyhMD+beGeAdUda6eqZ2DeK3kpEXCaz5kyj4sB6Vn9QSXtXH+Nni+yHxls4smMN725vYfrXHmbZnKk4DEHinUG6T6+mvqsPv6ZxrToYWniQ6v3v8Iffr6bXu5Qnvn0veT4XBs5g8zezb8szvPkSuLwJJH1nMS5dmM66vbz625/zdpmF2x/9K755ey4ui8r4omKy097gv3793/z+5530DY7zw6dvI9qdwNT582moK2Pz2hOcafej1xtZNH8BP/6/T2IeqeK9V//Ac6u3sf7dJKYWz+S+GbFMm7eAeSW72LZ7I+2OAlSzh0jn2euUYsDu8eL1unG7Y5m2YClZcfaJBqn97/Drn/2RBuNsvvXX32JxrhejLsj84nyS33iW3z77Aj/vOsPQ2P/ksTuv3sipGqx4E/PJz8vEEfwTJ1pO0zswjgbozU4SMqZTPKOJfe+9xrbSVrp6hwlp4I5IZNbyu+nobOLw7nqssVksvecJ7si3YLZa0SlD9Nkd2E1jNB6voDfCyeBo8IKa7mHOnDrKm7//v7y6T+OWb/4NT9xdgNuqwz88i7zM1fzXL3/PM7/spKd/nB//YCU+ixNfzmwWzDlFyba1VJ9oxq+YMIXmcuvT/4fvRPo5vvMtfvfrV9m+4V1SC+Yy9S/mfYTe+UIIIYS4USS8FkIIIW4wf18j9Y1tdA9ame51Ybdf/nMcHGmhpvIEDS39RBvt2Gx2TEY9Rm86yYnRWEJ7qD1xivYz42R5Ly/NYIqMJMLtwDhaRl19Gx1nQiT4rvSzrzHY18fA4ETQYLI5cFitk5SuODf5ENUHd7Bjx15qWnsZOV/fQcXqSmT28rtYMieBqg2reWv1LkxFf8HiZYvISnSjUwCXG+eKh3igrobaX73FO6/8iYysFOLvm0p8Wj4OcydH9u5i35HeixZrtLhIyCgiLzORSLtKry2F+Svu5qF7ZxLrscJoHF21FRwrf4PaqmpONXcxL8uGCoy0lbJr21YONSh8/esZZKRE4TYpOAuWsXTRLvYeqKDebyGjaAH33DENj81y1WMYGmtg27pVvLOpnaKvf4MVy6fh8xhQAJdzCovmz2bblh0cO33xfKOdpWxc+x4nBvN4at5SCjO86BXQYlwsXbiXXTsPsOXQNnbuWcSc4gwS/WVsfHcdJwZyeXL+soumX7JgL7t27GPzoa3s3LOQOcVppCUXsHBhMXt2bWN3VSdJ+Yt44BtPsTA3GqtRY3j+NBK9Nn75m1cp27+JLdvmMHNqCknBMjavW0d1TwaPP7aMKRlR55ezeOF+dm7fy8Yj29m5axFzi79GWkoBCxfO4tCebeworwerj2lzH+C2qW7GvvZN+sYMxMT5rrj/VL2N9BmzmTN7CruO7qK5qZmWllEKIp3oTU4yZs5h1qxCth/ex+m203R0h8nxTMyrMEJn5wA2bxaz5kwlwmIirE9iyuxb6DxZxq4te2i+YFmK3owrIpqoSBdmvYreYMXjjSMxKel8r85w4MPpze40Ftz6Vb55Zz4xbhtKIB1/Rz3lRyupO1lLbV0LweIIDNoAx/au583XPmDUex+Lbl1BfsrE8QE3zkX3cl9rPVWVf2DD6jdISc8k9VtLiDSYc
LlduByXnGOKit5gxh0RgcftwHBBiHjRNhgUdJNswxVpY3S01FFdVc8QqVhsduxWEzoM+BIT8UWZ2FFVz8m6VvzLEjGb7URGRRPhsqJXFEwWJ1FxiSQlGUELEXB7mLdgNhUHNrPh8HGwRJE3837unR1F4KGH6BlS8Mb66D55gPfffJUdTRYeuvs2ls5IxXW2lI/LNZ+77+2grrKc57au4a2ENLKyvsfsJCNOlwu30wL4L9gBCnqDGacnggiPk0srAvn7TtNQU8GJDphrteOwGTFgJCa9mPkLu7E0Bglds263Rk9DCevfeJXDnfE8+fgDzC1IxKQDsFNYVMzM/LepONRJ2+luxjSwDTVwcOtbvPpOLXHLvsuddxaTFG2aiMhdLhbdfjcdjccp/8V61r4VR0Z2Lk/enkJ0Ui65uTkkePT0qlHkFd/Go48sISnaiRpMR+s/xaE9xzjReILK6ibuLS7E4yti/tLbmb/5AJtaW6mtqqI/nI737HU6ONRNV9cw9pT5LFuQjNWg0FN/jE1vvcjGE3DP03eyYlY67nPHwDmbO+45Q0NlKb95bz1vxaSSm/9jFqRcrcSUgt5ox+mKwGnVweCFh0jFYHLgiYgkwm2+oJlAQW+0EhEVQ5THhl5RMJjsRMYkkpRkAjRCQRu5MxYz2t/CjjXvczBw8VKDIy2U7nqLF988jmP2U6y8dw7JMef2s5N5y1fS1XSc0v/zNu+teoX03Hz+8p507B4f2Tn55KRE8F7NGDFJU7nn0ceZkxGD3RQmyTZO/aGd/PumZqrLKugMziVlknJTQgghhPhsSXgthBBC3GDj3R10neljOGTF7jBjnqSHs86awfxb7+aMaRaFK6aRfK53taLHoNehU4MM9g8wOjKOxuW1OlWjA4fdhkk/SMfpLvr7xsB3WWHfszTCYY2wpk3UKtXp0enUK4/qrJhJzC7m9ggv6qu/5dmX36eqI0Dy9K/ynb9YwvxpqZj7j7Bn9z7KG42sfCiD5CTnRHB9ltHuY/a8WRRs2EHFrj3s3lPGojlFTE+0oka6sdstTDZ+paIzYTKZMOgVbO54UlNTiY88W6vaFkt8fAwRTh0n+vsYHBomDKhoDLadovFUK33+CAwGPfqzK6MaIsjOziIlMYLK+mECYRWLy37l4P7s/uqqPsjOHfto02fy9bxsEl2GD4+BYsDtceNy2lAIXjBfiIZjB9l/qJxmdOzZ9Aoth3TnP7OnroZev0JwpImTtQ00nx4h1HaIAwfLadZ07N30Cq0XTN9bX0PP+MT0dbX1NLcHyc6z4HQ5sVlMqIqduMQ0MrMT8TgmNsgUm8OyO27lyKFDHF9dTdXxKupODaD1HuHAgWM0BTX2bX6V00cuWE5DNT1+heBIM3Un62g6HSKnwIrL7cJuM6NXHSSl55JfkEpMlBGiYki+2u47yxKRw7TpM8hL3sOxijLKymtYVFiMTQWzO4XUtHQS3Js5UV7GsfI65mTkY1JgrLuG6oYBLLEzmZLtQkVBNZjQ6yNwOR1YPlH4pOCOTiEzO4+ESPvEMdVFEJfgI9ZrpaKzn77+fvxhCPec4Oi+XRyoClH8aAaZmREXnbMGSwxFM2ZRXLSeA2tL2L93H8uXz2NRxkT5E0W5wnoqE//3qUVoipnk3NmsfOgp8pjCghmJZ7/bCjq9Hr1eh39kmKGhAcY1uOoLF4oOg8mCy+3C6bCgV+zEJWVTUJRJbJQRoqJJBLRwD3t3HWTnzgqM0Q+QlZ2D84LjoqhmEnKnMmveDN7f+gZlh/aw7/DtzEjKvuq+UZTJ/+/c/uo/VcqmtW8yJSeCFTMSMBpcZE+fhydtBI/u6ntUC/dSW7GXLdtP4S6+jRkz4s8G1wAGfAW38N1//A/mNQZJnbYAjy5MV0Ml+7dvp37cxZzMXNK9pguOm4rNm0VR8QKmpW5ifcURdu/Yz21Lk/EZDBiNZkwGFbM5Al9qFilxrokHM52HqJhE4rxGyrqG6OvtJwTodA7SC2awZMlUtv+2lIqjeymtX87yLDMQovNUHXXNPSROe4h0tx4t3E99zSG2bSlFdS4nO7/gfOPBxDEwEZtRwOxFc3hn/QscL9nDrr33MCel4NoPiIrCFU9fQL3Sf15hDp3eiNXuxulwYzOrcFF4rdHTXMuBrZupGbZxf1Y+mVEX72eLJ5XC4sUUZ77H2zVl7Nq8m3tuSyPZpKI3GLBYjKg6lciYFLIy43GcPcndEdEkJ0XBeAMDfT0MBoDrK/8thBBCiBtIwmshhBDiBgsODzE0OkoAA0ajAf0kKa1q8DLz1kfInBfC4nThMAzTWH6QnTt3sn5jCacHw8QHgoRCoUmXoSgmTCYDBr3G8OAgo6NjwJXCaxWLxYzJNBHABsfHGPf7CWpc4RVyPTZ3NDaXi+IpGaxxKhyqG8fry6FgyhQSvFba9tdysr6ZvpAdp8OG9bKi3jqiMzJJTUrAEdpHfW0dbaf7mJ4YNRGMcKUX/JVzG4iq06HT6S6YTo9Bb0CvUwiHQ4TDobOvnSsoTATzgZE+evsGGBrXiNYrgA5PZAQetxOdXkWnXiW0P0cbo76qkpraZqyRC4nyejFeMpOiKqjqxQGkFu6h4eRJGptGMGebMRk1gsEPw217/BRufzCVRWNhYnOKiLFOTN/QNIxpkultcYXc/mDK2emnEms/17V0IlhS0KHT69BdFNrpiEzLJicngxhLBd0dHXR31KM2n6T+1BCmDDNm0yXLic3n1q8msWAsTEz2dGIdE/sS5dzRMGK2WDBfz+ieF+4jnZPcqdOYUZTGgbXllJUco+H26RR4dQQGWmhtbaVzMEznQDllpeW0rMgl3RHmVGU5LQMKqYunEW+7cMcrZ4PPj7Qal64Ver0Bg95wUThmMpowmfSEQyFCwSAhNIbbTlFXW0eX34zd4cBuvXTBKp7EZNIyU/Fo1TQ11NPQ2M2ijLhPsoIfgw5vSjFf+04W45oZl8fK8Jl6SvbtYPMH77O7uptw2E0oGCR4zd7JF1AUFMWAyWzBar342IdGO2huqKGuZRRnih2H03bZ99lgjycpJZvUGJW9rU3UnainP5j9sbbQ6E4is2Aq2d5dHNjwDH9/upL9X7zbEOEAACAASURBVHuUh7+ynOw4H44YDd01uqgHh07TcKKCE5165kXF4rVfPIPRFkVu8a2kTw2jN1kwMkp72ymqq5rBNB2703XZ2AWKaiU+IYWczFjWVrRzqr6GU50hEhPg3EVOUS+9jqnodAaMBhUtHCYU+vA65ojNomjOIqa8c4Ca6hL2769hUVYRhkAXDSfr6BiI5s55WZhVCI100dZYzYlTI5im23G67JcdA701hoSUPNLjdGxtb6W2qpbeYAFRn8kTojJ544Q2TndHE8ePNxI25mB3uS8r7aGoFmJiU8jPSeDNklaa649T3x4kOVmP8uGOPtswe8Ge1ukwGAwomjbx3Q4jhBBCiJuQhNdCCCHEDaaFQoTDGtq5kHHSsE3F6vRiMnVTsf8dNm4rZ8yWyrTp8yieeoyyY9WAhqZdKW1SUVUVVYFQMEg4PHnIfY7N
G0WkeyJ8Gentoruvn6EQ2K6W5CpGXC4ndqsZhVGsdhtmsxGFMAM9PfQPDBMMWQmHw4QnqTdrdEUR7fXgMIUZ6OtleHAYiLrqel6Lci71vmS32BOSSfLF4aSEmuO1NJ4aJDXPiYKGf3wcvz9ETHI6Sb54LNdIr7VwH+3t7XSfGUEfZ8RoMFwhaL9EuJ+enl76BhSy06ez/O4HyfFMtjAFvcmKzdxOWW8P/QMqWWnTrjm93XZ9g43pTJF4o7x47Crd4+P4x7ro7e2hb0AhLXUat6x8kNyIKyzHaMFuN1/f9l6TSkTqFKbOnEbKltc4XlZKeWUHeYuiqD92hBPdetKnz8dec5TyslIqT9xGcn4Px4414TckUliUeFn5iD8bhfOhmnb2z6GBfnp7BwiGjIRCISbLvQx2D5FRXjwWGBgcYLB/ALjR4TXojFY8XgNnWipY/+JGjtaPk5g/k+J5c2muOcLxHq5yLbm6yQ5BcGSQ/p4e+sdD2MMT17vL5lNteCKiiPJaCXQMMdDfx4gGlxdBujbV6GX60q/w7adbGfzdW5QfXE9bYyUHdm7lwcce574V0/Beo0dtYKif3s4OegMTDWPqZRumoDea0Z/9HC08ytBgDz29Y4QJE560IVHF7nITHROFUWtmeLCPvv4wJFxjg67QAqMaPGTmzWDBnDQOvXuCo/v3UndPIb7hRuoaWtH5FlGUPFGOJjg2zEBPN33jITzhEJOtnqJacbmiiIm2EWwZYbCvh+Gw9gmvwp8uTRtjeLiHM2fGCIe1C8L8C6lYHS5i4qIxUs/IUC+9vWGu9QqIcv4PIYQQQtzMJLwWQgghbjDVaMRg0KMSIhQMTxoqgEZf8wHefP6PbDpuZM7t93D/oiKSojXGjjoxTVZT4yIhQqEwIU1DbzCg0139J9+akEZqUjxeyzGah07RUN9GZ0+QmJirz6eqKqo6kRgrnMtcJnq5oUwMgNbXN8TwyOXduBWdBbPZhNGggt6Aqr9m9d6PzRI9ndvvXsmxyiZ27nybVbmZxEfcRbqji4MHjnJqJJm77r+LuVPjrn1zpI0yMjLK2HiYkN+PPxDg2kPBAQTwBwIEAiOM+1VM5ihiYy8f6PL8YoKt+P0fTm+8xvTXTTFiNBoxGlRMFjNGs45AIEDAP4LfD4ZPaznXQWfxUTR1OlNyN/FuZSmlZcdZXOjjyKET+G0z+eYTena+1Mh7x8soO1ZFrqGTmrYRIjOmkB372b7fr5w957XwCIODgwwOhsF88Tk8UebGismoTPSC1382t96h8S5Kt6/m+Ze3MBY9h/u/ei/TMuPwNwyx7ZqDk350yvnSJxojw4P09w9MUt5IxWg0YTEbUVQder3+2vW7r0iPMyaflU/+HVFJWbz03Cu8v7uGnevbaag7SX3D9/juE3eQ7Lpyy5QWCBIYG2d8bJjB/j6GAhoXFR6/fCvPl8/wj0zMMxwC9yUboTMaMdssGBQVnU7PJzsF9HhTcpi5YB4pG16muvwAB0ruBH0dTacD5N1VTOTZ6+zE9VgBwoyNDNHX1zfpMTAYTdgsJpT/n737DrPjOu88/z2nbuqcM7oRGqmRAYIECQJgDqYkm5RIKtKW5FlZXlujDbN+/Mwz2pW9z478rCzvji1rZi2PEiWRogIhJoEkEgkCBIkcuoFudEDnHG8Odc7+ceo2MgGCWT4fPiS7b9etdOtW9/3VW++REsfnx3lnty6862a3RkA6ESM8OUnEhdKL97PfTzAvl6BXYe17y9fOsizLsqyPEjuesmVZlmW9zwJFxRTl5xIUCWLxJKnUpXVkyfHDPPkv/4V/fvw4Zcs3cd/9m2maX01Brv+a+olqnSKVSpHJOBQWF5N7lQEIfbmL2LBhLUsWlCDcUY6+eYjTbQMXdGy+rCtUjReXllBcmI9U4wz2jzA+nrp0Mu2ailXlo7yqipKSoqtu1/USviJW3P4Yf/2fvs79q3zs+80/8p/+w9f4n/76O7w2VM/n/+dv8Bef30x9yTUEeSJAMBjA74ep0WHGx8dJX1PRaoicUIigP0pPRzvdPSNcvh7eZbS3h97+CVQwRMAfpbuzg+7uq0zfN0j8Wm571ymSiSSptJ/yqmoqKmsIhkIEAzF6ujrp7h6+4nLG+nro6R0g9q7dXh+kfsUa1t2wjLyZVo4fOcyO371Gc1+ChpWbuO/em1m7bjmBsRaOHn6F53acYCpdxLI1yyn8QMMpQX5RESWlRTh6irHhIYaGkpdOpl1clcFVDiWlFZRXlr3/q6pnOLHn13z329/l4EgRN9/7SW67YTE15YUErtIH+nr5cgsoKiulMKiZHh9lcGCQ9GWmU0rhuoqcohLKK6sous5gV2USJBIpAsULue0P/we++ff/xHf+9s/YuDCHvubd/OrnP+OZHae5zFlolgwGCebl4k+P0d9zhvau+OWXlY4TDU8TTYYoKCijrCREKjzJyGA/o4lLTwTaVbiZDNoppKSsmqrKd3YBw583h6ZVG9m4upiB9mb2vvxb3mzrZlosZsPaytkLAE5OPoVl5RQHYWZynIG+vstsv0YpRTrjEioooqKmhpL37hri9ZE55OeXUVEaIh2dZmygl5HL7WelUJkMSuZRVFpLzVUuvFqWZVmW9dFhw2vLsizLep8FK+ZQW1VOkT/C9FSM2CWJo0vXod3s3PkaPfESFiycT1157uyAh1rrS9piXEwlw8yEYyQyJdTUVVBaepVQVuSx/LYHuOe2G5hToOg8vI0Xd+zjzPBlArlrUDx/MQvn11Pqj9PZ2kpP79AlQXg6Ns7ExDQx5rB02UIa5uRd17KujUDqCIMjmqY7vsj/+fff5ht/9XX+xz//c/78z77Ig/feRENlwWUHibxkTrKU6uoqyktDTPad5lRbB4ORt05zNSCccqqrq6go9XH2+Gu8tu8A3VOXPi8yeIjtO3ZxuM1HTVUVFWU+zh7fw2v73rzi9Dt27uLgqclr2hNufJSRkREmM3UsWbaExgX1VFdVU1Hmo/vEXl7b+wZnJy9dTnToMDt27uJAy+S7eqd9bvky1qy7gaV1KU4eeJ4f/+hFpuRc1t+8gqrKpaxes46lNREO7foVTz3Xhr+iiZVLiq/7j1its21s3glBYd1cGhc1UhVM0Xe2g/b2nksC2kximqmJCWYylSxYtJhFCwrMs7P9wpVCK3XB21nP9tW+/HLNNuhr3obkeDuH9rzMjjeHyauYx+KlteRd0KT9MvOZbTqv0VqhL9P2463IUDX185awsCGPqaFeOtpaGbo4cNQJZqYnGJ9MUTO3kabli8nJticGs2/0hS0isj3HLz46UxNtHNm/m9dbYgTzy2lceSsP/+lf8Tf/x//Kw7dWMNzZSvPJNqbe4mpcoLic6oa51OTGOXP8TXZu38/YxWmvjtN1Yj8vvbCLnpif6toGljY14KRG6D3bStslgbcmHgkzMTqJUzaHxqYVzC18hx+/RIi6xcu5efMN5M10sufZn/H864OUr9hI43ll3zJYQW1DE0vm5RMeG6TzVDMD8YtfgySR8ARjE0kq6+axbFXTVdsmzfa
WB7TSF7WE0bjKJZ2+Qpuq84+pa34P+qmoqmfZivn40uMM9LRw6syl+zkRjTA+Mo4oqaFx2Rrmv0WVvWVZlmVZHy32t7plWZZlvc98uQtYumQeNeVpRodGmJ5KXBQfKcaGBhmdmDa9O6cjxL3SXjc+zMjoJDMx02NV68uHpsmJUSampkkXLmBh4xyqLr6X/RKC/Kq1PPInX+aT966lIN3N8z/7Po//cjddl6ua9mh1+RAtVLaaLVtuZuWiIvpb3uDgsZaLAl7NWFsrZ7oGKV55O5s3rqO+yPHmqVBKmz6yWnFhbqbNP5fNPUwVYTZoO38alehh+1PfZ+veMRrW3M1dm29i9aqVLF+2lAVzqynKC1z7H0Uih0UrV7Js6Vx8sQ5efWk7rx3o4lw2p5iZmmZ6JoLSaVLJNOmUBpHH4uUraFpajzt1iuee+AE/+9Wr9E1lI88M492v8/Mf/ILj/Zq6hUtZuXIFy5bUo6ZO89wTP+Cnv3rlMtM/xfE+RV1jA8ELNkKZfuMXhUsjbc00N3eSu2QjmzbeRENhAYuWLWd501zUdCsvPPlDHn9qN72T55Yz0bOfJ374FEd7XOoWXrwc03v9evNg4RSxbNVa1q2ey1THYU71JahuupFV8/KQTiFLV61h7aoGxjtOMZkKMH/5GuryL/dqud5AnXh91s873oSfYDBIMCBJxuPEYjEyZI815R1TJlAz87jwfTU7nT4XNgeKlrD+ls1sWFXOaOdxDh88RPf0hcf4VPdZ2k934W+8iU23b2FRuQ8Q+IMhQjkhRGaCkaFBhka9fa0TDPZ0caa9j5ibIZNJk86W9Qs/gUCIYMAhlYgTjUXPbcNl+wAbqfAEo0NDjMViRGammJpMmml1konJMUZGZ0ygr9RsKCx9AYKhEAGZIRGPEo2lyL6/XDe7JG0eu0wQKWQhS1bdxG1b1pCX6Kb56Jscap66YB0z0T7OdrbQHZ7D+o13cOvaCiQCfyBITm4IkZlidGiAweHsvkky0t9NW2s3EdfFzaRm941KR+g8fYLDR1pJaBAyQEFpPetvu43bb1tHqcNVW/s4wVoWN61n/YoypgcO89vHv8v3f76HgRnXW8Y0p/c/w6+27mRQzGVOoY+y+SvYcPsdrKxMc/b0cfbvO070gkMgTn9fF81tkyxafQt33X0jxT7vLPYW7xnzepy/ny+UW9rIivVbWDff5WxHD6MzhdzgDdR47jXIZ8GyG7jz7vUUpPo5fWI/bxybvPA1iA/R13WSjolK1tx8N3fcVH2udYvOjqmgL1wD4eAPhMjN9RMPTzLU10P2V4RKTTHY08bpzklc1yXjDfwLIHx+AqEcgo4imYwSCSc5d0wpQKG0Nx6E1ue9ByXF9Yu56a57WVut6O1oZu8rh7nw10mC4aGzHG8ZZe6yDdz3wC2U+LObceWxIbTWaNe7eHTBNIqZ4dO88Pjf861/+BGvtUxc7XqxZVmWZVnvIXs/lWVZlmW930QRazasp2nRbt7o6WZkbAyX/PN+KUvKyssoKswjc6aFZ372QwLxHuYXztBxppfTR3vJ+GCo6xivvvQU8fH1rFu5gsY5Bd7zNRM9vQwMT1K9/OOsXLKQkmsY2U7IHOpX/QFf+485lJR9j8d/e4Af/T/f4Gzrp/jMow+yad0CSvK8wQl1mvG+Y7y8Yy/NneP48qqpqCylINekJ8IpZO29n+Fz7WcZ+deXePHpraxc1sjDdy4h3wfxiZNse347bZGFPPzHn+GuDXPJ8VKT9Mw04UiUhEoRmZkhFksAJonQboRIJEYioUgnk6RS54J1rWaYmp4hEsuQ8sWIJRKkFPglRPtPsO/VV3nuxQEONx/hmfnVFBXkkZubQ05uLgVF5dQvWMH69atYMKfkrVvdIihfvIUH/uBNjrX088bB3/CPf5dk6OxDbFxRRXzoKLuff4Y32qZRKsG+bT/nnwMujz54J2uabuO++97gSPPjHDy9i//6n3vZ+/wqFs0vRyZG6eoYInfebXzhy3eyfE4B/sot3Hf/Gxxp+QkHWnfx3/4vM/3i+RXIxBhdHYPkzb+Nz33xLlbU510QwGt3mGP797Dv9ZuovWMJhQEIDx3m+aef4dBgDR/7wqPcfctcQlJSuXQT9953N4dP/og32nbz/32rl32/W82SBRU4iTG6OgfJadjMZ790Fysb8pFoUrEY0VictIowPTXNzHQGrqXtyiUkZQtXs/aGtTTs6qFk2VpuvHk1JUFThlu6YCVr1q2mYfco85tWsnrN5Qdq1CpKJBImHHVJECUcjpDSmGpefJSUlFFeXsTM4dO88dp2XqgZIzE4Tl7tSjbdkCIajpJwFdFwmJlw+Fx/YJ0iHAkzE06STsSJxaIkFBT58mja+HE++/l2+v/Lr3jtxd/y3JplfOmP1lIUgOR0O3t2vMSB7mIe+PRj/OGdi8n1jvGcqjnMbVxATfAEB3b8ku/mu7TcWEOkt5WzIwP0uSWUyE7ajuzmN08upviTm1i1pIqSklIqKooIv9nGgT0v83z9FOmhMQKVy7j7nrUUXCbTDxQUUlxWRqGT4PSBF/nx9woY3jyX+GAXHaeO0hV1yESGOHV4F7/8heTGRQtZv66I4pJKyoqh/cxRXnnpGYqnQwyPutQtu4PVeXGikRgpZQLxyckMlJ9/gUxS0nAjH3v0C5zpGuHZwzv47a/XsbT+UZoqA2g3TOuBV9i+q426Wx/hC5+9n3nF5gwYqqilYeEi6kJHObLnN/zT30lab6kn3t9G52AvXW4ZpbKdjpOv8euf/5riT21maZFmpu8kr/W43LhpKbctLkAg0JkM6aRL2fwlrFm38pJ+1BcQIeav3sInHv4Exzsf5+TJl/mnv+ng5aeWMK+umEx4iJFpPyvv+DwP3dFEgV8gfHVsuPthHmvt5P/98QG2P/NL1q2exyc2VOPDZaLnCLtffIke/418+k8e47blJUjMIISJWJhI1CXjT5NMnHeHi44TjUwyOZ0ik0oSj0aIKy64YCR8hTQ2rWHTrU3sH0vQ2LSeNfMuHupSUlS7mnsf/iKtHUM8tf9Vtv7iNyyf+xgra4JoN0rn0b1s23aU0vUP8Sd/8ocsKPF+C+kE8fgU09l1iISJza6DQ3lVLUuWLoADp3n1uX/hWzlj3LjAT9+ZNrq7zuJUVaJaBzmx/1meeKqWj22+iaVz8iksrqaqVHCk5zSvbPsN9aqcsYFJChrv5Q82BIjHJpmayZBKxQlPT5NQkC/BF6pm3ZZP8qUvnuH//pdX2fXcE9y4fgGfurUWH4qpgZPs2fY8rZmVfPorX+bu1aVeCK9IJuPMzERRro90Mk5aZS9jaOLxGOPjUyidIpmMmr71BQ46M8bpQ7/ln771jxyP1dM+lsP8v/k0de/PUACWZVmWZV3E+eY3v/nND3olLMuyLOvfFkFucRHhvlMcOtxP9Yp1rFjWMBtsgSSvOJ/URB8dbe20d5yh+Xgzg5EC1mx5gM1NQQY622hp62BwHBqa1rN21QKKsumvnuHAi7/k2VeGWfPAZ3nogfVU5l1bI1PpBCmqmMOytT
expqkWEe7lyP5XePF3z/HC717k5e07eOl3z/LLn/2QH/70ad7sSDB37QP8u7/8Gp97cAsLq3LxOyZ0DOSWMHfRYurLHfpP7Wfnjld482gzJ4/s4dmnX+DUZDUPfO5LPPzAjdSV5OBGejj02u/42Y9+znM7D9I7GSM8EyYcnkbmleLEB9j77E94/KmXOX52gpnwDJFoGJlfRT4THHjpJ/zg8Wc50DpKJDpDLB5H5tdSU1VJUR5Mj3Rz6thRTpw6TWvraVqaT3Ds2FEOHzrIm/tfZ8/unbxxcoKC6nnMbyh9ywBb+vOoaphLVbFkvLeVE8ePcuCNPbz80ssc7XSpqy9FJ6cIU8P6W+/g7js3s3ZZA6WFxdTOnUtFgWast5OOzk46O8/Q0txC75hk4YY/5LHHHmHzmnrygxInkE91QwOVhTDW10n7+dOPwsINf8gXHnuELd70AMmx0+x55RXeOJWkui6Xyf4jbH/hZXZtf5Ynn9jKibEK7vvMn/L5h26loTQHKUD686lqqKeyCMb7umjv7KTLW073qKbxxk/whT9+lNvWNpDjDnH4la386/cf57lXjjMyHWVmapLxkVFSIkRRSTlFuW+vPkL68nHiQ/RPSJbc+hAP3buMAu8FkL48RHSA/imHJRs/wcc2LyDn/F7NOkr36Td57smf8OSvt3GwbYRINEw0Mk3EDVFUXE5lcS6hkGJioIsTR45xqrWZEy1D5Ncto7HG5dALT/Dk07s5NTBDJBohMhPGV1BKnhPh+O4n+MGPf8Urh7uZiUaIRCK4/jJqqmupKi+ldv5i5tXkMtZ5iF07drH/8ElOHnudbc8+z6GeIJsf+hJfeHgz8yrOtf6RgUIK8xyiox0cP36SluajHDzcwpRoYPN9t1AlxujsjNOwZgMbblzD8qXzKC3MIRhUTA13c+LwUU61tXCiZYBQ9Uo2bbmROSUh5GWOWSeQT9BJM953mubWTjrONHPqzDDBmtXcc//NlDPKycMn6B4cJROoYfWGDTTWleFXEQa7Wzh45BStp47T1u9St3AhRW4Lv/7JT/nt9kMMTkaYmZlidHiUlPaRX1JJab557YUTpKRqHgsXziGY6OaNPTvY/epBTracYO/O59m2u42iFZ/gy//uM9zcVEXI69kjAwUU5AVITHRw7Fgzp1qOcvBQM+O6lk33bmJOaJL21jBzVm9gw41rWdE0nyIxyolDr/HSK0dpbz9Dz9AwPe1H2P7cC+zv8HPHI1/mkftXUpLzluk1/mAh1fUNVBdLRrrbOdPZxdmznbS395DwNXDnJ/+ELzxyJ4uq881rKRxCBZXMW7iQ2uI0Z47u4eUdr3L4RAvH3tzFc8/upDeziIe/9BUeunsFZXkOsfFuXt/2c370k6fZ2zxIOBYnMjOFGywkP8eh682n+O///Ql2HupmJhYjHg3jBsqorG6gbPaOA0EwN0BsZpypSC53PPpFNi4quOTuEeEEKKyoZ+GieeS7AxzYs50drxzgZMtJ9r/yAs9vP0Fw4R/wp3/2BTatrCHH5zIxcIqdW3/E4z//NTsPdjETTRALTzKdkOTkFFNfVUggL5/cXMF453GOnjxN87FDHD7ehShbzV13rCM3fIbTYwWsu/kWblizmqUL6yjMDeHTKSYGmnnj4GnOtB6jpTNCxeL1LK9P8cazT/DkU1t55Wgf4Vic6Mwk4YyPnPxy6ioKCOWV0dC4iPpy6Dr+Gi+/vJuDx1s4fmgPLzz7Im2RuTz0pT/n0ftWUZ7vR6em6Dqxg5/+4Eds3X6EkZk48fgMM3E/+QW5MH2EJ//1X3ny+f0MTMdJJKKEYz5Ky2uor/Ix2H6Yl7Zuoz1awOK1d3D/Xcu47A0flmVZlmW954S+9oZjlmVZlmW9a9IMHPsV3/rb7zM197N8/d9/gfXzzhtUUSWZGO7m1InjtHaN4RTWsXDxUhYtqCNPjHPy4Buc6k1QVr+ElSuWUFtRSMDLZeJDr/Cd//3veGVsFV/9D3/Jx2+uv6jNw7VQJKNTjI6MMDzUR1fnWfqHxokmXXyBXAqLy6muq6OmqoLSklLKK8oozAtyyfhvOkNkapSBvm7Onu1jdCqJP6eQsooKqmrqqKutpCg/aKoRM3GmpyYZGx1jaiZKMqOQTpD8wmLKq6ooCGoik0MMjUwSS2RA+sktKKKiqoaSPEl0cojB4Qki8TRaOITyiqisrqWitJCgmGD/Mz9m675hCqrnUxaIMD45TSQaJ5lKkUzEmBzp5kzHDGs+8Zf8+7/4LKtqrhLAetvW132GtrYOeociBIpqWbhkGfNLU4yMTZOQJdTVVFBeXkJ+rrd/dJrw5Ah9Z9tpO9NJ/1iCnOJq5s6bx7y5DVRXlpB7fk/iK0zfMG8e8y8z/czprfznv/lb/vHXU3z8q3/JY4+sQw/1MxWHgtJq5jTMZ15DDaVFORe+XjpNZGqU3q52zrR30jcaJ1RUxdx585k3r56ayhJyAw7aTTA9OcbQ0CgzkThpVyN9QfILiigtL6e0uIicwNtPeeIT7ZzumMRfspCmhSWcHzPGxto41TlNqHwJTQsKLwrpXOLhacbHRhmfnCaaff1zCygpq6CirISCXD/ajTPS28rRw8fpHlNUzF3CyhWLqSiQhMcGGZ2YIZ50zXGVX0xldTXFeQ7xmRGGhseZiSZRWhAI5VNaVUt1RSl5QQnaJRaeYKi/m66zvQyPRZGhAsrKK6isrqW2torSwpxL3hvpxDRDPWdoaWmjfyxJfuVcFi1eyNyaIFND/QwMKcrrKiktLaGoMI+gT6LdOKN9Zzh6+Bjdoxnz/l+5lPrqktnw9zIHKsnoBL3tzZxoPsNIxEf1vMU0LV1IbXmI6b5mDhw8xYysYOmKVSxeUEtxro90fJKeM8c5fOwMU5l85i9ZybJFteTKOCNDI0yHY6RcjXQC5OYXUVZeTklJsdkn5y07k4wwNtxP99lu+gZGSKgghSXlVFRWU1tXS1V5EcGL1j2TmGGor51TLa30jsTJLW8w+6Y2l+jYAD19Kcrrqij1Bob1ZaYZHOhjYDyNFCliEXMHhi9UQFlFDXPq66goybmm1kBapQhPjtDTeYYzZzoYmlIUVc2lsXEBcxtqqSjJu+S11K55Tm93F929A4xHXEL5xZRXVFJTU0ttdSWF3gUdNxVjanyIweFxZqIpEA7B3ALKq6opK8olEx1hYHCU6UgSpSWBUB5lVbVUVZSRFzz/ok2akZ52zvZNU7PyRuoLrxTMazKpGBMj5jXo7R8ilglQWFJGRWUNNTU1VFdljx9NOhFlcmyI0fEp75iX+IO5FJdVUFFWSklhCFAkIuP0tJ+ipbWTsYiksn4hixctoKoAxvo6GEgWUVdVSmlpMYX5OfikIJOcuEchLAAAIABJREFUYaDzJIeOnGI0FqR+8SpWNs2jNE8zPTbK+OQU4WgShSQQzKO4vIKKsjKK8wOz+zkyPUpf91m6e/sZnU4TzC+mvNzs55raKoq9/axV2rwvB4YYn4qSVuAL5FBYWklVRQk5MsZQ/yBjk2FzHPuCFBRXUlNTSWmhj8joW
Q7v30/nVCGrb9nEysa3vqBpWZZlWdZ7x4bXlmVZlvUBcVMTHHj2e/y3p4e55eE/5TMfW0PRBbclazKpBIlkGuELEgoGcKQXMCTjpNLgCwYJ+J3ZXq5aTfHaE9/mvz49yA0PfYXHHlxPZd476xKmVYZkIk48kSSdUSAk/kCInJwcggHn2gbv04p0KkEikUYJH6Gc0AXr/V7SapojLz7OT5/ronHzp3jw3hUUOC7pdJpMJtsX2iWdinJy+085MDGX2//oj7mjKXht89cuqUSCRCqD9AXJCQVxyJBRIBw/viskZlplSCUTJFMK6Q8SCgXwXa509m1Ofy68nuGR/+2b/Me/epQGX4JUBvzBHEJB/2UrdC/enmTKNcsJBvBdclXiPaAzpDMahA//RWGmVmnSGY2Q/kt+9rYWoV3SyQTJNPiDoWs/fq+JIpNKkUgkcbVDIBQiFPS99fy990UyrXH8QW96hZtRaBx8l9nWC7chSDBwlWXMLipDMpEg7QoCoXPvP+WmSSaTKOEnFLrwApRy0yQTCTLaRygUekf73swrTiqjcfwhQqHgFd8b3gqTTiVJpl0cf4hg0IfE9EfWWuI7/8nZvsZConWGVCJOKq1xAiFCocClF9WugVYZ8z7IaHyB0FXfN2Y1zGuTSJqLa8FQiKD/vSvXVZkMGVfhBK5tGy98DYLea/AO3k8qQyqZJO2ac0vQL73+8C5aXv7cN3scKmn6qr/lQXCF5WrXtDNJpNHSHJtB/7XdXXTtC3FJJZOkMoJQbs5bH6uWZVmWZb2nbM9ry7Isy/qAOIFS1tz9BT49/SN2HXyVNxoquX1t7Xn9fAW+QA75gZyLninwB3PxX5KtJunYv5VnX5tg2V2P8OB9qyl/h8E1gJA+QrkFhHILrj7xlWdyhXV+r2nGW1/h1088yaHwZjY1LKSm9OLK3awEYxVzqHaKKC669iBECIdgTh7BC14mP1fLUoT0EczJv+h57970s89z/OTkBbjWp11+e94Hwof/Cj1lhfQTuJ522hfPRzgEQnkEQu98XpeS+AIh8t/OzC/7vpA4b5GUXe82mPdxPhc/TTp+cnIvv+Ol4ycn791p9Jud1zUfVkLiD+Zcum+cy+wbIRBCeF9656t3uL5C+gjm5vN2Tlnv7fF1KenzEXgbp/i3/RpchTkn+S7YR0JKHPkWx+8VjsO3tVzhvPe/T4RDIJTLu3DasSzLsizrHbLXkC3LsizrAxQqamDzH32e2xfFeO25rbx6uI+Euo4Z6QTt+7fy622dzLn5U3z6oc3MLbu22+R/vynGezto7zjL6RMHOXz0BANTqUumSsdGOLz9Sba3xKmYt5LGine5iu99pdEaNBqtFfYmO8uyLMuyLMuyPqps5bVlWZZlfaAk+WUL2PLxz1D85h5OnT5JdWkBy+cXvY12BprJzkOc7IFld3yKG9Ysoqo497pulf/9I6lqXM7ypkb2NL/JD779v7B/2wqWLppHZUkuUqeYGR+ib2CagjmruedjH2PL+gWzgwV+9GhikQjhaIyMihKeiRCJKCiwlzEsy7Isy7Isy/rosT2vLcuyLOtDQZOITDAdhdyCQgqucBv/laRjk0xFNaH8AvJz/O9LL+mPCpUO09t6gBeffZpnX9zL6e4RokkI5hZRVTeXxcvWcvPGW7hp/WoWNlRRlBf4SO6/TKSXg69t59dP/IIXdu2nfThNdeN6br/7Lm6//XY23byeRbW5H/RqWpZlWZZlWZZlXTMbXluWZVmW9XtPu0lmJscYGZ0gHI2TTLumP21OLgWFJZSXl1GYH/pIV6vrTJypiTGGhscIR+NkFEhfkLy8AopKyygrKSYv9FFuh2JZlmVZlmVZ1r81Nry2LMuyLMuyLMuyLMuyLMuyPnRsA0TLsizLsizLsizLsizLsizrQ8eG15ZlWZZlWZZlWZZlWZZlWdaHju/8b5RSpNNpbCcRy7Isy7Isy7Isy7Isy7Is670mhCAQCCDEpYMQXVB5nU6nSaVSNry2LMuyLMuyLMuyLMuyLMuy3lNaaxKJxBXz6Esqrx3Hd8Wk27Isy7Isy7Isy7Isy7Isy7LeDVrzlsXUvosfkFIgpbThtWVZlmVZlmVZlmVZlmVZlvWe0Vq/ZQ5tB2y0LMuyLMuyLMuyLMuyLMuyPnRseG1ZlmVZlmVZlmVZlmVZlmV96Njw2rIsy7Isy7Isy7Isy7Isy/rQseG1ZVmWZVmWZVmWZVmWZVmW9aFjw2vLsizLsizLsizLsizLsizrQ8eG15ZlWZZlWZZlWZZlWZZlWdaHjg2vLcuyLMuyLMuyLMuyLMuyrA8dG15blmVZlmVZlmVZlmVZlmVZHzq+633iTx//F6KRaQQarRUIidAaKQRogUaTTqeZDofxSYGbyeDzSaQUCCFxHIkQGqXN96BQWpmfCR9aazQK4UgEEpTAJyUKF4QCoRHCh8ABDUgQCDQChEYrhRACrTUAWinQCqVBKxeNi9IarUEphdAa4c1HOg4+xwdIHMeHlKDRIMy6CgRSS4SUaKnNz8xiEEi0MvtBSI1CI4VEa0B76yhACkBrtJkbCAECtNIopZFSmH0pQGGeLLQGJRAIEMLsL2mmEVqbbdUaBWihcIQDWiCEAG8/KC0ATSQRJ57MoLX52Re//FWqqmqu+0CyLMuyLMuyLMuyLMuyLMt6N113eN3f18XkxCjSy0WFcAANWiE0aAFKuURjMWKRCCK7QEfiyGx4DSZ5lWjcc+Gwlgg0ONoExFogkUgv5AYXJGgkAp8pHxcm0DUxr8ZBoLwwPRuEo12UMsG20i6uVggkylU4woTCQjogJQGf34TqUiDRCAmuNkG0QCC0g5ACJRRefO0F4GY+WmukI1HaxNOgZ0NkLYTZ9uz24yARCJENmE0QLc0moQSgBVJrb3oTbHt5OS7KC8OZnb8WCkebCwkIYYJvQCFAQyQeZyqaQGsJWpFMJK73ULAsy7Isy7Isy7Isy7Isy3rXXXd47SWlprIXiXJNwGtSWS/OdQS5uSFcN0MykURoQTqtwOdVHguB9NJv7VVNa62RKIQE6QiU63oV1RqFMiGvFCilvWJlhZYCrRVaC4SUpvpYa7J57mzZMxrHkSghEBqklmgt8XmFz45wvBBcIhyJlCZgF9qslyO8Livaq4jWGqXVbIgsTTSMCZglSmULsr0vBIhskOxVeZvJvSpxlQ3Cs3PMVo6bwFlrU7Wd3Vbw5ueF3tkq8+wacN6ctFZevi/N11p5+13PBv+WZVmWZVmWZVmWZVmWZVkfFtcdXmuvyveCsNr7gclmvTYcQpKbl4fSkEok0UqRTiuE8COlQOHiOAK0nJ2P+UfiZvBaZEC2plohERqvtYY2LUK0MoEyJg/OTpldRyFMCK29oN1xBMo1VdPSi3mllKZNBxotXBDaa08ivYJmjRSmRYlGo5RrFunlviY/Nl8o153dIxrTkkRK6c3Da/uBqdQ2CbvXzsRr+4HwWobobOadDa+9PeMF4ELKc+G8np3YC7G9Zwpw3WxVuwn7RfbVEhJMXfn1HgaWZVmWZVmWZVmWZVmW
ZVnviesOr5VrSoe1NhXQUoLS7myIm+0/rbWLFJLc3FxURpFRmkwmhUpq/H4Hn18iFNniYryU1vSi1sJUWuP11Bagcc/1stYatAmaBRIQuO658PZcWxITcJtvvZ7VUiBw0V7VsvAqwfEqlE2vaZCzRdAapV0vW5ZeBXS2XpzZViFag8JUUJv1VN5WZVuXSK/FCgjhPV9IzGTCK8gW5nshvGrybOMRhXaV2dlez2tTqe6tYrYSXgq0MBcPNCaYN9Pr2Sp0ISRC+pBKeFXcNsC2LMuyLMuyLMuyLMuyLOvDQ159kssbm5gmmUp7VcnKC26F6bcMZnBCrxpYCEHA56OgIBfHL3D8Dq5Ko5SLyrhopWfDXxC4gKuyrUS8Cmbt4uqMN9BixgtqFVq5oLzKZ61RymvjITC9pSUIoRBCe5XLLkIKpGMGMsxWQmsTkeOS7TltqrNdpbzgW3iBumlJoszoi7NV0bP11N5/hDcQomlPohEqWyDtzgbkGuFVXJsN0GQf12ihzgX0ylR6gwapUUJxrlmJaVEiZi8YZCvRzf7Lloeb70yIrYVGOAKfkF77EDU7oKNlWZZlWZZlWZZlWZZlWdaHwXVXXkfiKWKRSSrLSgj4fHgJLkor5GzzDq+lBuZfv99Pfn4BMzMz+HyQSCYIBYKmQhllWklLM4ChALSbHdxQgBSzrUPEbNWzCcbxgmWNBim8hhlyNlgm24taghYaV2eQOF619rkWI9lKbYHjzTfbFcVrZaK9XtbZNtHZamWdbVXiRchCeF1MxOw/ILye4CrbVjs7viLoc/XbZNfFa5di2pjgVU2DFtLrcy29wSENIc0+ELMzzT6f2V7bElNl7RV5e/tSz05zza99JEp7+xkGBwdJpVIUFBSwYMEC6urq8Pv91z6jD4FMJkMikcDn8xEMBmdfd8uyLMuyLOtCWkM8DqOjipkZ8/dwUZGgvFyQk/NBr531b1EmA+m0+VpK8PvN/y3rvaI1RKOakRFNJHLuPFhZKQiFPui1s37faG3Oc5mMOdZ8PvPvlaTTMDammZgwXQFKSgRVVeItn2NZHwXXfQhnlCAaTzM2MU1FaQl+v9dl2STLXpad7YQtvMBXEAiEKCiAcHgGnz9IwqveDiBneziLbGDttfow4bZpk+Fkl0G2WNirz87+kWLya9MTOjsOoRDnslwh0EqjhEZKx6u59oZ21Aqhpelzne0brfW5ZXnbJITXU0RolNKz8/YanACSbNGz1GJ2bMvZQRqFWda5vtTObDidDcWlcEzzEe/pZiBKvGpqvL3qbY/WpoJbeftKgyO9MFyeC76zbV1mq8Wl157kGg0PD/P000/z4osv0t3dTSwWQymFz+ejsLCQVatW8eijj7Jx40Zyc3PfzuH0vlNKcfDgQR5//HFaWlq49957+dKXvkRlZeUHvWqWZVmWZVmXUAomJjSjY5rCAhOUvJ81A/E4nD6tOHRIMTSsZwPDYBCqqgTr1kqamqQNb36PKQXT05rhYU1urqCm5v09Bi82Pa05cEBx7Lj5TFZZKdi8SbJggU2vf18lkzAwqEgloaZGUFj4/hUeaQ2RiObIEcXRY5rpaU0mY34WDEJVpWDdOsmKFfIDfV9Y75zrwuSkOdcVFQnq6gQfRI1bPA4nTyoOHFREo5qcHMHGWyRr18pL1sd1ob9fs+c1l+5uTSpljtlQCBoaBFs2O9TX20I966PrusNriURpQTiWROtJyksLcBxpkmMFs+XJktle0AiJlIJgKIQGZqancXyQTKWRQuL4HBM8Y5poS/ONV/WMF9pKtGtKjKV3WX02WJbSC2cB1GxFtBRytitGdoBG4bXX0IDOBtBSmp7RmMDXVF2bZZhBH72WHN76mEpsPVu1rWdDYbyA2oTT3mQ42fA5W5muFVJIlFbeYy7ae8z0EteAQotz4b/U2rRjQaOFRCtp2oDg7Xu88B+vohxtqrxny62FNzilRqkMQjqmxclVSq/b2tr4h3/4B7Zv304kEmH58uXccsst5OXlMTY2xqFDh9i6dSuHDx/mL/7iL/jkJz9JUVHR9R5e76mxsTF+8Ytf8Mtf/pKWlhampqZobGwklUp90KtmWZZlWZZ1WdGo5vBhxdFjihXLJVu2SPz+9+eDaCoFzS2KnTtd+vohff6fTAKGhzWTky6ugjWrpa3w+j0Vj2tOnFTs369obBTcc7fzvh2DF0uloLNTs/8NxcCA+diVTmtisQ9kdaz3gVIwOKjYtk2RycC998j3NbyOxTSHDil27FRMTpoq/5ISExoODprz4MSEi9Kwdo20dwB8hMXjmjfeVBw/rli5QlJT4+A479/yzTFlguhTpzQTE+ax/HzNiuXn8qcspWBgQPP88y5tZzSJxHkzE6YSO5l0efCPHEpLbYBtfTRdf3gtTGsOV7mEo0lAUVZShN/n8wYKzFZHm9EYhRReIbMJdIOhHAq0JhqJIAWk00mkE7qgulhL0zs62/fazPC8QHq2ctkbjND7Hq2RnLsapbX06qvNQIUmJDYBuBm4EG96rx+0MGcDKSQoU51snu161eSK7DCJ59qKZNuamO9NwbfppJ2tnHaExFUa4TizA1lmh3IU2ZAZjcI11dfZViJeaxQhBI4wPbyzbUyE46CFqdA2qy1NGI95IJtLC0eY18U1O0X6HKTjIHCyJfNXfK3Hxsb43ve+xzPPPENtbS1//dd/zU033URJSQk+n49UKsXQ0BBPPPEETz31FP/8z/9MXV0dW7ZsIfQhK78Jh8N897vfZevWrSxcuJC7776bHTt2fNCrZVmWZVmWdUVam6rrzi7N4BDMm6tRb+PuuXdqfFzTfFLR2wc+B5atFqxZLXAVHDuqOd2q6euDU6cU8+aaNiLW7xetYWYGOjs0A4NQUWHClA9qXUZGNIcOK0ZGIHsTrfX7LZUylaWdnZq8PFOF/X5RCsYn4OgxzfgEFBbArRslq1dLYjHNmwcUb75p3hsnTyqWLBbk59vz4EeRucMETp8y55dwWL+vw4NlMtDVpXjueUVfn/ldH8qBaOTKz4nFNAcOuLS2mTvyly8XrF8viMdh3z5Ffz+cade0tipuvtn5QKrILeuduu7w2gz0B45wQClikTRCzVBWWkjQMeGzi8j2spjtg60UXuWvIBQKoZUmFgmjUCSTSYJ+Pz6/H9c1AbTXANrQXlG30Lha46jZLtGmfYarkNmWIkKANkGukOcGRTQFyMJrt6HMAIpkQ2jvLx+lkV6fD1fJc/2h8QZNFF64TrYK2wvBvRYe2YEWtVSzLTpMX20TKquMC0Khpbdsb99orRFSm0D5vDhcZ1uXKBObm9V3vGpsU7VtwnJ1XkW5nO1z7bou0vF6g6PRSpnqda8Htp5tYXIprTW7d+/mpZdeIjc3l69//et87GMfo6Sk5IL+0PX19ZSXlzM9PU1raytjY2Ok0+nZ8Hp0dJRt27axd+9e+vv70VpTXV3Nli1buO+++6ioqEBKU3G+f/9+9u/fzw033EBubi4vvfQSR44cIRaLUVVVxf3338+9995LKBTipZdeorW1lU2bNrFmzRp
yLmq42NXVxbZt2/D7/TzwwAMEAgEymQyPPfYYd955J6+++ir79u27nreAZVmWZVnWJWIxaG83HzpzcmBOvSAagePHFTk5cOONknnzJMkk9PYqTp/WDA1pEklTyVddJVi2TDB3riQQMK0R2to0J5sV3d2aTBrOntXs2OHS0CBZvFhQUCBIp2FgQHHqlGZgQBOPQzAEDfWC5SskNdUCKSGRMGF0NHr1T+PBoOmXOTVlAhs0VFXBTTeaW+OVArTL0LBmbAwmxmF6Rtvw+gPmujA4pGk9rUilTHuFoiI4cVIzNqZZsVxw000OqdR5x8ygOWZ8PqisgKVLJY2NkmDQtEo4c8Ycg+0d5hjs79fs3OVSVydoWiopKhJkMqb6tKVF0deviUXBH4A5dYIVKyRz5phjMJUyx2A4fPVj0O+H0lJBUdG5Yyoc1jQ3K9rbTYgZCMDo6Hu5R623KxzWtLZqhoY1BQUwd65gZFhz4qSmuBg2b3aorDDhWtdZxelTitExc2yEQlBXK1i+XFBfL72Ka82Ro4qOjnPV9QcPKkZGNAsaBQvmmzutI1FzvjzTZqqjTc9fWLLEtDXKflRMJMyxmkxe5RgUEAoKSkshEtaMj5siteJiaGoSNDQIkknB9LTm+AlNIg6TkxCJQH7+e7qLLUzQ29enaDmlcaQ51xUXC15/3SUShRvWSdaskbiuqT4+cVLR16uJRE1v/JISWLxYssxreRWJaI4dU3R2mgsRmQx092i2bXOprBSsXm3OiZkM9PcrTp40585kwgTMDQ2CNasllZXmfOW65sLz5OTVz3WOA8XF5vd5tmXJvHmClSsEJ5s1zc2Xn4dSMDauaTll2nnV1MBdd5rzdzqtKS8TDA1pSkoE8+Z9MO1PLOvdcN3htYtCKdeEvFrhAuFoHASUFebjdyRecbLXzcK82RyTJKMBx+cjNy8XAcSiYTIqTSKVxq/N4I5mEELXC4Al2ebS4vxWInjzPtc7BCG9ntnZim2vd3Y2HPUeRHthc7antqlY9sJyrZFSIKRGaYVC4EgTKiuVbc2hZmvCtXa9ftcmDM6eFEylt3lMaY0j5bk+2rMtTLyBFr1qdu09Lxtaz26o15LEVGxLpBd4g8I8orJF2mSHrURpHEzVtdLK3L6kNFLobJTtVYhfXjQaZdeuXQwPD/OpT32KLVu2XBJc4+3b+fPn841vfINwOEx1dfVs3+ve3l6+853vsH37dsrKymhsbERKyenTp3n99dc5duwYX/va12hoaACgpaWFH//4xzQ3NzM1NYXf76eiooLR0VH27NnDoUOHUEpx//3309vby09/+lOGh4eZO3fuBeG1Uordu3fz/e9/n40bN3LPPfdQXl7OV77yFfLy8igsLOTAgQNX2HLLsizLsqy3L5HQdHQo3jxgQps5fYKpKdPioKYGliwxfSyPHnXZu08xPAyJJCjX/J135oyms0uw6VZYvVoSDsPJZsWJE5pk0nTm6xuA8QlNJKKoq3Pw+800e/eq2eDadUE60N6u6enRbN4sWbRIMjGh2bPH3Fp8NRUVgs2bJfPnCR552CGR0ASDprI6EDAfmnNyzw0ElR2DxfpgKQWjo6YaNBw2FzAKCuH4cfP5qbJCkExCc7PilVcVg4PmNnPXNZ+zgkHo6HS5+WbNTTc6xGKm3/mRo94xqGB4BGZmNAsXwpw6yMmBtjbF7lfMhZtYHNyMCYja2zVnuzWbN0mWL5dMz2hef93l5BXCmPOVFAtuvVWybp357JFOQ3e3CTIdBxY2CqIxs73Wh0N2YNfTpxXHT2hKS2F0RNDeoRkchPp6uOEGM9jhgQOKvftM0JxMmWPL8c6DXV2C226DxkbJ4JDpbz49Y6aJxeBks7lQFwhK5s8z4eSePS7HjmtmZsyxorW5uNHR4TI0pNmyRZKbKxgd1bz4knnsrQgB1dWCu+8yd3XL8/KN7HnPDKInvPGouGA6673lujA0rNm71+Qg8+cL/D5TIe/3Q0O96Uve3a3Y9qKit9f8fsxkAAEBP7S3uwwPaW6/3Zzrjh4z1f2JhImNBgZgelqxaKGgqcmELSdOKHbtdhkZMRdClDK/bzs6NGe7NPfee+4i9ZEjiv1vXP0XYygEN90k2XSrQ1mZ4P77JIsXm/7pHZ1Xvs0lk4HhIdNaxOeDOXME8+aZi99+v2DpUsGCBRq//4Mdo8Cy3qnr70iXHVTRlFyg0ChXMB1NoF1NWXEBvoBpyQHMhsSmm4UJS7UyLTWCObm4SuG6YZSryWTMm1OLDI6Upj0GXrArsuEzXoqtTQsOKWcDa7TA1fpcf2uRLZ7WyPOroxFeMH7uXjOdnYc3QKTADH5oniNQLmZAR6HOq9ZmNkQGafplo3G09ArHvWVocF1lenVrfe65swNGYtqUCJEt7gbNbDW58FqZSOFF0zrbBkWZViUCtHIBies9WWL2u8j2RdIa6YCjvatuSnnrfXljY2O0tbXh9/u59dZbKSsruyS4zpJSsmDBggseSyaTPPXUUzzzzDOsW7eOr371qzQ2NiKE4OzZs3zrW99i69atLFu2jEceeYT8/HzS6TTj4+O8/PLLfO5zn+PBBx+kurqaZDLJT37yE374wx/y4osvcsstt7Bs2TKklOzbt49HH32UyspKfN5fEjMzM+zbt4/R0VGWLl1KUVERgUBgNiRX7+f9tpZlWZZl/ZuRSkM0akLpeNxUhy5bJrwBxqB/QHHosObsWcjLgxvXC+bNFXR3m+q9ri5Nfr6iulqQlwfz5grGxzQ9vSAUVJSbSsb58wXBIPT0mj7E7R2agny4eYMZYKp/QHPihKnIystTlJYK0mnNxCQMDnHVVgtCmA/weXlmPS4eIyUc1vT1aaanTdVYQQH2VvkPASHMxZB43LT6ONutKSyAmmpBVTXU1ArTduOQqTAMBmH1KsGiRYKBQc2RI5qeHgiFNHW1ivJyQX29YHgYus5qMhpKS2DBfPN4To6p9N73uqKtzdxxcMM6U+U3MqI5dkzTdkYTCikqKsznmKmpazsGTR/rbLGQCSiPHFGMj8OSJYKVqySHD9m/6T9stDZV1JGICZEzGY0UsGa1oKpKEAwI+vpMz/LBQSgqMuetqirBmXbNseOa9k5zHqysFJSVwsKFgrY2E9KFQiaonFMnqKoURKOaY8cVb7xpwsn6ObBmjQmcT57UdHSaNjPl5YJ168x5cGzMhOlvRUjw+UwAWlxsljc2ppmegc4uc0EyGtWc7VYkkhAKQl0d72sv7n/rlGsuZqQz0NWlCQSgsVFQUQ6V3rHx5gFFa6spUlzYKFizRjAzA6+/rhgagsNHFPX1ppK+cYEgkdB0dZmLFBUV5tirqREIaSqud+126e421fVbNpuf9fZqDh7SnG4zx21FhTDtlsKaoSHOK7y8vJwcCIfN79I5cyQ1Neax6em3fqLrwvi4aaMTCkFZmSAW07S0aMIRM7huQ72gquotO8Va1ofedYfXvnOlxShtWmtIIXBdzUwsgUZTVl5IQIrZwQ+FN3Kg9vpYCy+klY4gPy8f5bokkgnSSdNhXkgH4TNtP7
wGGpio16vezvZq9gZSNOMeer2ws2MgmubZpie2N5ChkN7ghrMV0BrlhctC6tnt0l6lc/YqqpptYZIh+0OB9JpNZyPobL9r8zN4GRgrAAAgAElEQVSzrWYGwiuv1tku3kogpWMidG91smF3tlc4mEEas58VXK+/drZ6/fw+2+Yrr0Ldu2igNEiR3W/C+695rYTXPkXoK5/FxsfHmZycJD8/n7q6OgKBwNs6TgYGBti1axdaax566CFuvvlm8r17qOrq6vj4xz/Ot7/9bXbu3Mk999wz+zOAyspKHnjgAdatXYc/YC4Tbtq0iaeffpqzZ88yMzNDU1MTy5YtY+/evZw6dYqmpiYKCgoAOHPmDM3NzdTW1rJ27VryzKcuy7Isy7Ks94wQ5z4gplKQVwmbbjUVVMEgBIOCiQlNU5P5MFlUJFi6xNxmXFGhGJ8wAeDwsGZkRLN6tWTVKkk4AkPDinTaBNe33+ZQWmo+uHZ2aLp7zG3TTU2C2283gzKNjWmUcjlwUHP2rAmaFy0S3HuPZMNNV9+WUA7U1lz+78RIxFS/HjmiiCegrBQWLBCUlNhPxx8G2ePQVaZWZcECwcaNDvn5prI6FoNFiwQlJebixJIlgtpaSf+AYmLC3A5vwj1NQ4OpmI4nFAOD5jNYba3gttskZWXm9T570gThQpjg6M47HCoqTMsZIVxee03T26Pp6lKsXu1wxx2S1auvvh3BoKl8BRMSnjqlON1qqnnXrZWUlQuwh9yHUrYYK5OBnBDceadDfb2p/gwGTZuZ1askixZqiosFq1ZJCgoExSWK/n6XoWFzUWR62hyDGzbA+LhiYsJcEFy3VrBsuSQYEIyMmLtTwhFzcW/TZsnqVQ5KaUpKNFPTLhMTphrcnHsFDz3oXNMAn7m5ph1FICDYuFESiyu6ujQ7dyqOHdO4rmmbFArCihWCDRscgsH3fPdanmyxpHLN78M1ayQ33Xju961Smob6/5+9N4217DrPM5+11t5nPnesWxNrHkhzMEnRtMxJAz1otBXHDaQRI4bhuNFAGhYMdAcNIwgSw3ACGGj4Rzs/ggRpt92W5VGK3AosR7JapqiJFCVZJMXiXMUab013POPea33941v73CpOVSxSrEt6PQQl1r3n7LPuubv2Oefd734+Qy0H5/RYt3+/ZXVNdR5f/4awvKxXdNx8s+Xuuy2lh6NHA1kGu3frcNpGAxA4ckQ4fhyyXN3SP/VTjnZb283Doeex7wjPPS+cOCkcPGC55ycc+/ddnTZk61YzufrlailLDalF4lUxi8Iff9pz7pxqRLJM9V93/5jhnnscbzDOSSQ2DdccXgfvAUOwVl+VvA4nNAJlENb6A8LFwPxMh1qebTSeTRReiLqXjY2RqrO0Ox2CEUII+GJM5i3BiA5nrMJW0aZw1Y++tNEdRRlU6g8tH294okXUUm1io7qa7hFCiEG6uqc1PCZe9qP/HRA1lwAEwUz0I8RWdlA9it1wTauGRMPyyoutqwsbi42hevXcsJFTT26y0XSWSY28+tnUk60hd6hicqsWbWwMwePPbkQ9cxrSO4yxqlIJr30wHQwGFEVBvV6n0Wi8Zuv6tTh+/DgvvfQSCwsL3HTTTZdpPfI859Zbb2VmZoYXXniBwWAw+Z4xhsOHD3PDDTeQ5Ru76czMDM1mk+FwyHg8Zm5ujnvvvZevf/3rPProo/zUT/0U3W6XEAKPPPIIp0+f5mMf+xh79uyZNLITiUQikUgk3g6cU3frTTdZFhY23kMtLBi6XfVwZhk0GqremJ42xHPwjGJr21potQyt5kYg2airc7XTUZfl4llh0NeQpd3SsGhpSShLodMx1HJheRUWz+qH8/37zVUNfHyty99XVmRyuf/Zc9Bqwm23Gm671aYPxpuQ6Wl1/lbOadBw5O67LWWp+2mzqfvgVFf92Bg9+dLr637QahlaLSaKwlqt2l8NFy7oyZb1nm6309YCzdKSMC50P63XNVg8fUa4807Ys8eya9eV117tg+q21da1CNx6i+XwYW1PJjYp8ZBXq6nK4KabLNEqiQhs22aYnlandZbpPmYMzEzrlR4i1XEQ6jXodlUJAbpPdDqGmWn1/S8tqTrGoG3VVlM91CK67akpw7nzwtlzwtKSBpKHD5srtmGrj77O6f9v3WrYvRvOnoWLF9VnXK1n61Y93s/Npobr20n1VBujx7rbf9RMTngBeG+480491oEe61R7ZZifM1gjeK8nZI3R/aqKLIzRfW9mxpA51XUdO6ahcLuuWqPhEEYjYTwWZucM1uqx7sRx4fAhPdG3bduVd4hr1c2IwDgOLx2PNVwvSsgzPW6ORnosHgyE6Wn1dicS70TehPNaJmGppsgBMRpoWwOlwGpvCAjzs1M0chdzWhvVIWrNMLE9bRBs5uh0uogPFCIUZUkgwxmLM2DiIEgxPga2Nraf47uoOISxGpIYZMMdbY36qsVsBNnxXjhj1ftsok7EqAIDVC1Rdb41IzZgLCEImVHXtYnVbP159NZVg9qgl3eC04Y4JQar7W+JwbetwnCDNRo8TwZLxhb1pWE5xEA8CNaqNkViezuEgMNOVClinIblNibeYhErhFJDexGDsRmvVVnIsgxrLWVZUpblFS/tezkXLlyg1+uxbds2ut3u5HmtmJubo9Vqsbq6SlEUl32v2+1Sq9UuC8ydcxN3uYjgnOMnfuIn2L59O9/97nc5efIk27dvZ3l5mW9961tYa7nvvvte1dOdSCQSiUQi8cOklmvAVwU2FVn8UPncc8KJE4G1dSgLbcOePiOxoKEtqiuFK8Oh0OvpbQcD9LLlp/VTuohuczDUD7Jra0JRCI2GmYQxb4RK2/CNbwYefTRwcUkD87vuNDzwgHo609utzYU12qyembk8GHFO95kXX9TW/tqqBh6DgTb3qitCr2YfHI9hbV2bj6MhfP9x4eixmBSJ7n/9vj7+2hqMx0Kz+cb2wfPntel/ZlGb3Xe+x9JuG1ZXk+t6s1Or6dDNRmPja+qJ1n3n6WcCp08Jvb4eF3s9OLPIZKzVlU60VcHjMM4EWFyEz/2VnzipyxJWVjZc2aurWn57I70miSdjvv71wHe/qxqR2241HDqs/vinjwROnYZHHlW16AMPpPb124110O3owMZLcU4b10eP6dUhK8saPo/HcPac6KwGNq5SeS0EDYLX1vWYMxzAN78V+P7jeqfq9bb04AoNuqsTMz9sB3p1FKwc7z/905aDB/Tk3lcfDjz9jHDuHDz+ROCmm+xlfxcTiXcK1xxeG6shdAjatgbVe+A1VNQBiIaVdVWAbJmZIs8diEesDkucNI5lQyGSZ452p8NaWWKDoSh10oexhhDb15eHzyZ6sA1CwFVDGMVN8lghTE5/msmQQ3VYExUkxlUDEs0lwxInwo/o5xacdQgWaz0aMQfVbhiHBH0sa8wkuKbaiqnWUbWg9bGsYRKiX/of1Z8nP6u1GvhL9Q7SxCDYT0JZYyzOxbN2MYjXRreJCpNKNWIIRoN8Xdtrf8qYnZ2l0+lw9uxZzp8/T1EWE4XH1TAajQghk
Oc57lXeoWZZhnMO7/3GgMqIqdr2r8Kltz18+DC33347X/ziF3nqqae45ZZbeOaZZzhy5AgHDhzg1ltvvazxnUgkEolEIvF2YC3k8VLlihDg+PHAw19TB2cV2EjQ4CWOk7kqRPQDt/dxdEpgMnjvUtqtah3qQD5xIkwag69Hq23YvUsHNIrA2bPCww97vv2YDkSbm4P3/rjlve+1bNlybYF44odMDAnzfGMfFIHTp4WHv+b5wQ90cF4R90F5+T54pd1EdH+r9sGqLbuycvnNWi1dR+bUg7y4GK5qyGKjqU3WkyeFJ58UWi1VoNRybb2urAjFWG/rvQZLy8tCu52Gk20G4oXJ1GqXf+QsSzh6LPDlLweOHRMGQx3wKbGAFl57Pt0rkLgPVsNiS68nSS4NDPWqAm1l+6AB9gsvCr31K1WvodPWGQInTqg3e3lZfe8/8yHLzh2qP9m+zfD/ft5z6hR87+8DBw6omiLx9mFQlcelr0Mi+rt++GuBxx7TAbZFsXFS7g2/3vqN19cgerLvZf07uh3d3228euXEicDJU3LFx8ky2LHDsmfPGzsDbK22w6ttHNhvuPcey9SUnlgZDnVo83isJ3b6fT2BnUi807jm8LrSMDtj8cFr0IrBaIJLZtQxFYBer8DIOvOzU+Q1gxePsfGoUik2jEwa03me0+l2WVtbxTpLMR5jarWJDsOGqrIdQ0wTKgMI4sxlSg2J4XC45NWyau5WQTLEQYqxrux9eFlwqoG2FT0dJ1FFgtWDkKq1TQzQKwu1ht/WOlSo4hHxuLhNEcHaGMbHNYXLjmhmI8tmo/FQKUKquZXGOH2lNpVeRL0gIlVjwmhLG9W0BO8pg2dYeryXS7b56iwsLLB7926efPJJHnvsMd7//vfTenl96BJWVla4cOECO3bsoNls0mg0cM4xHo+1uf0yinFBWZaThve1MDU1xX333ceXv/xlHnnkER588EG++c1vcv78eT784Q+zY8eOVw3OE4lEIpFIJH7ovOxtVq8nPPkD4fvf3/Cz3njYsLDVMBzAU0cCx45d5aYNZNnGpfSdLrz3xw133G5f0fQyVltpvZ7w7ccCR45c+RP7tm2GBz+oXuPz53Ug36Pf1nXv3An332+543bL9LT5oTfLEtdGdeHopfT7wtNPa4N0ZRVmZ+BHf9SwY7thXMAzz6iz9WofQMNx/WMzNvF//MftK05mmNgCL0v43vcC339crtjqnpuD2261rK6ql9Zl8LWvBb7zHU0qiwKWV6qrAuBv/iZw5Cnhfe+zHDqUdsrNgLn0Q21kdVUVME/+QPClKjduuVlbs+s93T8WF69++3muIZ6zOqzxp3/KMTPzytvmuV6FsLQEX3tYHe5X2vaOHYYPfsBy9qzug1kG81t0HkC7baIDHuZmVU2yvKxXL+zbl/QhbytmQ61VUZZw9KjwzW/qkNdmUz3Vu6NC6YUXdajxVW0+7mdVY7/dgg98wHLLza/8JVurx7oQhKeeEr729XDlgY0NuPde2LPnjeUWzhmmpnS+mup0VH1SrXfLFtWkjEY6APdVIplE4h3BtYfXCL70OBP1FyYGpZVHOvqZDQYfDOv9AmPXmZvtkDs72QoGxG4MEgQdJpjX63SYYmVlCWcdRVkiIuTO4TITxyJymZdav1S1qv0lL5IGHwRrrX4pVKG3DpO0xkZPh91YU2VECUGDdhOHLE6a3jLxU8dHBCsaxMexjVX4rY1rbVtbNhrklz+f1WPG582Y2OK2GohXwxV18doOr+5s3aTeLVRqFaO98CCU5Rhf6qlsaxxZntFttSjGfTS/fu1T29PT09x333089NBDfOELX+BnfuZnuP/++6m/ynVQ4/GYP//zP+cv//Iv+ehHP8o/+2f/jIWFBTqdDhcvXmR5eRnv/WXu6XPnz7G2tsaWLVvIr7EeYa3l7rvvZvfu3Tz++OM88cQTfPOb36TVavHe976X6enpa9puIpFIJBKJxFtNrwdnF7Xt2mpqaPi+B3SQ3smTwksvVe8uX53qPWP1XrXZ0NDa6Vx0Wk3Drl16WfClt6vC5f5A29n9/pV1EIOBxMv4he9/f6O5dsMN8MEP6CDJdjupQt5pDIbqP19dU3/6zT+iQ8empvQy+tNXagluXKAKqOd6qqutahH9865dVr2xsUkLG/vg4qIwGunfhSsGOk0YF9oM9wH8SH3Dr0bVLMxzuapBfInrQ6VXOHlSf2fT03D3j1nuvc+SZ/DSS8KTT17ddkCbtpWneDjUkHF+HvbutZPbhaD7n2o24fz5wCAql14PY7W56r3ufxOTp7wymA7x70XVHk/HxetPUcCZRWEpnvjau9fwkQ/rsON+H5aW/BWPQRXGaKN6ZmajSFmrwf79dvK7rrQj1bGu19PG89Uc60J4ZYv7asgydbE3GnqsXF7WE5Tdrpmocoqi8nenK1IS71yufYKdbMTNOklYG8deK9gxvNZStDUG74W1ng7/m+l26LSb+gISByZK1IGoskNdUY1GEwmB9fVVgi8py0Kb3l4wmbaJER9fPCxGzORFw9igahMMxrh4mUfAWT0DZqydvJGS+LNMXogsl1SdUa81GnQbPMGYjda0MYQQwPgYKAeq036lFzKjOhU942y1wW2svvgB2WWt50tFI/rfQcJlQXeQ2BAnYLF4iQsWQYLgg0d8oJTKP25xztFo16llNZzLsNYAjrX1Eb7wr3skdc7xoQ99iP/+3/87X/3qV/nd3/1drLXcc889NC6RJfX7fT772c/yn/7Tf2JxcZFf/MVfpFarsX//fg4ePMgjjzzCk08+yS233MLU1BQA49GYRx55hOXlZX7yJ3+STqfzhnfDiv3793PXXXfx+c9/ni9+8YuTx7rxxhupvU2Tg77whS/wJ3/yJ5w+ffoVCpTE20+9Vucf/fw/4ud//ufZsmXL9V5OIpFIJBLARrChV9BBo27ih0zh+HHhTPQNh6CXv4fKfhcbZcHHAHoE0+gQsx3bDe22BnbHXhLOnAns3WtZXhYeeyxw5Glhehrueo9l/37Lz37c8eAHr/xepVbTluLx49oUX1rSJtf0tMF7ePbZS1JMo6Hltq2G+fmU2mxqZEMRUg1f7HbBWsOpU4ETJzec19U+CButxhA0AB8MYHZWB+3t2GHodlUnc/yEXip/6JBlfV0btt9/XOh04PYftdx8s+VDH3Lce++V98E8h3pdh0LOzr4yVF9bE55+Rjh1GmamNYg/eFDXk9i8XKpssAYaDR0WuramOo+LFzduVzVFL72KoCz1d18NZJyfN+zcabh4Ue975Glh+3YNto8dCzz0UKA/gH37DPffb9mxw/I//hPDaHTlfbBe10G6vZ5Qr2tAfvaccOaMcOCADow8c0a4cEFihqGD/BKbAwn6r3XV8E59bTt2LPDc8/Hqj7ifTY51bBzren0dyJhlhmbTcGC/4Ykn9ATcc88Kd94hLCzovvvww4FnnxNmZ+CeeywHD1ruv99y661X3h+s1WDce7hwQTh1WpCgJ5ErDZP3cOqU8Pd/H+LVVHqicMcOw44d8MIL8NJx4Rvf
CPzYj+kVK9/+dmA41GPp1m3aCE8k3olcu/M6nk0MEiYKEUQIQWI4qm1kPesk2Phmu9cf0++fZ+e2OabajY1ToMThgaZqNOs/jWYDMYG11RVCGSiLAkyG9QFrwFlHYGPYopRWBztiwAvG6iBFbU0HQtDtWrFV6n75K+El7W1911Z9YrCA1QDbCIjdUIRULegQg+a43WrzLg6pFIRQPXGV9uSyznlsrsSWd0CwRPezBG2HW72Hj0fZovSEUID3sf2dUctzaq5JVsvJXIaz+pxYnfyIQ/AhPtfmylOW9+/fzyc/+UmWl5f5xje+wb/8l/+S973vfdx2221MTU1x8eJFHn30Ub72ta+xurrKr/7qr/LBD36QVqtFo9Hg537u53jiiSf4gz/4AzqdDu9///vx3vOlL32Jz372s2zZsoUPf/jDzM7OXuvuSLPZ5L777uMLX/gCn//851lfX+e+++5jYWHhMh3JU089xec+9znOnTuHiPDkk0+yvLzMI488wm/91m/R6XRoNpt84hOf4I477rgsoL8Sf/RHf8TnPvc5+v1+Cq83Ac45lpaXuPPOO1N4nUgkEolNQ6ulmoZaTcO/x74TWDwrcaiYUKvpbXo9ePxxodP2HDhoaTb0w+f6uobG//W/em680XDrLZaDBw0Hnzc8/oTw7HNCvx9Y2BJY72nw3OtpqNdqaZO11XqV6/hfg/EYLlwInDsvk2bYi0eFxcWXvdcxMD9reN/7bAqvNzmNBszOGRp1YTSEJ54UVte0gbh0UcgzaHd0/zxyJDA7AzfeaGnUDbU6+FU4dkz43Oc8hw4Zbr3Nsm+f5aabhG8/pgMg/+vnPNu2BYYDDVNWV9VV3eno4zebhm3brm4/CQFmZw17977y/fXpM8KFC4HTp4V2G265xXLrrZa3qbuSuAaM0ePQ1gV47nkd9vmNbwaOn9Bj1dKSMD+vx5q1Nfj2Y4FaTfUdrTZgmAyiO7MoHD5kOXzYcvuPGo4fF9bW4OGHA8dfEmp1DZZPnoR2Gw4dNDQbhnoddu+++uNgCBoS7t8v/OAHur2//Ixnx45AWcKJExowNhqwf586shPXH1W8aCu/31eFyJ/+qadW1xkOxmjzf21dv/fQVz03/4il1db7jsY6WPnP/syze7fqkG680XLggPD006pX+uM/9mxZgPU1fW0cDOCmmwzT0zoHYssWnRtxtYxG8OxzgS99KajLXfR1v/re9/5eOPK0DiTdt9fwj/+xYXbW8N4ftywuBlZW4P/7SuA739V988IFzaRmZ+HOO2xqXifesVx7eG00FA5BK8QTBzNM/lwJoaXyMce2cRDh5KlF+tMdFhbm45DFGNVaNprGUYXRbGoDu7/eJ5QF3pcYXNx2VGyEgMTgvBq2aACJ/mptim941SaBOxLXXjWmQYLENQRtdE+WE8PlaiBlFbGLhvO6DRufhdh6tnbSrKESisRwHiOIVGqQEANxCAR1WRsNu41W2fWyOy8UIeDLamBlybC3Qm4d3akpmt0OLqtr2zw2342xBCzVqquQ3FZtd17/ZTvPc+6//37+/b//93zqU5/iC1/4Ap/61Kdot9tkWUZRFPT7fW6++WZ+/dd/nY999GPs3LkTay3WWj7+8Y+zsrLCH/7hH/Jbv/VbTE9PIyJcuHCBLVu28Gu/9mvce++9NBqNaw59jTG85z3vYd++ffzd3/0dBw4c4K677npFm/vEiRN85jOf4fnnn0dEGI/HDAYDjhw5wtGjR7HWMjU1xY/8yI9wyy23vKHw+uTJk/R6PUSE2267k273VURribeFI0ceZ2npAqdPn6afrhtNJBKJxCai3TbcfLPl5KnA088IZ8/CxSVhqgt33G7YutXw2HeE554TXnxR6HZhxw7YeYNhz27DyqqwvAK9H2gra/8+YedOy/vep1cePvWU8MILwksvaeCS53DzzYYH7rfccMPG+9qrpWo+ToZUBeit67+vduPhMJ3A3+w0m4YbDxteesnwxJPC+Qt64qTTgVtuMezbZ/je94QfPCUcPw6tprBjp7Btm+HAPsPSRQ0InzoijMawd6+wf7/lgfvVXfP4E8LRY6rBCaJKm0OHDO97wLJ37xvfB63VVn+9/jLtoqh6x2Ubt9Ng/C16ohI/NKamDLffYTl1ynPsJTh1Gs6f1ytE3nOnZcsCfPnLgTNnNDycng4cOpRx6JDhmWeFlWXVjvR6Qqsp3H473HqrZTiCr389sHhW92lrVfcxOwPvfa/l7ruv7cSGtToD4CcftGSZzgw4elT3cdBjZKcDd9xheOB+R6eTwuvNQBXw3nWX4ZFHVNf1xJNCo6En0+6+2/LkE4FHvi2cPw+PPhrYsd2w6wbL3r3CM8/oibfvP6HHvDvu1P3gQz9jyXO9qum55/V4F4Iei269VWdFvJHA+lJEYNCHs+deObg0BD2hs7amjzUzrV9rNOD22y3jsZ7UOXtO9SHVc7B3N3zwgzoHIOlsEu9UjFySFg4GA4wx1Ov1VziZX84//59+mcXFM4hMEmmkKhVbG73MhiB+wzWNxrejUQ8EcmfZs2snnU5D/c7WTcYU6sObaMQIeO8Z9ocMeqv44Mmcw1mDs06vNUKixsMRRK8/ciYONoyBeMyd489mcM7p+qvHtDb6szWMrmLoiQJFtLm8oUmJAySNieF1DNMncbCJ4XeVEIs6ttGFiNXbOmMJUiJCbEcHjM3wBB1yGNSOYlAFiMvq1Gp1jJScXTxOb/kcrUaTZqtBo9nE5S2yeg2RwMVzF5mZ20p7dkEb8SbgMIhYjp8+y2DsIQi/8a9+k12797zu77woCk6fPs3Ro0d59tlnOXnyJOPxmOnpaQ4ePMjBgwfZu3cfU1Pdy9rOVVD93HPP8cQTT3Dy5EmstezZs4dbbrmFw4cPMzU1NbnPyZMnOXbsGPPz8+zbt+8yv/bq6irPPPMMADfddBOdTmeyrxZFwdNPP82FCxdot9uv+D7AhQsXePbZZxn0B7GN/0qyLOPQoUNs27btDQ16/OAHP8hDDz3Ebbe9h//1f/s37Nt74Krvm3hr+d3f/W2+9KXPs23bVv7gD/6A97///dd7SYlEIpH4B0JZwsWLwvKyBiezs9qKunSg4Wikra/jx4OqOGqwfZth1y5tBC4uCidPqmd1+3bDnj2WLIPTpwNHj+qQvXoddu8y7NtnabW0Ib20JJw+I5w9KwwHepuFBb2cfm7OXFNoE4KGQBcvXnnQU16D+TltnCWuH9rUE86fF4pCm/YLC7pvVYzHGha+dDxw4bwGIdu2GnbvNrRahnPntAlbjHUf2r/fUKsZzp4VXnghsLQMtRx27DQcPGDodFSfsLysOoXFRdXY5DVtHt6w0zA3b2i8cmzOm2I00kvs19Zksr+ny+KvP3rFhrC6KuQ5zM3pMMYKEf3dnT6jDemVVfX379ypreUsg+MnhDOn9bP3DTcYDhywrK0JR49uHB9nZuHgAcvOnZXfV49/p8+o5kgCTE3rcMWt2wxT3Tfn6K+Os4uL+ji9ngbb3Sk9hm/dutG4TfzwCUGPdYtRt9Xp6BUdlz7
rxWq4FQSCGI44SJ8XG01uRyuVZzTJMghGZg7UPsePSDvPTjMt09Haxas46evpV09a4kKrSjggBQBLmYPXuHqFvL6PBlsIJ6rYY1hkacQOKqxH3ptcfj8Xg8Hs+9x+TkOMeOPsVrL7/M6aLk8nlB/5zYEITgoa072X/gIxQKbQxfvcTzzz3F6VNvpud485FKsf2RR9m3/8MEOuDC+dM895MfcPnS+Wx9C1y44GJDOjoiVi7fz7ZtH/KCwePx3BJa5+jqXk+lCisHBFu3v7viKJ9vY9NtyhYWQtLR0UXHnOrWJAEdGF56xdDRASsHJG1t/qKKx+PxvNssWV5ba0AYEBatBRZDLpejkC+glCSOY6SSSC1dQ0TlqqulkEjpojwsrumhSRKUlGgpXYRI4KRygnU52VnmRiqwmyHR1oJxOdTSWqxJMEisSACDTSyimTndzKImDfWXGh1oJ9XTymNhXRW3kDAnPARobt6mjSpTx23IGjE6CW3c31tTRWCzCJOWvU4jRoKAZnm20oply/u5cP4SQRCilKJeqzM2Okr/8mVOdCPAGqyQBPk2dg59iIQYa2t09fRRKHQidJ4EhZIBQZQjKkjWRTnCfJ7XDr9Eo/4qlfI0Jjauej3N//Z4PB6Px+Px3HtcuHCa44e/Tu/M0zyU0wTnBPJia/nxccPFix9l/YbNrF//ICdPHOEHf/2n5CaP8kDHQjH0+ojhysgIW7bupK9vOUfeeJknvvEfGJQj9BZa0mVVDP2h5chFy2uvlfj4LwxlUSMej8fj8Xg8Hs+dYsny2qS/Ai0JIkUun6OzqwcdBMSNGtgEIQOsNSih0Mo1TXQ50C57WilFvTFBo15Hpg0PpVRo7bKgFRIjXWC0kmBNFp4BWRW0xaTV0lZKEAnWSIS1GBKEdLJcSOmqRYRACpnORyOkwiKwFrcNa9N4aesUc/ozwjVbnO+gW40OrTXpEpnmX5tmygkgnBQXIK3Ixmk2ArSpLO/s7qJWqzM2Op5Wh0sqlSoTYxP09vUShAHWSjAGgSLf1kln70qmJodBRhgUjcRg4zitKk/QoaZQaGP1A2uZHJvk6JGj1Co1rl65QrG9i86u/vR9Weqe4PF4PB6Px+O5XVQqJQpmhA+srjP0QLBgeZut8VZjikq1AkCpNEO+doXHlzXYtmyhvBb1Cheq09Tr9Wz99sZVPrLOMNhxbQM1wX89Ms1oZYokSd711+bxeDwej8fj8bwdS28jLCRahYShIAhC2grthFFEHDdoNOpp5rOreA6C0P3SAVIrhNRIoTHWMDE+jjUGqRRSugptKTVSaYyxIAyJSd1qM9FDSJxXdoJXtFJFXLq0dLXEQimEkCip0+pot46SCiUkUikX95EObIzNYkIEaaSJbUV+iNaPmeu1aaSJzeSvE95Zj0XhAkTcc9z4Nt1IGASuMrwpt6Wit78PC4yNjKLSTPDZ2RmkknT39KCkwqV+u+0kcUJptkwuF6GlxKoEgcDoGGFDbGIIghw2FHR1drtlSUJpdoZKtcbgmo2uUaXH4/F4PB6P554kr6E/1CwvLjx172vTnNdzz+UEhUCwvLj4+j0FzYhuSW0hBMVQsryoFl2/M6cZl77KwePxeDwej8dzd1i6vEbgUqAlAk0ucnEhpWqVRq0KVhIEedra2wiDHFoHaB0gtRPTWEujVmNicsIJ5UAhpSaI8kgZuGxraXH9FY2T4WlltBAyzYlutkOUWGHBSpSSWRW0TuW1lNqJ9Ex+izT7utUcwKRZ1a2kv6ZkTps70sy0dlEiwqbPFXOCReZUYgvrsqRbCditpo4uF9sSpA0sIe3saC1aB/T29pAkCeMTE+g09mR6egqlJR3tnci0ilwKQa1aQ2CpV0o0amWCoI2uTg25HALQWqfrSnL5PI1GHYtFakWlWsHgmkD60muPx+PxeDwej8fj8Xg8Ho/Hcy9xSw0brTFYI3GuuClgodGIsYmlHtXpjfoJgsjlTatUXEuBjRuMjY1Sq1ZQUmGlQGvXrDB2dDgAACAASURBVFEqjRUSYQ0gkFJkDRidaM1cr4v3SGV0Ux27KBCTCmrpBHEajZFq6yyruimsBWkDSSGQYk40SPpqrbVZNrRAAi5OxNIa183JZjEcNhXd1pVyuyrtVJJLIAiDrLu9SOeCBR2E9PX3Y4HpySmwYIxhcnIKpQLaCgUXwQLUqhVmZ6ZpLy6jo6OLKFekUGgjXygQ5XKoZvW5gt7eXgYGBpiavko9rlKtx07gS+3dtcfj8Xg8Ho/Hcx9jTMz42HHePPYTpicSKiVJcM23xf5lK9m1ex+9vcsWHcNiuXD+DIee/yG1NMrmWjq7enj44b08sHrd4mNYy4ULZzj00x9Sq7kxkgTOX7AcP2q5WIRA9dLV8eh1x/B4PB6Px/P+YenyOu1KmCQxcaNBpVqlUqliLEipaMQNkiRGByFRLudEs5QgJMYYqtUKI8NXsdY4gSs1YZBHB1GaNu0Ur3Tp0fOzpq1I3bPIYjuyWJG0ItrFljgt3RTDQrRaEzbHagrlzFg3JTPNAVsrZ5EhC+Q2SDvH31vrejlmT7AIZJqdLcA4SR6ksSBZlIhtvh5BGEX09/eTNGJmZ0soNI16zOjoKHJZP21tbVjhok4uXbpAZ2c7y1euptjeiY7yRLm8q3SXKq0wVxSL7SxfsZLRsRVcunIBIRRKBUihm0rf4/F4PB6Px/MewhrLyIjhhUOGkeGE1183lMvXnqnOXR8uX7E8+2xCb2/CsaOGeqN1/6Hn/qVcnuTkie8wfuqPeDARDE+K7M5VgNGSZbZtPUIIPvKzn1x0jHqtxgsvPMvXv/R77F25cD+cqVlGTDfjP/9P+NVf+41Fx6jVqhw69BO+9ie/y6Mr0wctUIf1NUtp1vL83/WQj64/hsfj8Xg8nvcPt1B5nYpg67Kh6/UaJk4QQiNlQKCdNG406rS3F9MqZNewUQpJtVqhXC6BcWMpFRAEOVc1bW1aDd2sdBZZ9TLGaWyZVmCbrFFi+pw04sOmsSCtsGoy49yU3taarOmjSHOqXSyIw2Vqu8eyyui5y7Lh08xqm1aAW6fbhRRYY7LGj66C262rpWsYaa1LG3HzaeZlu4GjMGLZsmUYc5VKpQJCUK/VGBsdIQg0uSjAWEupNIsRILUGJNa6iwpSCoyxacNKlyne27+CfFsXSXyBSqWOQKGl9PLa4/F4PB6P5x5F3OA0zViYnIQjRy0jo5azpyzB4gWvgDvnHB21vPqapb3dcvqMpSt+9+fsee/RiGvQGOGRnhF+ZWP7guWnJur8cOocE+Oj1x0jSRJmpyfoql/kUyu7FiwfLcf84NIsoyOXbzhGaWaSrvqlRccYKcU8ebnM6OiVd/jKPB6Px+PxvJdZsry2QiBkmj0tFVJL4iRGKIWSAVE+RxhFVCoV4jhxcloHCCmYLc8yfPUqtVrVyV8VoHWI0hqhJMIYsAIpLEmSSuVMYJtURJvmTFKxbNPqZdLs6VZ6tUztddaEsZlfTdpR0TQ7K9qsM6RAuGrsObR6Mtqs6SLCad+5XypEJtZbgps0q7s5
qWZVdEvQz5XmzZkLolyB5cuWc3XkKuVKGYDZmVlG1AgrV/S7ZpdWkc+1uYgQwMYJsahjTIJSCqVCpDIIIVm7diNvnTiGEJp6I0ZqgVTN98zj8Xg8Ho/Hcy8RBJpzs5pXjk/z7RP1BcvPTNSZLhiWo0kSsFbzxoji345NMNCRW7D+ifE6lU6DtQFJAsZKXroMvzc9QU8hWLD+m2M1tg7YLOrO8/4mUJL+YsCarnDBsthYumtvX/CipKQrrxcdoxBIeicNtRtdkQGUFHTnF59HPpD0TRmqvvjG4/F4PJ77gluIDRFY4Rol1uKESrVGo1FHixCbGKIwh5QBjUadRj1GRpo4TqjXq1y4cJorVy5Rq9cRSHQQoaM8Qqm0D6QTynZO1bRNxbIT1QaLk8PgmhdaY9JoENdIspVJ3ZLIrUaMaShJFjvSigoRWQK2i/1wwly2qqlJGzYiXMW0MQil5kSTiHlj2WZFuAsJB+Hmq9LnNKNNMvGeYm1rHrl8gf7+fq5cuUylUiVuxEyOTxBId6toGObApE0npRPvxiQgQMp07tbJ+SjSSCxBoIiiiEAFSKHAn/x5PB6Px+Px3HNs2PAQv/xP/gXnznyCQCmuDa/bbqF32UbWr1tLEEpmZ/Zw5vRvMTlxiUAtFM47kPSv2MLatb0oLZmcOMiZ01+gNDuBlteeDwp2SsmmzQ8Thgslosfj8Xg8Ho/Hc7tZsrxWSqKUAiAxgnKpQrVWIacgbiRIqYhtjIolpVKJmZkZ6vUGU1NjXB2+5CJDBGgdosMcSgW0THXa7LCZGW0FWElmeLO8DklT/LoYkyy0uhVXbVvrzz/Vt1k+dbPq2onnOVJbtKJLmt0UW88BkYZcZ5KdVvyHaDaENK2tiZaJJwzDVCw3xXbz+amAFxaStMhcWHK5Av19yzh96gQz05NYaymXpomTGCklY8NXWbZiObm8y8JuvlXN99FasCYhF4VgLNMTU8xMzmKtQKnAx4Z4PB6Px+Px3IN0dfdw4MDP8Oij+xdfQQjCMCQKXZW1tavYsuUXqNcXVmm71SVhGBKGEQDGrGX79pU0Go1F15dSEoaR66Pi8Xg8Ho/H4/HcYZYsr7XSqbx22jiODePj43SYTuq1Bkkco2qKarXKyPAotWoNpaBaK9GIG1hrkTpAqQClw7T61zUgJIuxTrOpm9rXCixJ+jgt65tmTcNcv22xQiDTdZwHF3Mk8tzH7Rx5a9Mqa7d+lnQ9RzA3q6tNM7LEGpfn3RTntDKx5wVpC0C4au4gUFnUSNY00rZ+NpkVb0n8fL6NKNRcOHeKOE7IF9rIhYL2Yhv10izTo6PkC+2EuTYXwZLJfZd9ba0lCHKsX7uJJ+pPUJ5NG2xqfeMwRY/H4/F4PB7PXUEgiKIcUbQwAmTR9YUkivJEUf4drS+lIpcrkHtnw3vuc2ZmLc8fMhiRLLq8Xk945SWDMtePJGzULUeOWr729cXHqNUSDr9kkDcYw+PxeDwez/3DkuW1lNL9SgWpQVCuNDB2hiRJsMZSq1cpl0qYxIC1BFGAlAKJREiNVjl0kEMImQaBCIx1TRizLOisAttVO7v+jM02iRZj3LouxmROxXQW3eEql8mE9dyu2aKVl51WZzd/blVwi0wqA5lQFs0KcJHVezdXyJS3sc0tymy5sQlCWIJAI6RI5wqmmcPdfL1zppA1c8SiZEC9XuPq1Uu0F9vp7++lu7sLbQ2zU5O0TU5S7FZIkQcrSJIYFUikBWsTrBCsWbeRvmXLOX/lopPp1p8Yejwej8fj8Xg89zNSgn6bAvtG3XLliuXY8cW/P8QNy5WrllU3+HqRJJaxseuP0UjHWOm/ong8Ho/H4+FW5HUaG9K8hVBKhbGCWj1BSkG5UqfeaJBYQ2JiJIIkMQi0y5tWGq1DhFTYOU0NjXX6l6awbTrdpgieI4ub5zPNOulmlXZzicA1ScwaIWJoCnLZNNiZnE4t8pyfRWa5W2dOc8V39nsaO+K2buctc9XY6XiZ2LYEYYiQqUi3LXE915U7h22bYSXutRpDoANsEjM9NUF7e55CRxt5LWkrtiOUwlhDpVoBK5BKUZAC90/tmu1EuYiuzg4KhRAtBUJcE7jt8Xg8Ho/H4/F47iuKbZKwCE+eKtMwC5dfmWnwVqmdTasVDwwuPkajIRkZhpeP1vl3P51esHyqknB8MqJjvbj+GHXJ6FV4+Wht8TGqCWcrOT6w3d85+l6jNHuVN179G86eeZNzJyWXzs7P5RdCMLh6PZ/45K+QzxducnTLxQtn+Zu/+s+USzOLLLbkC0X2PvY4ex/74E3PPY4bvPXmEb7zra+17pA2MD5hOXbMEoaW8nQ//X3/gDVrNtz0+B6Px+O5PrfQsFGilM6ynZVsNiCUqcgGpSLXoJAa1lgEGikDtNau4lppJ5yNwQqJMaYlpNMoDWPcmVPWWDGN0zDWpvI3jfBoVg9nOdVzY0HSeuismWKzGaTIKqjn9HOkqZ3nFSTPkeikcSGuMSSZdM4+xLLGkjQLxp04FwJh3eNhGDqR3Zpgtt0sZmRO40eDSauxBVprclGOer1BvRFjhcIKiVQu/kOkVfFJYpBKusaNysWaNGJDtVIhnw9Z1t+LDnQqxz0ej8fj8Xg8Hs/9Sv+yHn7hk5+gpzdHYZFvicXEsqNjOTt3HWT58sVLtI3Ns33bQV7Z9m8I7MIc9Q5jWZ/rZMu2A6xbf50xTIHt2x93Y7D4GBvbOtn76MGbe4Geu87k5HlOvfqHbC1eZFVdEx6dfwHi0qzhtdceYe9jf4+1azfe1NjGGE6fPsnf/r//gV/csPDCRjW2HJ8NGB0dWZK8LpdLvPTiT/jJt/6En1vb+g+SN9BfsExVDSdeWsbL21d6ee3xeDzvMkuX11I4MZpmPYtUXpPGd2DdYzqNCTFJgtaaIIpQWqcNGiXWGJIsEsSktcu2Jannymsh5kWKXJt2YXFxIS7ruZVn7bxwK4ejKbGz+mjBnJ+b65I1aZwbIdKqxhbNUvBsGzKbp53TcFI68SwswroMbmsgCIJsLJu5ajv/5/SHZoyIy6wOUUFAGOUJwjzFjm6QmsQklMsVch0JJjEIKdGBa8ajpEZJjUCSiJiJyUlq9Tqd3Z1Imb1Qj8fj8Xg8Ho/Hc5+Si3Ls3LmXDRs2XTdWMAgj2osdKHW9qmfNsv4tbN48SJLEi6+hA4rFdoLgBmMs3/qOxvC8t4jjKgVzlY+vF6zvXnjx4vWRmEsTJSqVyk2Pba2lWqsQ1Eb5xJrlC5aX6oanzpc4Pj68xLnHVEtTrC1U+cSajgXLr8wa/ur8FJMTY0sa3+PxeDzXZ+mxIUI5YY2rwhZCIqTNYi9Sj40UCqkF6BAdaFetLSUIiTUWKyzCWhIpssaFtlmubMBJ6DmxIBbApOJ3vpCGZkyHnRtCnVVW21SEC9HKuk4HzuachpZgrJn79KyCOgslESILCZFzokpakR+t7Otm7EezIlzQlNfNym2TNqNknkfO3gJaTSJ1FBBFOYIgQghNe0cvgUiQpkZsoVqtErXFLpYlcO+1ENJdTECCEUzNzHB
p+Cqj46M8khjvrj0ej8fj8Xg8Hs9NNQe9HmEY0dMT3doYQUhPT98tjeG59xAIIi3pLwj62xaqiN6SIV96m+D1G44PuUAuOnYhMHTlY6SUC5/4DtFS0JnTi44fG0tH5KNsPB6P53ZwCw0bFVKIVFCnSc9NgSxci0LRjONQ0kVZKNnKmrYWk0losIlNs6+NU7x2TnZ1M9Oj2bdRKIS1aSPGuZK5VcncqqKem4G9MOu6Wd0s5oZNZ9XUqX62aWL1/M6MtKaVVo7bphRv2W47T6zLzICHQdiS182mkWkzyda4qXSfEy8ShhFtbUW0DkgsdHR2E2pFtTSJFBYhFSYxGGOx1oBNsDbBWA3CUqlUOH36FKOTE1wZHqNeN65xpMfj8Xg8Ho/H4/F4PB6Px+Px3EPcQua1cHEhYm4lc1NMu9xnSPOfpXSRH6SxHgiXAW1dZTZWYGyCkC1J21ze3Ja1Yo4IbonkZlS0TUV6s/FidqebtXMqp0Uz/cOJ3XlNGWkVcptmRIgT0jSHba6SyWg7R2qLdP7MkeFuwDk14WAsUoi08ro5ipkTG9JcP0vtnve+B0FIW7GdXL5APUnIF9tpbysyCdQqs1gRYFBYa4njhssFlwlCGuIk4cyZU5w6eYKpqSlqjTpGSKxQcztRejwej8fj8Xg8Ho/H4/F4PB7PXecW5DXzRLLIKpUFBptWXs8J07BOQBshwBgng4UAk6QC2ICdU4lsTdZ80ZhUIguZVUtn0tkal7GdZlynNeBpZnZaaW1bQhha2dIIm1U3u5/T5c1HM3nefLz1fJE2X2zOcW5udbN54/yq69YmtNYopeZucM78bHYRwAqbRqU0ZbxAKU2+UKStvZPQWgptbXR0dTMzPcPV8xfI56qsUKGrjJeanIwQIiJJYHx0nNcOvwzGsPnBLVy8dJlQh0v41/d4PB6Px+PxeDwej+fdw1jLyIjlie8nHD2a3NRzE5Nw7KhZ0Bdr3vgJnDtv+Yu/uLmxAWZLCa+/YSgmPnPT4/F47jRLltdpfsec2Om0kWIzKkQ0GxdahJVpJbLAWEAYsAIlRNpcsJkO3fywkVmch/uj2WTR5WtnOdJpRrZjTgZ2Wq8ssodTsd2U4mnFdbOhI3O2lVWSz9muw7htm3Rk0ZTL6bjpVrOUkmx7tCrEXUdJpBQoKdPK7mZt9dwK8FRYY7DWZNLcWJBKkS8U6ejsxgpJPsqjg4DEWi5eukKtWmZkZJhVK1fR09ND3A1xMsPly5cZuXqV02dOIZRi2+ZHWDOwjvZCW5rZ7fF4PB6Px+PxeDwez21CvM0NvxbKZcupU5apqZuTxMZYzp9v9ctadB1rmZ6yHDl68wK6UrUMD0PBvP26njtLEteYnHqLWs1y8aKkp6eVa66Uoqenl46Ortu3/SRhYmKMqanJhQsFRGHEwMDq+Xf9v8s0GnXOXzib+iqYnrZMTCTEMUg5Qb1eu23b9njuBLcgr1u6uNmQEGvSCBH3oWSsRWZrki6XND+13AdLM0YkrS62qWCWzn831a4xaTazcPK5aYPnZlZnAR0iHbUpoZv509nWyGJASJ/ffN7cMuymUm/mXpPK6eYqmXa+9iCUrZMq+eb80nlJpZFSthoy2jl523ZOtXVzapas+lpKRS6Xp7OrGyEVYZhDIEiShNlSiYnRYUaGL3PyxFt0d3fT27uMOEk4eeok1jQYmxylUCzS3tHFqpWDiEAj08abHo/H4/F4PB6Px+Px3A6Uklyuan73h5dZVlzYGHS0nDCZNFixIkdf782NbaykNCt5aTThX35/fMHyWmw4Py3IrwhZueLm516pakavhDz5eonZ+sLK7dl6wjA9/KNc/uYH99wChqnpEzz7w8+wojPHlROCZ77d7IVmKTcMK1c/xP/x+X9/22Zw9eol/vd/8xuE1MkF8xuONhJDTRb55//8f2Hnzsduy/ar1QpPPvldvvynv8+qbrf/NRqWCxcsiQEjKlwcOX5btu3x3CluSV5bXNNE0RS9wulamcZ0mGaF8jViu1mtbKxJYz1a1cduWTODoyma54RuZMaXbJ05pc7uYTvn7wBpo8SsMjrtmtjy1HPGaEZ2ZFkec4K15wRfN1ssZvnbc1bBipbQT414K+ZEoHWAkCr14a2RwInu1uWANJYlzcQW6XhRLqS9ox0hFVqHCCQ2sQjrbpeqVGYZGx/h8pWLtBW7wFrGJ4b5/9l77+C6zvPc9/ettQuw0XsHWEAS7BRJUZRESbZsS7bk2LEcx+3m5JwTH+deZ+5cp8wtkzs+M5kz+eOcTHIzJ7n3OEVxJrbjEndbbiqUKEpiESmSYgEriEL0jo1d1/fdP1bZe7NKJEGA1PsbQQD2+tb3vWthE9j7Wc96XjtkgXEoMjGMBVUNDSTi82Qv70QpCIIgCIIgCIIgCLeR5StW8b/+n/+N8fFRwrZ1xXYDlJY3sn59G9GofeUE18EYi6md97H1vv+BcbJX2W5QdoS2ti6WLX93cwM4Tjk7H3iK7sfKsK/y1tkYQzRWxv33P/iu5xZuHmMMYTPL5tIDfGZteWCkBMhqw6l4mheOzi5oDYn5Oc4f2c3/dn8RVcWFEttEwuEn/TH6+3sXTLzOZNL0X+ymbOwAv9lSFjyuy8AxhvMTKf789MyCrC0Id4qbFq993TjXANF73GvWaLQBS+W5ot3YC6NdSdZoV4E2KifYWl7jR9ddHci/WJ54rLUr0KqgAC/W4yqiNRh3LZUnVOc1fHQFc98frQKhOt/JHeRXe6P8sQaDMsY9PD/VO8jXNmhfqC/Y0/taQSQScZs7EpSSF3/iu7zxXNi5JpD++QiHI8RiJSjLxg7ZGO3dHmXyEryNIT4/x9xcHKUstEmhshCORDDGlcNDkSgqmcJy5N4nQRAEQRAEQRAEYeGoqqrlgx/6DTKZ9FW3K6UIhUJEItGbmF3R3NJMZ+dncZyrZ1pblkUkEsW+mvp8QyK0tq5m3fr2IJrhcmzbJhq90lEuLCwhC5ZXRXiorfDcZzWEFOzuXWijnqIkYrGjuZiaWOGFkZG4w/7pEFq/+5z1d4NlNB1VYXa2FJ4Dx0B5xNOqBOEu5hac18oVf4OsDc+BHTitcR3Syo38sJSdc1crP06EIBPagiCfyo/60ICFGyNiAjHZCoTrnD+5MCPbz5L2XdGWH82d1+QxyK3WeO5xT9gORHhfKnYFZNcp7h21UaBycSe+w9qYfDHbYAfnwxPtPcE6FAmTl2Pijs8/GcqA9ps0evOR+wOp7BDRohgoC0tZGOWgHceLGsmdQ9tWJJNpso5DKKSwlEUmmyWrNdr4jTZVLspFEARBEARBEARBEBYApRSRSIRIJLIg81vKomgBYzssyyZWXLJg8wsLg6PdLPWFIpk0eWbEKzHGkEotXA3z85DOLMjUgrBkuGnxWqFcERfcJo1BHEfuk+9u9pssKmNQSgexIX4DQ1/rVn42tdE5ERjfUQ1YrvAd2L2DWI28ZA+/tjwRN2j86IvMBNJ7XtMIV/hVOVkdLx8EheU2mvTd0oA2Kmh06JWWy6n2Z8jLvnZFcHcN33ntb1
F5K+Zq9b7O+/3mu9qVZWGHw0FkijbuB5ZCY1yhX1koy8IK2aDdCwG2ZaMNpNJpsk4WMKiQhaXzHeaCIAiCIAiCIAiCIAh3N8bA+IThBz9cOOfz6Igme53p5+OG/Qc06czC1JBMOpw4qaldWHO3ICwqt+C89l3JvuDrCdh+40Q/nQPfk01+h0TPRe0K4CbIAfFxd7RyMdGeSzjnP3aXsvwqAu93IBcHLmsTuJ9RuXgSk+fS9gfkHlfkLNMUCNzKz6/OV8txBXeFhfKEdV/M9+8oUpYKYk2i0SIsZfnlFVyky/ee+w70XGSIn01toVyrOlr7Ar8bteLmivtNMS1CoRDKDmFbFtlMhkw2zeTEBHMz0zheFphRC3cVUhAEQRAEQRAEQRAE4fZjcJwUmGtHzSSTcO7cwlUwPeXqMlfDANksTA4vXA3ptGFiHGpE1hHuYW6pYSNG42V7uDnPlitSW5bC+K5syxvnO6UDN7Lvf/albS8uA0+M9hof+qp3nu6dJ/b6Adga48eJ5M0ciL/53majsJQvKptcnIjyas7LzPaFeGOM2/PRgNKe61spX2MOROn82JNAzMfysqhNENMRCYexfGU+l7oS9I0MZlE54RrPre7W658DT123DFo7GO1nbbsCu2VZaO0QsUOUlJSiNaSzSRztMDc7y9zsrNvwUWzXgiAIgiAIgiAIgiDcRRij0U4Kra8tbcWKoWvNwmkeIyOKE69cfZsCQmFobl64GhJJxdwEWAMLMr0gLAluXrz2UzUsK6cNK1do1W6QNEpZBaKzm8ih3DxnLx6EoLGhwfLGG20CkdYXc11B3E/KcMVtHYjFuiCGRHldEE3e0jkbuELjJ48oQHvf5Lod5xo15mJClGeh1nmRJH5DSOXtq72t/qpBRrY3l+W5t6PRnGBsjAlEb/dYg5PpPXS5Ldvk4kSC7xXamLy6TaCtW0phW4poNIodLiKdiZJ1soQjRcTjScrKI4ELXBAEQRAEQRAEQRAE4W5hLu3w8zNz9M9lsFSuYaJjDKNxh8r6EO97/8JpHhd7bP7l723+rxeGKYuGC7Ylspq+TC2/v6OCXbsWpob5+RBzM+V865U5RucLw6+1MYzFM3k6kyDcndyaeO27kn3DsjFBlrPviMbk/pG4qR06yHNWvjqN5602rvDte6XzHc2+k/vq/+Tc/GmNc8UIY5S71XMra5Pn4za5OA8VOK1zcwYysr+z3wQy3wnu15nbLX8DoIPxxhgs23VeK2W54nu+69k3W3vFBebqfBne5H3kCdnGuyCg/ZgUv+GjUmB5mePKwlI2ttKE7TAWVuDQvtaZFQRBEARBEARBEARBWIpkNYzrSqof+Axb73sgeNxXmhoam6mvWzi9I1bczH/+879jfn4O2yoUqA0QicbYuXM7tbULU4PjxHj6o09R31BByLpMDwOmZ6aIf+8bHDq0f0HWF4Q7wc03bPRylQ3abcToK69ao2zbc0Xn2h8G5FuqyTVUzI+Qdo3KKuck9iM5jAlcwkFMs1JgLC9G4/JfBl4+NSoXs6EMRhmv2eKVoUB+tIcxbuNDgrgPV2y3TN5e+bqzH3OCJ2b7GdpBfInl6cgWdihUmGTinT9zWU3Gj0UxuQxsP7c758l218pmHdfN7iewWDZoN7HfKCvwhFsKHIwbMYLBskAFcwqCIAiCIAiCIAiCINw9RIuKuW/rgzz90d8qeFwphW3b19jr9lBaWsaTH/44jnP1jolKWYTD4atuux3YdoiOjhU0N7dxNV1nbGyEPXteWrD1BeFOcAvitQoaLoIr+lpBdjSBOJvfDNDNsDeAHbicAw3a3+SJxb5wHZia/WQNo73sZ+PFbfvzeOKvLz7nTeoGheTlZ+etbTzbeK4Jo+eUVrlID+Wpvv6tFq6LWgXR1358ifHqIhDH8w/OK8H75elnU7unys/PNuSHnSgURlnuqDyne5Dj7a/raLTWaO07wy00bl1auxcUjNFo42BwMMqgcXC04x3jNboLCIIgCIIgCIIgCIIgLGGUUoRCYSKRazduXMDVse0Qtn1rLeVuqQKliEQiV90WDkufM+Hu56ZDd7RxwOjAL4zKCcYmz8hrAsnXE0+VK5bm0qRzuEJq8J2nLpvgS3eWXNyzH8Wh/Pxs35HsubRz8SA6L66EnEjsL+br1t5C+Y0X8zO7/cxtbXJistbuejpfZffdz54Ijsodm/9LLUj6Nu450v4cnphstCu4u3Eq7lk2JifSB98bcLTG0dpdQ7vRKNo4uXm0QTtZtM7i6CxaO2SdLBodHIcguhJr5AAAIABJREFUCIIgCIIgCIIgCIIgCMJS4qYvDRm066oOYi/c/GatDUq5Iiq2QvmRIK41G6P9rGbluagtT/j2BGWl3GQMpQKB2hepfdFaQ+CeVr772DhYlue8VuQaMHqZ3F7RwZzet24MiCdmK7/LYZ6u7cZS+xEp7oOWMhjjuHElXi3+HJayAvHbBGv7DSQNoXAI27a8uBMK4qwxfjiIH8eC65wORG3jif8m17NRgVaGrM7iOI7rwHYcjCdmu2K+xslmceys+5g2pDNpUukkCkXICsmVOEEQBEEQBEEQBEEQBEEQlhQ3f1+DUm5zQGNhuSpyntjsCb15LmqDH9dsCsTjwD7su5+9jb4l3Oick9kViF3R21a5nOv8aBJ3yqtlOHs1+JnZ13QbKy8DOjjMnBNb5YR0V3/PDz3J5WMbvxmlFSSheMK2IhwO5RpA5tfgK9HBN15etnKd1MG58cYFkSveRQDHyXoua1e81sZ1Y2utCSlFNpslZGdBuenXqWSS+fk5HKMJi3AtCIIgCIIgCIIgCIIgCMIS46ZjQ/CEa7DcrGdP2PUdvJZymzUW6LsQWJ/9mA3ticq+cznnZM5fyhQq27iitt9YURsDygpcy7nA7ZxIXeC49lzMYDwnOAWxJ362tJvrbeEnbGvjuZkDAdyvId8xbVBYGKPQfhyIzh1jIF5ToO3nn9hCXfsyDd7CcnO+jcIybqyIjUI5GksblNb4DTA1BscTsTOZDBknG/woEvE5JibGcbSD47vjBUEQBEEQBEEQBEEQBEEQlgg37bx2s561J8QqT6wlkIotzyGNlXMZ+87n/BgRfJey8RsuKs+lTLAtT9b2YkhcsdyP+gjmwwoiPLy8j5wLOhds7daP9jKp/ccNxvtamZyAbqn8RpC4NfhRJ8Yd7Dat9Nze3nRKKdAE9VtKAZpINIKyctcMcn71Qvk4N+PlLnJfeA/iu90Px40D8YVx7bjCvNYOGsg6CsdxCIUiKGVIxONMTkwEFwEEQRAEQRAEQRAEQRAEQRCWErcQG4Jnq9Zo33atjevlVmCU6wp2o0VUwW5+tnXuk3YjM4JsacgPg1YQOLuDTOtga07YNcag/Y6R3mDjrR+U64nByhe/vaaK/prGDdQOHNV47mu3yaQK1kGRF/+RF6YdrJELUPFn10YTiUQDx7hfau6AjBcd4gdaF561YG3/qD312gCO42BQaO+ca+26043j4GCwjOVFjFhYliKbyTA7O4PW2dx6giAIgiAIgiAIgiAIg
iAIS4RbaNgYyL2ue9dyRWAdCNie2zinUxdEPBMIwN73lkEp7QrYxtfBPdFWqUD0tvLiPXyXtcpvypizSHsJIN4Y/Mf8SBEv2/oy4VgFNZkgxxvlSerBPEHRrvxtkWtMmYfy3Nb5zRCjEd95rS4blzsvvm5tgrVMrsQ84TqnbRu00XkxJb7I7oayGGPQjkYb0ChCykJrTSo5j2McDA6iXguCIAj3AmNjI5w5c4rS0orFLmXJcebsKSYnxxe7jHueyclxzpw5iWXZi13KkkJrh9HR4cUu454mHo/T03OOM2dOLXYpS47JyXGGhgYWu4x7mvn5OGfPnpLn31UYHr5Ef3/vYpdxT5NKJeX5dw1GR4e5ePH8YpchCLfELYjXnoBt/CaInqzra8PaYCxQWK6wm6fPKk8hNkYH2dZuFIcBo7GUFTiQA+HZchVwjeukNsZgLBMIzX6TSD/KJD9/21d6/UgTP1NaGeUKv5ggKgT/uDzF3Y/ocIt0v3dTP/K7MfqKtx9XQl7Ytx8d4q4X8jOv85NC8tYIjNh5Tm7/MRPUaPxUlCAyxdGOJ2Br7zw6wXFo72eBUljKJmSFUCgcR+cOTBAEQRDuAS5cOMNXv/pXVFZWL3YpS47R0SERb+4Ar7zyvPcmUV5fFWLk+bfAZDJp9u59keHhwcUuZcmRyaQYHJTn30ISj8/ywgs/4/z5M4tdypIjlUpw8eKFxS7jnmZ2dpp/+7evs3fv7sUuZcmRTie5cEH+XS40qVSSH/3oOzS3tC52KUuOVDJJPD5zS3PcfGyIT0GMhwrc1L5+68dzuP+pXIvIvDxrN0bDQhnXAewK1ioQi8ETmvMU3/zojSCKxHNOW962YF+vUG1MkLntZ2FbedEcxndXe4Kv8sXq4ADVZfnZJljbF8yDeBM/m9rL7va/j4QjBU7r/GaU+Tq5v81c5esCVRvQWqO1xuBgTBZF1rsw4D7uNp10neZunW5DSQevYHlzJQiCINzFlJSUUFlZSSQSYW5ulpMnjy52SUuaoqIiysvLiUQii13KPUNFRQXV1e4Fk0uX+rh0qW+RK1ralJaWUllZudhl3DOUlpZSV1dHf38/o6PD4nC/AcXFxdTU1Cx2GfcESilKSkqora1lbGyMoaFLDA1dWuyylixKKYqLi4O/F8KtEQ6Hqa+vJxwOk8lk6Ok5S0/P2cUua8kSCoVoaGjAyuvBJtw6/vlMJOb5p3/6W4qLixe5oqWH1pp0Ok0oFLrp599Ni9cKC20MtqVy4rQnHPumY6OM6wTOz5bWxnNqF0qmSnlWYz8L2hepAyu3L2KDLxq7Era/nZwb2Rjyd7OC5oquA1qbPLHZ29d1YucaNQbJIQAasLw5PJHbb/R4+ZpGG7Dynd/uIHdORSQUdmv29vUjswtCrv14EC/TGk+I9rcFoxVBVIjjOOisxjIKjMLCAm1htIWyXLHaUjaWstz9jF+Xl9EiCIIgCHcpRUVFfOpTn6K8vJzh4eHCu6aEAixl0bGsg6eeeopQ6NY9DILL2rVr+eIXv8i6detIJBKLXc6SpqioiG3btrF9+/bFLuWeYf369fzRH/0R+/btI51OL3Y5S5qSkhJ27NjB6tWrF7uUewKlFF1dXfzZn/0Zx44dw3GcxS5pSVNaWsrDDz8s4vVtIhwOs2nTJv7Lf/kvnD9/Xl7/XQelFGVlZXzgAx8ojK0VbolwOMyWLVuIRCKk02n6+3sWu6QlTSQS4f7777+pfW9evNbGFUrx+iz6RmDfIeyLwzov89kEOjFALq4DN59ZWcqVufPjO/Dc0sbP2FZYfqZ1EK+hAge2Cr5QefsQZGRr48+Zq8lvvKjyRO7A2Y1nTtZ5orrn8PZd1r6oHUSo+HEjyo9I8SuHUDicc297/ze+6J131P42TaBXe+fWXdjPsjYY13nt5Vv7x6uVQhuFwnbjVhQYyxWy3RPoX1AI5H9BEARBuCuxLItt27axYsUK5ufnF7ucJY1CUVZeRmVlpbx5uY1UVVXx5JNPsmPHDrLZ7GKXs6QJhUJUVlZSXl6+2KXcM9TV1fGxj32MXbt2obW+8Q7vYcLhMFVVVZSWli52KfcMDQ0NfO5zn2NiYkLEwxsQDoepra0lHA4vdin3BEopGhoa+MIXvsDU1NRil7PkCYfDNDY2LnYZ9xThcJgPfehDPPvss/IcfAdUVFSwa9eum9r35jOvvddFGo0yliv4Gg3KDkRkP/7DF0l9QddNmfbmMWD7YvUVf+tU3kfuIY3bIFF53/uuZm9G/CQOP5Ykf18/Q9vKG5/zYPsRJgTxJ/lO64I3eZ6gHQjjweO5SI9ASla+yG9hh0OB2M9lxxv8sTf+/oWD8uVtX9j2ndnuC1WvYaPRuW3aYCyFxmD8KwXKu0jg5XZf2WpSEARBEO4uotGovCAXFg3f0VRWVrbYpQjvQSzLory8XC4ICIuCZVlUVFRQUSGNkoU7j23bVFdXi5tdWBQsy6KxsZHf/u3fljuf3gHhcPimYwtvWrx2VE6CdsXUPOHVE0jBy5QOYjF8dF7etRXshvJkVO1PobAsy8uvJpcfnee41n6cSCDK5kRgS6lC17Y2gSvaF6RNgQie59T2vwriR/JE6CACW+XiTACT66oY1JoLOAHLtrFsm1wqdt4ceTEnhnzhOshgCdYKYrLzjks7jvu4J+6jPVHeuA0ufQu3wc/IBok6EgRBEARBEARBEARBEISbIxwOyx0VC8wthB16ArCfce2Jttpo7zFT0OzQ/dryjMi5/BA/W9rNX863MLv7aKOvcCgTiLS5gA1X/82P3DA5Ydp3NKu8PG5j3NgP5UaW+GnWQQzJlUvm5vNEYeMfhBus7Z4T75j1ZWsbDJFwGNtymyfmN3Ys0Kn9EGxzxer47SHdbO3cVq0N2nHX1p4b3BiDYzQO2r0QYAwYK4gc0UZjjPI+i/NaEARBEARBEARBEARBEISlxU17b12BWqE0nvCsXfHXszG7TmAHoz0R2eSEYm2MGx2SF3XhZknrnBDuu4SNF+ahLNdBHMR6WK7DWFmuY1sZHK1xPFHat2drT+UNPMyWQdkqFyitTdCv0HiistcG0Z3XaK9LpCcO5zosgsY7Bi+hQ7tidq4ZIjkh3EA4FHad4N4Z1Hk69WU+64KaA6e1d979b9xMcU+o1hpHGz9FxD0Oy+CYLBodNIfUnj1bGeWJ3Y7EhgiCIAiCIAiCIAiCIAiCsOS4aee1H1OBcTM+jAKtQSm7QAzVnk3aslSQKm0wrmpuqcC/bQWpGK4D2xe6AVCW6zoOYji8kBLPRYzyPckFWnFAfsSGxgRr+dEkuUwS30/u1udnaStjXJFcuwsY90ALndy+IxodPBYci5eqHQ6H8hpFevX4IndQaX7l3nEqz2qeFyaSG+FdENBOLtpE+xEprpKtPYE+eAj3Z2XZFhjLu0AgCIIgCIIgCIIgCIIgCIKwdLhp57UB8izL3iflibE6aBpolCu5OkbjaJ2X7ey6f/EE8PzoCl+m9T8bzwmdL90GDm1/fYMXUZJrrOjPGURUG1xh16icWJufae3P7veI9AeZ
3AZFXr435NRyz7HtKdx5E3jubUwuA8d46+Vbqr2YlSsiPHIB14HD3bh2a2+TQevc+QviRBwNjkERgsBx7RbqOrC1d+pNkIctCIIgCIIgCIIgCIIgCIKwVLhp57XCFUAtA1ierGuMF/1sPIeyKZDHfQ3W8h3NKLC8yBCl3CaDym/gmBOq3Yxmb1XtO6e9NYzv5fYiNXy3c54IbHzhNz+jAzdexI/dyB1Xnojtj89v1Kg84d0bE2jcQb3qMic1QWPISCSKwgp04kCMN5CvWfsNJd3P/mN5ESImN3cQ/RHEl+Q5uY3CwsLxd1QGlHYvCyiFZbtnztFOMJ8gCMI7ZXp6kl/84odcvHh+sUu5azhy5CDpdGqxyxAEQRAEQRAEQRCEu4KbFq99l7DxRGfw3NKW/5Ur8vq6tK+8uo0W3f6GjpeFjeVrvyYQYd2drEAc1oGz2svQVipwVJsCcdaPF8k5uMkTe5XfNBJcx7fyF1dubvdlzRp9ETknVVsoz7nsurwhaBjpu7gdUFaeA1y7gnEkEiloBml0IF/nRYNcJkAHpZvcefQE9HwB29FO4MD2P7TONXB0J9RuDjlgcEBZOCaLo7NI7rUgCO8U27axLItkMskvfvEjXn7514td0l3D1NQEjuNgWRaWddM3PwmCIAiCIAiCIAjCe4KbF689odTBC8nQuVhoX+tVngvbdx672dRWLnjD1YLdL4xGebEjSnmisBePgbFclzUKS+nAJOxKwm6Uh2VZXvwIXmK1O6/vCFee2zjIxzYmEMN9PVl7oreF5bm4LfwMbFffNgWObjwB22CwLOVnpKDQbiSKCuRzUBCJRlxx3/jic259nSewu3O7J1Hlua8D/dw/yZ7IrbVG66wrTBuNUTqo2RiFUd4xaPfigMZgTBZF2I0PKdTrBUEQrsuWLVtYtmwZ09PTTE6OMzk5vtgl3VUUFxezZcsWWlpaFrsUQRAEQRAEQRAEQVjS3Lx4TS7d2RVZXRHVzwnJCcIatEIpPx7DU7mVH/sBGFduVn4uh7+Ah8Kb12sO6bqmTcGwXPRHnrPZG+dnYRvyhnhGZ1fY9kYrrwalPMHdH648Ed7K5Wcr5YrnefUYT6W2vMPQWruRKt5O4WjE1emvd2Lza/Pzq/OjTILkk3yHtXZd144OssBddzreec65ud0kF/fnZSk7cKILgiC8Ux577DFqamoYGxvzfu8L74ZQKMSKFStYuXLlYpciCIIgCIIgCIIgCEuamxevlXE/jMby5GVw3cuW5SuwoFXOzWzlNVPMNVs0uXgNb45cjrSrc7sOYpOTpb19LWXlEkEsP5faciM8/IkAv/Wj6952v1Oek9uPMFGeKq4BmzyB2K8RhdIGZbnucFcRVqCswNXsqdho5QrYKnCgWyiliISjKGxPxM9lcOcyub3H8wXuAmHZFDR69ENHtKPR2qtJG3C8c2+0m2ft54jjObHzY1ccg3HMZesIt8KpU2+ze/cv6ek5u9ilvGc5ffo4yWRyscu4Z6mpqeGxxx5b7DIEQRAEQRAEQRAEQbjHuYXMa+26qI1BeyEbStkYrVHKzonTlucQtpU7zgRaqicKk+dGzsnXRmsv8iP3uGW5cSLa8cTkQOVWaEMggudnbLvubjfiRHmiuuMJykEGh8m1fNTGoCw/9iMXgeI3iQxyqn3nd86enYtH8Y9CeTEixs3rDofCngvbBIK8CURoAse0wbNIB3nbeWv7zRz9tY3BcbSXFa6CjGtjDI7RXja3AR1y41fcnxSOo3PuchGubwv+8290dJh/+qe/JRYrXeSK3rv09p4nm81ckWEvCIIgCIIgCIIgCIIg3D3cQua1QTsOeC7jQHC2XKu0Nm6ytZWf06y8nGptsJSVs0dr18XtO5yNl0/tSdGuuzs/TcSPAfEbM/rO7UA49rK18xzYboS2J9h6irLyhOVcqrQJnNvGy6023txOXpJ2vsiutR/PQSA+u40pLSxcJzoaVMjCtm24TJA3weec29zP9/BjQXwRO/e9f/HAbciYzWa95owO2hi0yQnbrv2bQERXeBnZeefKcPfd9h+fGWZ2coCKmmUUlVR6FyMWHmMMkyNn0U6aitrlhCOxYNuaNWt48803mZ2d5cIFcV0vBVauXElFRcVilyEIgiAIgiAIgiAIgiDcBDctXqMVOBZYrqPZb4IYZD2TcyIrT4DGUkHGssF1bSvLdqMsyEur9gRXo0AZ5UVtuNnOKAtL+Q0VvdaPnvvaNTprb0fcSA/P3e1+uJnVJs8hnf99MM5zWWsMFrbX0NC4xmVfZvYiQ/yGjYpcY0dX8Nb47RqNUti+eO2nfhgvdgRvTt+Jrfw5gnSQYIxbEbmxxqCNJqszZJwsjtEYZbzGkwrfwG0b3EaTxr3g4K5qYVmhIDP7bjJfG2PoO72HnhPPs2nXf6CxYyvKvjPidSY1x6kD38HJptn0yO9RUdMebPuDP/gDtm3bxtTUVIEDX1gcIpEIW7ZsYcWKFYtdiiAIgiAIgiAIgiAIgnAT3LR4/fRHniCdyeC6lfNuzQ8c074jOj/J2su89lzKuTFcJl7n5lPB/3M2aoW/hpsPXSheG29M3n4F8+QtlLdo4fr52wP/d944f7TfCNIU7BIcrfIFfYVt29RUlREKue5rV4C2CvYqjBDxji/fpW1y2wKh3GiqK6LU11WSSCbQjsZxHLTj4Gjt1mbAsiwi0SIs20YBicQ8kWiUurp6QqEQZaUl3C1oJ8NI31uM9r9NOjlzR4XiRHyCoYuHCIWLcLJJ8p85GzduZMWKFWQymTtWj3BtbNsmFot5dzwIgiAIgiAIgiAIgiAIdxs3LV6vXr3ydtbxHuX25PGWl8VobKy/LXPdLpxsmqGeg/Sf28v02EWcTIpIURk1TV10rH2cipplWLbr/B66+CYjfUeobV5HQ9sWQpHiYJ7ZyQF6u18mFCmmY837SCamuHjyBfrPvMb8zDhnjvyUyZGzdHR9ADscZeDsa4SjMaob1jDYc4Dh3rdIJ2aIFJfRtHwHy9Y+TnFpHUopMukEvd27ic8M07ZqV1AT4NV1iJH+I9Q2raW+dRMj/UfpOfECE0PnCEeLOf76N6htWUf7mvcTK3PnLCm5ey4CCIIgCIIgCIIgCIIgCMJS5uZjQwThGmQzSU69+T1O7f82yfkpyipbCEdjTAyfpf/s6wycf4Ot7/8D6ts2YSmbwQsHOP7GN1mz9Tepblh9mXjdx4l936SopJr61k0k5sboP7OX6bEBMukUwz2HmZ8ZoaZpLZYd5uT+b4NSxMrrSc1PEQpF0U6Wkb5jDJx9g7nJAdY/+D9RUl5PNj3P+WM/Z6T/GKUVjZRVtQbiNcYwdPEgx1/7Bqu3/SaVdSuZGD7DwLk3SMxNk0kl6D/7GunUHA3tW4mV1XK7LkYIgiAIgiAIgiAIgiAIgiDitbAADPce4vhr/0JibpJNj/wuzSseIByJkYhPcPyNb3DxxCuUlNVTVtlESXkj6eQs8alRUokZNxs9j2wmRXxmFKM12slQWbeSNdt/i8nhc8Rnx1m97eM0dmylpmktkyPnSMy
NMz12iYZlG1j3wGepaezCsmwGew5w6KW/49SbP6CubTNtqx5Ba4fE3ATx6VEy6UQQZQNuGEg6OcvctFsXGNpW7SI5P8nUaA+lFQ1sfPh3qW1aS2lFY2F0jiAIgiAIgiAIgiAIgiAIt4yI18JtxcmmuXhqN5MjF+nc8mE6N3+Usqo2r3mnxnEyjPQepe/MXlbd93GKS2vf1fyx0hoa2jYTLakknZyjvnUTLSsfxg5FmBnvBa/5ZtuqXSxb+wFiZXUAlFQ2c+n8fi4cf4nR/iM0dmx9l0emqKxbQW1TF+FIEcWlVTSv2EF1wyrEcS0IgiAIgiAIgiAIgiAItx/rxkME4Z2TSswwfukkAA3t91EUqw5cyUpZVDespqyqhfmZcabHe3Gy6dtcgaK4tJy6lo1EisqDR4tilVQ1rEIpi+mxHrLpxG1eVxAEQRAEQRAEQRAEQRCE24mI18JtJZOOk4xPYtkhYqW1uQxpj0i0lGisEu04JOMTaCd722uIFJcRLanEsu3gMUvZFJfWYNkhEnOTaCdz29cVBEEQBEEQBEEQBEEQBOH2IeK1cFtxsmkcJ4OyLCw7jFKFTzHLsrFDIcDgOOkrMq5vB5YVwrbDqPw4D4X7mFJoJ7Mg6wqCIAiCIAiCIAiCIAiCcPsQ8Vq4rVh2GMu2MdpB62xBE0QAbTRONgNKYduRK8TtKzFu98R3gTEOWjsFuxljXLe1MVihK0X1q88jArcgCIIgCIIgCIIgCIIgLBYiXgu3lUi0hKJYFTqbJTE7iqML4zkyyTmS81NurEiZGyuilAWKq4rd6eQMTjaNeRcKdjoxRzoxi9FO7kFjSMyN4zhZNz7EE7D9PG6jnYI1jM6STsyiHeeKmgRBEARBEARBEARBEARBWHjumHhtjGHv3r08++yzfPWrf8e+fftIp293sz5hsYkWlVPbvA6UYrj3LVKJmYLtE8OnmZ0YoKS8lvKaDuxQhHC0BKUsEvEJsnkNHLOZJMO9R0kl4pe5rxUKhTHG05ULxeXE3DTjQyfJpOeDx1KJaSZGzmK0prJ2GeFwDMsOEYoUu/nb81MF+dtz04NMDJ8hm7n8OaqgYG1BEARBEARBEARBEARBEBaC0I2H3B6MMbz44ot897vfZX5+ni9+8YusXbuWSCRyp0oQ7gB2KELHug/Qe+pl+k7vpaq+kxWbnqKouIKJ4dO8/fq/kJibZP2Dn6aipgPLDlNZu5xIUSnDF49w6cI+QuEojpPh4skXGOw5iNYOqFx+tWWHCIWLyGZSzEz0kYxPEY1VYDw12RjDuaO/oKSiidbOhwA4f+znDF04RGllHfWtmwlHY4CiorYDxR56u3dT17qBmsYu4tODdL/5fWYnL13hurZDUSw7zPzsGPHpQUrK6wlHS7DtEORnbAuCIAiCIAiCIAiCIAiCcEvcUfF6fHycnp4e4vE4ExMTaC2ZwvccSlHfuokt7/siR1/9Gsde+xcuHP8VdjhKYm6CZHyKzs0fpmv7b1FcUoNSivr2LXSsfYyzh5/jwC//ipP7vgUYLCtEXcs6Zsb7AYMxGgNEisqpa93AcO8xjux5lp4Tv2btjk8TLa4AoLSqjpKKek688U1O7v82RjtMj/WCUqzb+RnqWjdi2xFAsWzdhxjqOcTQhbfY84P/TFFJJU42Q6ysjrqWtcSnR9HGwXd3l9e0U1nXwaXzh3ntp39OVcNKNj/yBepbN2LZ4UU66YIgCIIgCIIgCIIgCIJw73HHxGvhvUM4EmPFhicpq2pltP8o0+MXcbIp6tu2UNO4mvr2LVRUt2PZ7tOvpLyeLY99kfrWjUwMnyaTSRArraOhbTNVDato73o/lhWirKoFpRShcBFr7/80xaU1TI9dwA4XU1LeEMR+RIpKWb31E2gnw/ilUyTnJ2ho30J922aal++guKQ6cHLXt25k50f+dy5d2M/sZD8YQ3lNB03LthGOlrJqy8coq2oNhPGyyha2Pv4l6lpfJRkfp7SymWhxxTtqALnUMcYwPniS0YFjaO3QsuJBymvasCz5NSEsHYwxzE0PMnhhP6P9R4nPjGK0Q7S4nOrG1bR0PkRl7QrskNzVIyw8xhhmJ/sZ6TtCKjFFbfN6apvXYYei191nbuoSl86/wUj/URJz4+7fuOpWWlbspL5tM5Gisjt4FMLdTjo5y9il40wMnyFWVkdjx1ZiZfXX3yc1x2j/MS6df4OZ8V4cJ0NRrJK61k20rHyQsqrWoC+IIFwbw/zcOCO9bzE3Pei9ht5OOBK77j6J+CSDFw4wdPEg8elhwBArb6CxYxstKx6gqKT6Th2AcJeTnJ9mpO8w0+O9lFY20bLyISLRkmuON9phbnqY/rOvMtp/lOT8JJYVory6neaVO2ns2EYoXHQHj0C4WzFGk5gdY+D866QSM1TVd9K84oHr6gLGaBJz4/SfeZXh3sMk4uMoZVFW1UrT8gdoWfkAoXDxHTwK4a7FGJKJafrP7CExN05F7XLaVj9yw+ff9Hgvvd0vMX7pJOnkHKFwlIq65bSvfoy61o1LWtcSVUpYECJFZbSseID61o2kEjMSgOIFAAAgAElEQVRonSUciREtrghEax+lLKobVlFW1UIqMY12CsdW1XdeNl5R1bCKWHk96eQslh2iqLiSkYFj7nagvKqN2ua1tK16lGwmQShcRDRWiX2ZOzoULqZp+f3UNHWRTs4CEC2uCHK461o2FIy3QxFaVjxITWMX2UwymFdZ9m0+g3ee+PQgx177Z3pPvYIdilAUq6K0sknEa2HJYLTDYM9BTuz/FpfO7SebSbpvUJRFKjHD+bef58Lx51m/87O0r3mfCIDCgpJOzTFw9nVOH/4hwxffAmD9Q5+jonb5NcVrrR2Gew9zcv+36Dv9KnNTo2TSaZRSRItjXDzxIl33f4rOLb9BrLT2Th6OcBeitcPU6DnOvPVjek68QHxqhKYV2yitaLqueB2fGebMWz/m9KEfMjl8gdR8HK01oUiY0spf0971KOsf+Dz1bZtFwBauSTaTZHTgbbrf/B4DZ14nk0mwbN3j1DR2XVO8NkYzNXqek/u/zfljv2RmcohMMglAOBqlp/YFVm76MOt3fp6yqtY7eTjCXYaTTTMx1M2pN79HX/crpBKzNK/cQV3zhmuK1042zdil4xzZ8ywDZ98gPjNONp1BWYqi4hIuntrNmu2fpGv7p64rgAtCJp1gsOcAp/Z/h8GeNwHo3PI0TcvuR9lXF/+0zjI5cpa3Xv57+k/vzXsNCJHiGD0nXmT11o+z4cHfkfcwwnXRToah3sOc2PevDJzdh3bSLFv/AVo6H8a+zvNv4NzrHN79VUb73mZ+dgonm8WybIpKSrl48iU2P/IfWbnxaZS1NAXsJaVKnT9/npMnT+I4Dhs3bqS1tZXz58+ze/duTp06xezsLJFIhLa2Nh555BG2bdtGcXExWmu6u7t55ZVX6O7uZmZmhkgkQmtrK+973/vYunUrRUXXvoI6MjLCvn37OHbsGENDQ8TjcZRSlJeX09HRwc6dO9m0aRPFxde/CjY4OMjLL7/MsWPHGB0dBa
Curo777ruPRx99lJqaGk6dOsWFCxcA2Lx5M83Nzdj2lcLn5OQk+/fv59ixY/T39xOPx7Esi5qaGrq6uti16xHa29sIhZbUj7AQpQhHSwi/wz/+4UjsBk6N/KkVRbFKimKVuccuy5y2rDCxshu/+VfKIlpcEbirb4Rlh4iV1b2jsXcL2UyKc8ee48LbzzM1OkhRrIRMKh7kiAvCUmBq9DzHX/86599+gZqmVXRu+Q2q6ldiWTZz08OcfevHDJw7iJNJEo1V0bJi5xUXywThVjFaMzl6jrNHfsK5o79geqyXVCKBbdsk41MYc+1ItKmRs5x44xuceesXFMXKWLP9Y1TVrSCVmKb/7OuM9nfjON8kVlbH8vVPyB0EwjVJzk/Sd3oPpw/9gMGewyRmp3EyGcprhshmktfcL5WY4cLxX3H01X9mfmaMxo5NNC2/HysUYWzgOANn3uDMoZ8RiZRQUtFIaUXjHTwq4W7AGM3c9BAX3v4lZw7/iLFLp0knExijmZ8ZQevsNfeNTw/T/eb3OP7Gt8EYlq9/H3UtG3CyKYYuHuLSuTc5eeB7FJXUsGHn5wlFxIEoFGKMITE3Rs/J5+k++ANG+o6TTs2jsw4VtcNonbnGfprp8R6O7PlHzr71K4pLK1i385NU1CwjGZ+g/+zrDF08TjaTpKyyhWXrPigX74QrMNphdmqA7jd/wLmjP2NiuIesZ0JIzI1juPp7Z2MM8elhju75R7oP/oRIUTFr7v8Y1Q2rSCVmGDj3BoPnj5BOfp2yqjY6N39Unn/CFRhjmJ8d5fSh73Pm8I8ZHzxLNpPByWaZnx29ol9bbj/NxNBpDvzq/6H/zJtU1bexZvsnKKloZG5qkIsnX6Tv9Js42TRV9auoaeq6w0f2zlhS7+oPHjzI3/7t3zI3N8eXv/xlqqqq+PrXv87BgwcZHh4mlUph2zaVlZX88pe/5Pd///f5yEc+wp49e/ja177GW2+9xejoKMlkMhj3/PPP86UvfYmnnnqKWKxQGNVas2fPHr7+9a/z5ptvMjg4yMzMDJmM+0cvGo1SXV3N8uXL+djHPsbnPvc56uvrsS67EmGM4dChQ/z93/89e/fuZWBggHg8DkBJSQmtra08+uij/N7v/R4/+9nP+PnPf45lWfzpn/4pdXV1BeK11poDBw7wjW98g3379jEwMMD09DSpVArLsojFYjQ0NLBu3Y/47Gc/yxNPPEF5efkC/2SEe53h3kOcPvxjnGyGSLRoSd8uIrw3MVozePEgA+cPEI4Ws2b7J1l938eDi05ONk20uJy56SGG+44z1HOA2qa1FJfWLHLlwr3G3PQgJ/d/m+43f0hxSSUrNz/J2MBxJoZ6rrtfJhWnt/tlLp56hZLyWjY8/Dus2PAExaW1ZNNJGpdt59irX2Nq7CLT4xfJpOIiXgtXxcmm6D+7l8O7v8rc1DDNK7ajnSwXT7563f2M0YwPdXP26HMkZidYufFDbHjo31HVsArLspmZ7Ke8+rucOfwzZib6iM8MiXgtXEFqfprzx37BkVf+AWMMy9a/n/nZUfrPHLjuftlMiuG+w5w98hzKslj3wG/Tte23KK1sQmuH1sGTHC/9Jv1nXmNq7DzJxDSlIl4Ll5FJzXHx1Eu8+cL/SyY1T8faR8hmklx4e/d190snZuk7vYeeE7sprazlvvf/z3SsfZziWBWZ9DyNy7Zz5JV/ZHain8nhM7SteZTQdSLAhPcmyflpTuz7Fsff+FfC0RLWPfBJhnreZLT/zHX3y2YSDJx7jbNHfkG0uIRtH/hfWL7hSWKldWSzSVpWPsjh3V9lbOAk44MnWL7+QxJfI1yBk0ly6uB3eOvlf0Qpm/UPfZqhnsNcOnfkuvtlUvN0H/o+A2cPUdO0nAef+j9o7NhKpLiMVGKGxo6tvPHcf2V+ZoyJ4dMiXr8TZmZmOHnyJJOTk7z66qtcvHiR6elpnn76aTo6OpidnWX37t3s37+fV199FcuymJ2d5Tvf+U7BuHg8zp49e3j99dfZs2cP4XCYtWvX0tXVVSAUHzp0iL/8y79k9+7dZLNZHnzwQXbs2EF9fT2ZTIbT3af59fO/5rXXXqO3t5dwOMzv/M7vUFlZWVB3T08Pf/M3f8OPfvQjZmdnWbt2LY899hgdHR3MzMzw6quv8sMf/pB4PM6lS5c4cuQIRUVFzM3NXeFs3b9/P3/xF3/Biy++SDKZZOvWrdx///00NzeTSCQ4evQoe/bs4bnnnqO3t5dsNstHP/pRSktL78jPaClTVFpN+5pHsWzbdWTL1cp3xNz0EKcO/hvTY320dO5gpO9tknNTi12WIBTgOCmmxy6SjE9T09RJTeMaIkW5C3d2KEJNUxfl1a2MDZxlZqKfdGpOxGvhtpNOzZFOztC6aiedmz9KKBIjMTd+Q/F6ZrKfSxf2k07O07n5I6zc9BTl3m3xkWgpbasepaS8gfm5Mcqr28VxKFwT7WRJzI1TXFrDqi0fpWn5TvpOv0z/6Teuu18mPc9I3xHG+k9S1bCc1Vs/QUPHfUE8WE3DGjY+/O9p7dxFOFpCeVXbnTgc4S4jm0mQnJ+kqn4lq7Z8jPLadrrf/AEDZw9ed79kfILBCweYnRyibc2DrNn2SaobV4F352Rjx1aKS9zndKy8gWixmHOEK3GyaZLxCUormlh138eobV7H+bd/wYXjL19zH2MM8Zkhert342SzLFv3OKs2/wZFJVUAhKMltK9+lNKKRuZnRymvaZfYROGqONkU87PDNC67j67tn6KssoWZiT7gOuK1MSTnp+g5/jypxDxd2z/K6m2fJOa9RwlHY7Ss3ElRrIq56UuUVbXKnaPCVdHaYWaij9qWtazb8Rlqm9czNzUIXFu8NkYzOzXAhbd/jWXbrHvg03SsfTwwyITCxSxb90GKYpVkM0mqG9fcoaN59yypfxWhUAilFFprfvWrX7F8+XL+8A//kAceeICqqirS6TS7du3iK1/5CgcPHuTAgQMMDQ1RW1vLH//xH7Nz506qq6tJp9M89thjfOUrX2Hfvn0cPHiQI0eOsHz58sB9nUgk+P73v8/evXuZn5/nE5/4BF/60pfo6uqitLQUx3EYHR1l8+bN/OVf/SW9vb1861vf4uGHH2bTpk1BXEc2m+WnP/0pzz//PDMzM2zfvp0vf/nLQS3JZJIPf/jDPPvss7z00kvMzs6STCYpLi4OjtdncHCQZ599lhdeeIFUKsWnP/1pPv/5z7NmzRrKy8vJZDIMDg7yk5/8hK9+9ascOXKEf/iHf6Czs5MtW7Ys7QiRO0BZZTObdv0HUIqSika51eYd4GTTnD/2c/q6X6W+ZS0dXe9nZryPZHx6sUsThMvI/XtWlu1+XDbCskJ5bzYk8kZYGErKG1j3wOdcca+6nanRs1fEVl2OMYbp0QtMjZwnVlZNQ/tWSssbCsaEozEa2u9byNKFewQ7FKG1cxd1LRuprF2GssMMnHvthvsl4xNMDJ0im05S17KemuZ1BQKNsizKq9spr25fyPKFu5xocQUrN36E5es+RFV9J8n5iXfwmtsQnxlmd
OA4oXCUxvatVNYuI/9vux2KUt24murG1QtZvnCXE46W0N71fpqW76C6cTXZdOKGd4xqJ8PMRD/jl7qJlVXR0vlwQfSkP6/8DRZuRDRWybodnyUcLaGyrpP52ZEb7qONQ3xqkJG+Y0SLS2hb8xjFlzWltUNR6ts2Ud+2aaFKF+4BQuEoXds/RSgcpbphjRcTd4P3INphfPAk06P9lFU30L7mfVfc2RmOxGhb/egCVn57WFJqpx/HYYxhenqaZ555hieeeIKKiorgRdHDDz/MQw89xPHjx5mZmeHSpUt88Ytf5Mknn6SysjIYt3PngzzyyCMcPXqUmZkZuru7SafTgXg9OjrKoUOHiMfjVFZW8swzz7Bjx46CaJGKigo++Vuf5IUXX2BoaIgTJ05w4sQJurq6AqF4ZGSE559/nrGxMUpLS/n0pz/NE088QVVVVZCbXVtbSzQa5cKFC7z22ms4jgMUZjQbY9izZw+7d+9mdnaWhx9+mC984Qvcf//9RCK5J1dtbS01NTWcO3eO7373uxw4cIDdu3fT2dl5hSP8vUYoXExl3YrFLuOuYrjvLc4c/hF2KMLqbc9QXt0uV3qFJYkdilBe0040VkZ8epj49BCOkyn44xufGWZ+doxQOExZVSuRqNyRItx+osUV1Ldt8l5vvLOLpNpJMzPZx/zcBNUNKymtbCYxN8HEcDdz00MopSiraqW6cQ1FsSq5+CpcF8sOU1m3HFAopYKG09fFGOZnx5iZ6McOu79PbTvCaP8xJkfPk80kKIpVUt2wmvLqNqzLGlwLgk84EqOmqct9H6MUyfmJG+6jHYf52RFmJwYoipVTWbeCTDrBcO8RZiZ6MTpLSUUj1Y1rKCkXA4pwbULhKNX1q0C5v/+y6cQN93GyKWYn+0nGZ6hp6qSipoP47CjjgyeIzwyjlEVZdRu1jWuJxt5ZDyThvUk4UkRD+30o68qeZddCO1lmJ/uZn52gtLKeqvpVJOOTjF16m7npQQBKK1uoa14f3A0gCFfDssM05j3/rtfjxMdxsowPniKbyVBe00ZpZTOzkwOM9h8jMT+BHYpQWbuc2ub1Sz6qZsmpVP6LlY6ODnbt2kVZWVnBC5hoNEpnZyeRSARjDO3t7VcdF4mE6ezsJBqNMjMzw8jICNlsroFIVVUVf/Inf8JnPvMZtNbs2rXrqk0d6+rq6Ozs5KWXXiIejzMwMBBkYoPbZPLcuXNkMhlWrVrFQw89RHl5eUEtlmWxceNGnnjiCY4dO0YyeeWTLB6Ps3v3boaHh4lGozz11FOsXbuuQLj2aWpq4oknnuCll16it7eXvXv38swzzxSI/IJwI+IzI5w6+F2mRi/Sdf8naVn5IMl5iQsRliZKWTQv30HT8m1cPPEypw7+G+FoCY3LtmPbESZHz3Lq4HeZGD5Hfds6mpbveMdNWAXh3fBuRGufbCZFYm6cbDqFHYpy6fwbvP3aPzM+2E0qMYtSimisgvq2jay+7xM0Lb9/yb+AFBaXd9ubwgCpxDSJuPtGZX5mlMO7/z8Gzr3B/OwY2skQChdRXtPO8g0fYuWGj1AiedfC1VDqhnebXI52MiTik6QSs5SU1zI1ep7eU7sZ6TtKcn4KYwyRolKqG1fRuelplq374Dtu+C6811Ao6909/5xsmvnZUZxsBmWHuHR+H/1n9jA5cp50Mo5SiqKSSurbNrFm2zM0LX8A612Ik8J7CfWuhGtwxev47AjZTBo7HGW0/yiHXvjvTAyfJZVwLz5HY+XUNq+ja/snae3cJWYy4Zq82+ef0Vn3IjEQKSrj9OEfcv7oc8xM9JNJJ7Esm+LSahqXbWXDQ/+OGokNefcsX76curq6K5ojKqWoqKjAtm2UUrS3t1NfX1+QZe2PKy8vDxzSiUSiIF+6rKyMD37wg2QzWdKZNNFo9Iq1wBWeKyoqCIVCaK2Zn59Hax1sv3jxIlNTU2itWbFiBY2NjVfUAhCJRHjwwQepr69nfHz8iu1jY2N0d3eTTCapq6tj48aNlJTErhjnH9uGDRuor6+nr6+Ps2fPMjIyQnt7+3suOsQYw2DPAS6de4Pa5nW0rNxJ+DpuS2MMly7sY/D8fmqb19G8cud70p3pZNOcf/vn9J3aQ13rOlZueoqS8gZSCYkLEZYu5TUdbHr43xOOxLh0bj+v/fTPKSmvR9khEnPjJOcmaVqxjbXbP0V96yZ54ScsGbTOkEnFcbIZxge7mZ8dJVZWx7J1H6CopJq56UH6uvdw5tBzxKdHQClaVu6UzE3hNmLIphNkkvMk52e4ePIlorEKqhtWsWLjkzjZDMMXDzHY8xbTYz1kM0nW7fgMRTFxgQm3jtYOmXScbCbF7OQQZ4/8lHAkRkvnTkoqmkjNTzJwbh89x19mdqIfg2Hlpqex5Q4A4TagdZZ0cpZsNsvUSA+nDnyXSHEZKzZ+mKJYJbOT/fSd3supgz8mPj3M9g9GaFq+fbHLFu4RjHFIJ2ZwslnmJoc4/vo3CIWLWLb+g8RKa4nPDNHX/SpnDj/H3OQAlh2mtfPhxS5buEfQWpOYm8Boh7GBk0yP9lBW3UrX9k8SLiphauQ8F44/z/HXv0NibpyHfuP/DnryLDWW5LsipRTV1dVEIpGrOon9rGh/XDQave44AMdxrmiOCBAKh5hPzPP222/T19fH+Pg4MzMzpNNpHMchm82yd+9eEolEMI+PMYahoSFSqRRKKZqbmykuLr6m+7m9vZ26ujrOnLky0H9sbIyJiQkcxyEajTI+Ps7Ro0evKoQDjI+PEw6HsSyL8fFxxsfHcbLOXS9eZ9IJJoa7KS6pobSi8ca3rRrD2MDbnNz/HVZu+gj1bZuvK15jDKN9Rzn+xr+yctNHqGvd9J4Ur0f6j3L60A+xQmFWb3uG6sbVIvQJSx7LsglFYoAhMTeJdrJkUnEsO8T87ATZdIrSqibXFSaOGWEJYbTGcTJoxyExO0XT8q2s3/l56lo2EIqUkE7M0NC2ibde+UcunTtIVcOvqapbQWll82KXLtwrGNznoM6Qnp9H1Visuu/jLF//IWKltWjtMNn1Pk688U1OH/oJ5448R23TOtpWPyp39Qm3jDEa7f0OzKRShKMlbH7kP9K07H4ixRVk0wmaVjzAsVe/Rm/365x96yfUtWygqr5zsUsX7gHcv8FpdDZLOhmntKqZLY/+J6obVhEKF5Ocn6K+bQuHX/ofDJw9QEXdj6luXC1NQ4XbgjGGbDaFdhxSiTjRWCXbHv8Dapq6CEdLSCdnaFx2Pwef/+8MXjjCqQPfpa55g0TYCLcHo3GyKYzWzE2OsHbnM2x86Hcpq2rFDkVJzI1S17qR1376Xzl/7EUaOu5jy6P/6V3f4XcnWLJq1bUEaaDg8XA4fN0X1dfbFp+L86tf/4qf//zndHd3Mzk5STweJ5lMBmK31ppEIkEymbxiLmMMMzMzgaBdXl5+TbEZoKSkhIqKiqs6vGdnZgN3+MjICH/9139Naem1RdVsNkt3dzfZbJZkMunWoZ1rjr9bmB7r4a3dX6Vt9aOs3Pw00RuI10opWlc9QqysjrLq
NiJFZTdcI5tOkIxPk0nHwegbjr/XmJ8dpfvgvzGdFxcSjsitmcLSZ3zwJEdffZa+03tpWn4fy9Z9kIraDpRlk4xPcPHUSwyceYO3Xv47986c1Y8RihQvdtmC4GEwGEoqaunoepzmFTuDaJBwpJj2rscZHzrN5HAPwxffYnq8R8Rr4fahcq+J7XCYhvYtrNjwJOXVbcGQ+taNJDc8yUjfUSZHLjDcd5imZfcTjl79TkBBeNcYQzRWQmvnQ7R3PU7EiwYJR4ppXvEA8ekhhnuPMnbpFKMDb4t4Ldx2ikurWLnpaRqXbQ+iQUojxbStfpTJ0XMcev7vGLrwJjPjF6lr3bjI1Qr3GpGiGJ2bn6Z55c7g+RcKF9Ha+TDTYxcY7e9m8MJBJkfP0dixdZGrFe4JfA1TKWIV1f8/e/cdHcd1J/j+W9U5obuBbgCNnEEAzEFUpKhsBVuWg0b2eLye8WhmPG/Gb/b5nZm3+87ueM++3dnZM9Eee2Q5SrZlK9iSrEhKIkWJCQADQBI554xG6Jzq/QGKIgVIhCiSAMnf5w8dEV1ddav6VtWtX937u1RvewRPTvWZjx3uXErW3ctg50Faan9Hb/ObVG19eFXmX1+1wevlutDeIKFQiCeefIInnniCU6dOEY/HKSwspKysjPT0dOx2O0ajEb1ez/Hjxzly5AixWOzclWgQjUbRNA1FUTAajUsGpt+j1+s/NCgfT8TPBMwTiQRjY2NMT3/0BCgOhwOHw3GmrFeDmYkuRnqP4c4qR0suIxivKKRnlZOeVX7pC3cVWEgXsov+tnfw5q9dSBfiyJReVWLVi0eD9DS9QW/THhzpOdRc/2XyK2/FYFwIqqRSSVyZZcTC8/S17Kfr5Gu4M8txZ5aucMmFWMhPp9ObUFUd1jQPjvS8RTmtTRYnLm8pZmsagdkxQvMTK1RacXVSUHUGdDoDBpOZtPR8LLb0c5ZQVT2O9HycGQVMjXQR8A8RjwYkeC0+MUVV0emMqDo9Josdp7cY4wfqlcFoxekpwubMZH56hLnpgRUqrbjaKKq6cA/W6bDYXbi9JYtyWpvMDjKyKzGYLITmJ5mfGZLgtbgoFEVBbzCjqComq4P07MpF9U9vMOPJqcZksREO+Jmb6pPgtbgoFEVFb7SgKCp2ZzZOT9EHl8BospNdsInWupeYmxwgHJyS4PVqcvDgQX7605/S2NiIzWbja1/7Gvfccw85OTlYrVZMJtOZvNqPPfYYp06dWhy8ZiEn9vlSk7xH07Rz0o6cTa/Xn9leXl4ef/VXf0VRUdFHBsPP/m5VVRUmk2mZe395xWNhhrsPM9JTx7x/iGQyhsFoxeUtpXDNbbgzy0gmYvS1vkX78d8RmptmuLsWNI2c0u1k+KoZ7T1CaH6CrMLNTAyeYKTnCC5vMWu2fIHA7AhjAw24M8vIKtiEwWhF0zSmR9vobXkL/3gHqWQcm9NHQeVO4vHQkuVMpRJMDJ5ksPMAMxPdJGJhjGYHntwaCitvw+7OveIn75gebaPj+IsE56ZIzy5noP1dRvuOnvk8ODtGcGaMRCxKf9vbhAITeHzVZBduXlavdiEuldD8BOODJ4iEApRuWI8nt+ZM4BoWUoqkZ5aTmb+Boc46JoeaCM6NSfBarAo6nRGTOQ293oCi6BaG4mna+70hWGhcGo1W9AYT0UiAZGJxm0OIT8JgsmG0OAjNTy/UwSVT7hkXJsrTNOKxMMmk1EPxyamqHqPZgcFkRlFUFEX3wUsgCgsBHoPJSiqZIBEPr1yBxVVF1RkwW12oeh0o6pLD4RVVh8FoQ6c3kErGiceWfl4U4uNSVT1mqxudXg8oH1L/VAxGG3qDkWgoQCwauPwFFVclRVGxOTIX/l9duu333sTxqqqSiEeJx1bn/feaDF5Ho1F2795NZ2cnqVSK++67j2984xusWbNmUc7oVCr14SlMFLBaraiqiqZphEKhDw1OA0QiEQKBwDkTPr7Hbref2Y7JZGLr1q1s2bLlis9hnYhHaTvyLM11T5OIR7A7s9HpTQRnxxjqrGW4+zBbbv8LHO5chnvqGe8/QTwaZXq0i1gkiMnixO7Ko6dpN+MDJ5gcbmGkp57w/BSJshDx9fcz1n+cE/t/Rsm6T5GeVYHBaGVquJmje/6Noe56bA7P6clgWpkeaSORiJBMxM8pZyoZp+vkazTXPsXc9BC2tEwMJhszEz0MdhxktO8YG3c8Skb2mis6N/TsdD+zk/1EwyGGOuoY7W045/NUMkE4OE8qkaTrxBv0t75L2cb7cHmKJXgtVlQsGiAamlsYcmxxotObFy2j6gwYzWmoegPRcEAefMWqoTeYsaZlYrTYiIZmiAT9aFoKRTnrhaimEY+HSSRiqKpeJmsUF5WiKFhs6dgcmUwNdxIOTpGIhc95CQgLebET8TAaCy9dFKmH4iJQdQYsDg8Wu5twwE9ofhwtlUQ5q02toZGMx0jEwmd6agtxMej1JmxOHwaDiVgkQCgwuWgZTUuRiIdJJmIYTFZ0+tXZKUxceVSdHrsrB6PZQjwWIjg3tmgZTdNIxMMk4jEUVUVvkLSH4uJQdXpcmaUoqkp4fopoeBaz1XXOMpqmEYssxClVVYdOvzrvv9dkizQQCNDR0UEkEsFms3HHHXdQXFy8ZKA4HA4zODi4ZK9rRVHIyMhAr9ejaRqTk5PE4/FFy71nbGyMiYkJEonEos88Hg9utxtVVZmZmcHv95NMXvkTMM77B2g79gKh+Uk23voovqIt6PQmYtEAnQ0v0XH8JXqa3mDdTf+Bis0PMT89QGjOT27ZdkrW3k2GrwqD0Uo0NIt/vJ9UIkFB1a1k5W/EkXqlAioAACAASURBVJ6PxeYhFgkQmBknGppB01LEY2E6Gn5HX+u7+Io2UXP9l0lLLyAeC9HTtJv2Y78jHo3CWZ3kJ4abOHngSWYn+6i+/vfIK7sJo9lBKDBJ06Ff0Ne8D4stg023/gk2Z/bKHdBPyJaWRXHNHQTmxjjnAJwWC88z1neSaCSMO6sIu9OHy1uCzrA4UCjE5aTTG9HpjadvrvNL9kpNpRLEonMkE3FMZvsV/aJJXF1UnQ6Xpxi7y8f0aBdTI63klt14TtqGWHSe2cleIsFZXN4CTNbVN1xPXNmsDi+uzBL62w4yPdbB7GTvQh083UFD01LMzwwzO9mPTm/A4vAu9MIW4hNSVR32tGxcmaXMTOxjcugUofkJ7C7fmWUS8Qhz/gECM+MYzTZsaVkrWGJxNdEZTDgzCnC4swnMjDHae5Tc0hvRG94PUMejQfzjncQiYezubKl/4qJRVQNp6fk4M/KYGuliuOswhWtuOyd9XDIRZWq0nUgogN2ZIXOeiItG1enx5q7FbLURmBlnpLeetPSCczrnJuIRxvob0FIprGlezKv0GeSafLKPxWKEQqEzvaqzsrI+NEjc0dFBU1PTmdzWZ3svxYfVutBrpa+vj7m5ObKzsxel+0ilUhw9epSxsbEle157PB6Ki4s5evQofr+flpYWbrjhhg9NBRIIBDh27Bhut5uysjIsltX5di44N0ZgZgSLzU1
u6Q14cqrODJWxOjLJLtqC3enDavdgd+XicOeh6vW4vSXkl+/AbHMTDc+CopKMx7C5sqnc/DkycqpQVT3aEscyODvCcHcdiqJSuuF+8it2nHn4MprtjPQeYXZq9MzyqWSc/ta3mRxuo7BqBxWbHsLlLUZRVDQthZaMMzXcxkDbO5Ss+xQWh+eK7RGXkV3Jhh2PkkxGl/x8Zrybw4F/IDU5SNmG+ymouBVrWuaqzHkkri0Wu4e0jAJ0+nomBk4yO9mD3Zl9ToA64B9icqiZWCSIr3gTVptnBUssxNkU3JmlZOavZXK4nZ7mN0nPrqCo6g4MJhvJRJTh7loG2t8lEYuSnl1OWnreShdaXGXMVjeZ+RtwuN9kfKCJzhMvY3V4SUvPRwNmp/roa36T6bFu7K5MPKc7EAhxMdicWfiKtjLYfoihrnq6Tr7Cmq1fxGRxkkommBpppfvk64SDc+SUbCA9e81KF1lcJVRVhyM9n5yy7Zw68Aw9Tbvx5q2nsOo2dDoDiViY0b5jdJ98HYD0rHKcGYUrXGpxtVBUFVtaFvmVtzAx1EFv81tkF26mZN296PRGkskYE4Mn6Tj+IloqSXp2GS5vyUoXW1wlVEVHRvYafMUb6Dl1gKZDT+H2lpGZvwFFUUjEo/S1vkV/6zuoOh05pdet2tjPlRmB+4SMRiMWiwVVVYnH48zNzS2Z7sPv9/PrX/+azs5OEokEiqIQi8XOCT6XlJSSlZXFwMAAnZ2dtLS0UFhYuCiYPDg4yK5du/D7/Uvmxbbb7dx000289dZbDA8Ps2vXLu68806qq6vR6c7Ns6xpGnV1dfzzP/8zkUiEz33uczzyyCO43auvki3kt7MyNz1M14mXURQVt7fk9BvwQpzpBSiqAihLBvXPpqgqmXnrsLtyPjJ4HJgZJjg3tjAhR2bpOb2GnRlFuDNLGe05eeZv0cg8E4Mn0VJJfEVbsKV5zwTYFUUlI6eGtIw8JgaamfcP4s1dh2q8Mk8do9nxkek/tFQKg9Gy0EPGmYM7qwKDcXW+GBHXFpMljcI1OxnrO87kcDsN7/yQef8g6dmV6PRGAjPD9DS/wUDHISx2JwWVO7C7c1e62OIqo6WSDHS8S9fJ1wifnlAxGp5jcqiVeDRKZ8PLTI+0LAy3U1RyirdRuuHTOFw5WB1eiqruZGKwifGBFo688R2Gu2uxObMJzU8w0l3PxFA7Lm8eBZU7sbuk/oqlzU3109H4EmN9xwCNVDLBzEQv0XCIyeF2Dr3yd1js6YCCzZlN6fr7ySu7EZ3eSFbBJgqrd9J8+Fla63/L3PQgnpwqNC3F5HALw131AOSV30h24eYrfq4PcXG9V09ajzzD/OkJFeOxMNOjnSRiMYa769nz9LcW2o6KgjuznMotnyc9qxyTOY28shsZ6amnp+ltGvb9mMnhZpyeImKRAGN9xxntbcRiT6Ow6nbcWWUrvLditdE0jdnJHpoO/5LZyR4AEokoM+M9JGNxJgab2fvsXy+8dFMUHK5cqq57BE9ONVa7h+KaexjrPc7EUAe1r/8Dg537sTkyCc6PM9xdx8RQBy5vHmUb7sf8gQlthUgl4wujtff/lFhkHli4/o33nyIRj9Pf+g6vP/EnKIqCoupxZ5axYccfY7GlY7KkUbLuXoa7ahnpPUnt7n9muKcOuyuHSHCa4e46RntPYXd5qNj8EFaHdMAR59K0FP7xLo7t/R6x8BwAyWSCsb5GUqkkI11HeP3JR0/PaaLicOey5Y6/xGr3YHFkUL39y0wMtTLceYx3X/hbcsuux2RxMjc9yED7fub9k2QWVFC+8UF0OsMK7+3SrswI3Cdkt9spLS3DaNxLMBhkz5493HjDjeQX5J/pMd3X18fPf/5zXn31VUpLSwmHw4RCIQYHB4lEImiahqIoFBTks3nzZpqbm5mcnOSpp56ipLiEmrU1Z3pzDw0N8fjjj9PQ0IDJZCIaXdzrVVVVdu68jVdeeYWpqSlqa2v58Y9/zDe+8Q1KS0vPrCuVSlFfX89jjz3Gvn37MBgMPPTQQ8ua2HElODOKKN/0GU7uf5JTB3/FQPsBXJnFZGRXkl20FW/u2mX36tHp9FgcXtTznEyRkJ94LILDnYPB5DhnUgSd3oTF7kHV6+H0SIl4JEA4MEU8FqHrxGtMDjefExxPJqL4x7uJRYKEA1OkknFAArpCXE6qqie35Ho23vrHNB3+FSPdR5kabsViT0dRdcTC8wTnJrG7Mqnc8jmKqu/GZElb6WKLq4ymafjHu+hqfJ3ZyfFFn08O9TI51AuAqipoqSR5FTtwuHJQdQZ8xdex4Zavc/LgE4z0nGB6rBeD0UQ8FiWVTOLJKaPmhi8vjBiSF4fiQ0RCfgY7DtDV+PaiDhGJ2VmCs7Vn/u3OysWTW0Ne2Y0AOFw5rNn6RVKJOB0Nr9LV+Ab9rfsBjXg0gtnqoHLLZ6i+7ksybFksomkawdkRek69wcRgz6LPZycnmJ18G1gYoZpTOkBB5Q7IKkdRdaRnVbD+5q8B0Neyn+bDz2O0WEklEsRjUdIysqna9nnKN34Gk9l+GfdMXBk0wsEpepvfYrS3bdGngRk/nQ37gNPpPXMKKajciSenGr3BRHbhZjbd/meceOcnjPSeYnq0e+EeHI+SSiTw5paz9qavkl9+i7y4E4ukUkkC/iHaj71KOLB4QkX/2DD+sWEAdHodvpJ11Fz/ZbClo+oMeHxVbLnzL2jY9ziDHceYGR/AaLaQiMWIx6JkZBew9qY/oLj6zit2lLe4dDQtRTgwQcfx1wjNzS36fHZqgtmpXQAoqoInp5j1N/8R2D3odAYKKnZw3d3/J43v/JjBjmNMDLagMxiJRcKkEgnyyjezaeef4M2tudy7tmzX5FlhNBq58847eP3112hra+OVV14hmUyyadMmLBYLY2NjNDQ0cPToUbZv3862bdsYGxujv7+fQ4cO8dhjj1FVVcUDDzyAw+Hgc5/7HLW1tZw4cYK33nqLSCTC9u3byczMZH5+nmPHjlFbW8vmzVvo7++jsbFxyXIVFhbw9a9/ndHRUY4fP86vf/1r+vv72bJlC1lZWSQSCfr7+6mrq+Po0aMkk0m+8IUvcMcdd2C3r84GnsnsoGrrw6S58xho38/YQANdjbvpbdqLIz2H/PKbqLru904PjfnoALyiquj0hiVn6D1bIhFDS6XQ6Qyoqm7RhKo6neGcHD8pLUkqlQBtoQddYHbk3Em00HB5i3BnlmBLy16YpfUq5XDlsv2ebxGNzJFdsHnVJusX1yaT1UXx2ntweoqZGDyFf6KTcGAKTUthMjtweorw5NTgyanB6vCc91ohxMelqDoKKm/FlpZJLHKemeAVZSHH5lkBQJMljaLqO3C4cxntO7aQXzMawGi04fQWk5m/Ho+vWnp8iY+UllHAltv/nPJNn15q+opzGC1peHPefxB57wF6484/xVe8jYmhUwTnxlGUhV7anpxqsvI3kJZeIPMGiEUURcWTW8PND/4XIsGZ8ywMVrsXd2b5mT/pjRZ8xdsw29IpWHMrUyNtREJ+9HoTjvR8Mv
PW4c1dd3p+GeXD1y2uSYqi4PKUcOMD/5lwYOo8C4PZ4sKTU33mD2ark+Lqu7C7chjrO4Z/vIt4NIjRbMflLSEzfwPenBpMH5jMTAhYiCFk5m/kji/9Pcn40ik436OoKha755z2nMFkI7/iVqx2L6N9x5geayMWmcdgtOL0LLQBM/PWSxtQLElRdLgzy7nj9/4XiXjkfAtjtrqw2t/rwa9gsjqp3PJ5XN5iRvuOMTvZSyIRxWxxkp5duXD9y127qiervSZbpYqicMMNN/Doo4/y2GOP0dXVxTPPPMOePXvQ6/UEAgFUVeWee+7h0Ucfxev1cvDgQcbHxxkYGOBHP/oR5eXl3HzzzaSlpbF9+3a++c1v8v3vf5+GhgZ27dpFfX09NpvtTC/re+65h0ceeYTHHnuMkydPp6z4QJvMYDCwc+dONE3jySefZO/evbz66qscOnQIu91OMplkZmaGQCBAfn4+Dz74IF/5ylcoLi5elFpk1VAU7C4fZRseILtwC/MzQ8xPDzA+eJLe5r00HXoagE07/xTzsvLTKudty+p0ehRFIZVKoGmphQe7s76TSMTe/zsLD3KqzoDeaKJ0/b3klF6PfomTVlFUbE7fVT37r8nqpLDq9pUuhhAfymROw1e0FW9ODZHwDPFoCE1LoTeYMVtdGEw2CVqLS0ZRFNyZZbgzL3w4u9HsILtoKxm+KiIhP8lEFJ3eJPVXLJvZ6iav/OYL/r6qM+DyFONw5ZJffgux6MKLGKPZgcnqWrXDRcXKUxQFu9OH3ek7/8IfQm+w4M2pwe0tPT1aMoSq02OyODGZnVd1JxHxSSlY7BkU19x9wd83mh3kFF+HN3ctkeA0yUTsrHuw/ZwOTkKcTVF1ONy5VLo/d8HrMBgtZBVuJiOninBgimQigk5nxGR1YzQ7pP6JD6UoClaHl4rND13oGjBZ0iiouJWsgk2Eg9NoyQR6owWLPeOKiHFdtuC1qqp85Stf4YYbbiCRSFBdXX1mosP33HzzzXzve98jEolQXl6Ow7F0bt5t27bx3e9+l0gkQmlpKWlpSw8N37x5M//6r/9KKBSiqKjonOVcLhdf+tKXKC0t5fDhw/T09BAKhTCbzeTn57Nhwwa2bt1KSUkJiqLwV3/1V5SXldPR2UEqlTpnkkS73c6DDz5Ifn4+hw8fprW1ldnZWQwGA7m5uWzevJnrrrsOl8uFqqpomoZOp8NgMCy6QNntdu666y4KCwv5zGc+w6lTpxjoHyAYCqKqKi6ni5LSEtavX8/GjRvJzc390MkmVwtN01B1RpyeIpyeIlLJBPkVO0jPrqB+13cY7DxI9fZHlhm8Pj+jOQ2dwUQsEiAeDaJpqTM9qROJCOHAJKnE+znOjUYbltNvOI1mBx5f9aJ0A++liRFCrA56owW7pFUQVyhFUTCa7RhlWLxYQTq9EWtaJlYyV7oo4lqjKHIfFytGUVSMJjtGk9yDxeWnKAoGoxVDukyILFaAoiy8LLY4V7okH9tli3oqisLWrVvZunXrhy5TUlJCScn5Z1YtKCigoKDgvMvl5eWRl5f3oZ9nZWVx7733snXrVvx+P7FYDL3egNvtwuv1YjS+nzLh+uuvp6io6MyEiw6HA5fr/SFFTqeTnTt3sn79eiYmJohEIuh0OpxOJ5mZmZhMJoaHh5mbmyOVSmG323E4HEvmqjabzaxfv57Kykpuv/12ZmZmiMViqKqK2WzG4/HgcrlWb2/r0zRNo6/lLfrb91FQeRu5pddjMFpRdXrsrhyy8jdisqaRiEXQzkzWqAAaGhrnHQv7IexOH1ZHOnNTQ8xND+DxVaE/3Tie9w/hH+8iEY+dWd5gspORU81gx2FG+45RVHUHJrOD9/KNBGZHaT3yLHqDhbL192Fz+iSQLYQQQgghhBBCCCHEJba6u+xeBnq9Hp/Ph8/30cPf9Hr9ksHwWCzGm2+8yZGjR/D7/Xz2s59l+/btmM3mRevo7OxkeHiYRCJBQUEBXq/3IwPQJpOJ3NxccnNzL2znVoF4LEjPqTeZGm4hEQvhK9qGwWwnHJigp/lNQvNT5JZtR2+0osCZyRvnpgYIB6dR9cazAtvL43DnklWwCf9oD+3HX8Dq8JLhW0NofpyWuqeZnx5aCIufjj/rdHoK1uykr3UvQ52Haal/hvKNn8HmzCY4O0Lb0d/QduQFCqp2ULL2bsnAJ4QQQgghhBBCCCHEZXDNB68/KVVVaWtv48knn2RsbIyJiQmysrIoLy8/JzA9PDzMM888w9DQEAaDgZtuugmPx7Nkz+urhaIo+Iq3U77xATobXqH29X/E5sxEpzMSjwWZnx7Gk1NB1baHsdgzUBQFb946LDYXfS37mPcPkVt2PeUbH/xY2zWY7FRu+RyzU32MdB8l4B/GYk8nEY9gMFrx5FYTCc6hpZILPbwVBa+vmo07/piT+39G65HfMthxAKPZvjCBo3+EdF85pevux2L3nOmRLYQQQgghhBBCCCGEuHQUTdPO5GYIh8MoioLJZJK0CMukaRq1tbX8zd/8DbW1tTgcDu6880527NhBQUEBiqIwMjLCgQMH2L17N+Pj49x444383d/9HVu2bDknNcnVSNM0AjNDjA00MjncTGhujFQyjsHswOUpJjNvPZ6cagwmK6AQDkzS0/wm4wMNJBNxfEVbKaq+E/94F8G5UTLz1pOWXoCq07+3AabG2pgaacWZUUCGrxq9wUwyEWNyqInh3jpmJrpJJRPYXTlkF23BbHExO9WHw52LN6fmTEqRWDTA5HAz4wMnmJ3sIR4LYjDacGeWkZm/gYzsSgwm28odTCGEEEIIIYQQQgghriKapjE/P4/NZlsyQ4UEry+CcDjMrl27+P73v8/BgwdJpVL4fL4zObHn5uYYGRlB0zRuueUWvvGNb3DHHXdgt187k0RoWopoeJZYJICmJdHpzZitLvR606KezPFYiEjQj6YlMZmdCzPvXkgPdU0jFgsSDc2iaUmM5jRMZgeK+tG5wuOxENHQLMlkDJ3eiNnqXrKcQgghhBBCCCGEEEKICyfB68skFArR2NjIoUOHOH78OIODgwSDQQAcDgcFBQVs3LiR7du3s27dOqxWqxxjIYQQQgghhBBCCCHENUuC15eRpmnMzs4yMjLCzMwMsVgMALPZjMvlIicnB7vdLsdWCCGEEEIIIYQQQghxzTtf8FombLyIFEXB5XKdSRcihBBCCCGEEEIIIYQQ4sJcQCJhIYQQQgghhBBCCCGEEOLSkuC1EEIIIYQQQgghhBBCiFVHgtdCCCGEEEIIIYQQQgghVp1FOa9TqRTJZFImFRRCCCGEEEIIIYQQQghxyWiaRiqV+tDPzwle63Q6otEosVjskhdMCCGEEEIIIYQQQgghxLVNVdUP7UitaJqmvfeP93pdCyGEEEIIIYQQQgghhBCXg8FgWPLv5wSvhRBCCCGEEEIIIYQQQojVQCZsFEIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIII
YQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOrpvf/vb317pQiySCtDR0EYizYXFqIf4HH2tR3n7jbcZ1+WT5Taj1ykfsQKNmd5GOqYNOO3nW/bj0RLz9Lef4Ej9Eerr6jjW2ER7Vx+TszEMVjs2kx5iQxw+0IUx3YXVZOCTbj0208LLv36aN49O4s7z4bQaUS/eLl0WseA47ScO8eauYyQ82bgdFvRX2D5ccbQ5Trz9Ar99+RAzOi/edAemZR50LRlmvK+ZQ/veoH3KQXq6E6tRfjBx+aTiAUb6mjn8zps0D6s4nBk4LJf4fasWoK1uN797aS+DIRsZ3oyrst7H5/po7ZogljLjsBs/8T1KXOW0BAH/MC0NB3m3vpMwdrK99ktbb7QQfU37eeWFXbSMabgzs0kzXwP9LbQwA62HefXFVzk1lMSd5Vv+fqfizE72c6J2H8dax1HsuWQ4roFjdpVKxeaZmpzEH9LhsH3863QyOkX7sT08/+K7DIUs+PK8mK6x6qClYsxN9nGi7h3qGruImorIduve+5SQf5D6vc/z0t5WYuZs8jKtl+i6phELz9DfdpR3Dxyl12+kKD/9EvcgSxKcGabxwC5efulV3tizn4aWISKKg8xMF4bVUhdSCQL+IU7WH6D+eBcpZxHeZV23UsRCfnpajrD/4BE6JwyUFGZc5l55GolYiNHeJg4fOERj2yyZpXlYP0YhtFSMwFQfDXWHOHy4joamXvxhhbT0dCyGlWudackYsxPdHD10gLpjw7grinHozv89IVaClkowP9FLW2srp04c5fDBgxzrTFC2Jper8DFugZYiPNNH9yhkuM3LvndpqQSzYz10tHcxMDqLZnLgsOivmGdB/UoX4IPic13sev5VhnTrub94kqO79vDmW+9w9FQ7Pf0xvvDtG1hf4sZs+Oj16HRhjrz4U46V3sWnb6/Gbf1kQeREaJgj+3ax662DnGjtI0gamZnppDms6FJR5vwzRFUHuQW5GEOdNPaV8p/+dz5uh+UTnjQpeg8+w79//5e0zeYR95bxh3eV47JcGVUsPtfFO2+8yZ63D9DY0kHfsI9vVWyi1JfOtfAcupIiE0d57mc/4Jl9A1Q/ZCbnP36W6gLHec6DOH2Ne3j9td3srz9FR/cwGx/+Z8rLCsiwyw8mLr1keISG2kO8u+9djp5oprNnmLK7/x9yiyvISb+0Leeov4nXn/spP/ltIzk7wzg9mdxSk3HF3NCXJ0lfwxs88ZtO1tz1Jb7wqQ2krbqWgFgVtCgjXSc4/M4e9tc1cKqli1nzRv7wz3LYUJV9SYME8bluDrz2FN/54R7M634PgyuPz97ou+qHC8YDfdS+8Su+871d6NY8hOrM54s7cs+z30mmBprY/8ZLvPHOUZra+3Ct+QyP+rZT4bvST+4kkWCQSEzDkubEdC0ET7Q4s+NdHHrzJXbVTVB285d59IsbMX68lTA71Mqbz3yf7/ymj5q7pvEWl3NDwZVeH5YnFZ9nqKuBPbvfpv7YMZra+9FnbeU//F+3srH49JHUIgx1HuKp7/0Dr3V7uH/SRnnlp8m8yIcoMjdKc91uXn1zP0cbWhgJu7n5s/+RnTeWXrqHcC3O5EADu154hY5QDuU+I61HXuDJ2mlKt9/PH//lt3jwuhVu22gx/GPt7Hv5ZfYerKepYwS9ZzN/VnIb1R953UowP9VP/d7Xebf2GMca2xgN2dj6wLe4c0f5ZQtsxEPTdDbs5fW3DlB/5CQ9o1Fy132R9fdsJ2NZHYU0ooEJTu5/gWd+8zr1zQNMzwZJKGYyfKVsu/XTfOUPvsimAssl35dzixVleqSJ13/zInsPHqOt14/NdzflD+zEt4LBdHHt0JJRJrvf5qc/fZNE7h384dfvwWf+6LqXSiaYGW3nWO1BXnnuVdqnNNY+8D/54ucuU6EvK41EZJqWul088/RLjGd9jX/5L/dwvtCglowy3LqXF3/3LsMxJ1leNxZDnHd2v4ij7F6+9NBGroRQz6pqxYQnjvPUD39Nv6GaO++txGO1Es8tozRzF08cOUrXtIu7Qwk07XxrUrBlr+G6rUO88uqveCbxCF+8u4b0Cwpga8wNHuZXP/kJv33tXdonnWy/836+cMsGinI9OCwGUrEg0+N9HHv3dV554U26B0ZIpn8BfyRO6oKOxLn7ojfqSESChCIKer2CcgXdO1SDA19RBTmG5/lFYyND8yrBaILz/oTiE1P1BlQtRigUQVF0KOpyrkgqtvRcSgqNPPuLkzS0TJF3d5jEJ6/IQiyLorOQkV1IRWkDe17roLFxAMeWILHkpb9qqDoDOi1BJBwmpSmouqsvUpKM9FO3fx+7XzxIv3ENmzdWsjHPvNLFEquRosOS5qGgrJS0I+/Qc+oYfk8O85HEZdi0HlVJEYuE0SU1dHrdVfYSaWmKqkOnasQjIVKJ5e63gsnmJicvHSU4QEPdCUpdO4gmrvyWVnCsmTdeeJq3ezK49+Gvcvfmq+1l4oJUfIbe1uMcPHSK4YkpJkb7aGk8zNFBNw/n30NSg4+34wqqTkUhSSQcIZHSoV9VT3yXlqIasDmzKC7NobXxDU7UNZK+qZhw/OxzQkFVVZRUjHAkhqbquRR3fJ3BQrqvgCy3geGmWk7Fa1gbiF/S56Cwv4/DrzzJr14b584/+wPuu9VCabrGWP/3OdbSyog/fgm3vkyKitHiIicvG1tyiLpDjfg2Fy3juqViMDnwFZaS29vKay21nIiUUnrrpT2mi0qhN+HKLKAwr51DL5/gSKOCLidIcpnfjwXHaHz7Vzz+y1pIL+W2+69DCY1wsv4d3j7wMi1tg4Q1N3/7158m6+O9ufqEdJis6eTm+jDMNHGgzk/1Dddz5xMjvgAAIABJREFUFdxOxBUiGQvSe/AZvvv95zAWDVJ9++18tuqjTwJVp8ftK6eqqoune5roms3lrtJKzhPzvnJoCYKTp3j95ToGpyYYG5tkoq+W3+1uouCuO4kDH/WaKxmf4+QbP+bffnYAy5rbuePmDRRkuTAyy/HdP+fpH/0TjoLv8cg2x+Xaowu2apoy8flWnv7ed9k3UcPX/vRTXLcmE5NeoXjtDWQYGvjRD3bR61/++lSDm+obPoUuOcO//+xxfmv4Sx6+vQKn5eM1TUIjB3j8H/6BnzyzhynLNv7oL/+cLz5wKzVF6Zj0ZwUDtTjbt26kqvRx/v4ff0VnLEI8pS0E2j/RiaOQs/lh/t//uYbJhJctW3OwXUHjH3SWTKq3eLGNvMZPX6xjeH6lS3TtMDjX8nv/x39l/aeDZFVuI99jWVQVE6Fu6k9EKC0v
xJthQ0GHJ38tO++7jZeeeIWjbdMrUvYztBCtdafQ5RWT7/NKb/1rgGp0UbRmC7k5Iep276X22OBF3oLG7EAjPXNOfHmFZDnfr1R6ezn3ffVbFNwwSVrhBqrzL3FqhMtOY6qzjsP1J+ge6ifw7gEa77qJNbnly2jgacwNnaJ3xoInp5gc99UX2BcfpMeVWcwWbxbBgUb273kb/0V9gNUIjHXQN5HAnl1Joef9OqWzFnDLg3+Gs/Q+dN4K1le6r7JzcWk6Sy7X3/so/yPvTjR3KeurFwdrU7EJurrHiCmZVFVmoqJiT89n8y07aDu+n1dfblyRsl98ScZ6TvDGC7/m3dBOttx7GTapJYhEoiSSemx20+Wrc1qKRDxOUjPhySvH49AYad5HKKqhwQUF5RxZa7j3q/+Z3OtncBfWUOm5dq7Zis5Muq+Cm+80Eplq4MWnD7EoXKuY8JXdyB/9p39k57SR4uotOC/BITJYnBTV3Mit82McfP6XNPVe/G2cQ4sy1n+SN158k+HIDsoq83G7DGy49ff4a1cZvbN2qje4iM2NMdjXQ8S9lZq8lQgH6LE589h08w78g+/wy6dPLPN7Kma7l6ptt2MxRKl7/hecaL+kBV2SzmjDV7aV24wKQ4ee5oXGseV/WQszMdjEO+90U3Hn1/jUjnXkZTpQYjN0n9xG8Y/+ie/9poV3d79G45c+xd2l5xlufjEpemyuIq675TbGG57hR298jOCLEBeBouqwpGfj86SjebNxLSMPj6Lqsaf7yHFrBEMqFnsZW6+7ukbraak4kWgKizuPGl8h/f59+ENJCs7zvVQiROe7P+J//P0TBEv/hP/2td9nc4kTHUmC/lp+daiexqPDZL3Tw8Pb1q/6Y7Yqgtdays+7T/8bT7wV4it/+2muW+M9Jzev3mrBpFM/diNSUe1U3vAZ7j12gH/90Y/JzP6/uXtDFpZl5v1NxXp58Yff4SdPv0n3XDZf/Ytv8kdfvosSr5VFabQVA56CjTzw+99kbqCb/+85jUQqdVHeApvTK7jtgYqLsKaVomCymNHrVvvpcHVRdC6qtt9N1falP9dSM9S99AS/bCjg63+YiTfDduYzncmM0aBb4dzqKUaaXuOJnx1h05f/gMxsL9I/9NqhM1kwmwzoL/JlIz7Xwa5nf0G78VY++1D+OcFrReegZP0OStZf3G2uFlpqmsZDdXT2T6MZ9Qw0H+BQ/Ulu2lJMWcZHNwfi893sef5XnIxv4oHPF5HjvkyFFitPMWAymTEadRC9eKtNBAc4+PqzHBrO4o4vlp8TvFZUK7nl28gt33bxNngFUFQrvtLN+Eo3L72AFqK7YQ/PvtpG9uaHWVOZeeYjVW/AYDRe9GvmStFSswwNdNHeGSJjcx4FhedLe/bJxef7OPDaC7x6eJbqm+5k582bKc62X/KHOVXvIK98K/fkbMJitxPue5uB2t9C64WvU2d2U7L+1qv2frYcit6A0WLFrFcWB69RsDpz2LQjh02XvCQ6dHoTpsuQ90ZLzTE+3MzxExNQZcJkUgEFS5qPjbc8wEYgGZ2m+eArPPtyOxu/vGmFgtcLVJ0Og9l8ASk2degMVmzWlQxlKOh1Bmy2xZ2DPkoiNMf0yBDGyof4/Bd3kHsmd1saG264nfB0A6++2kpgZpjRqSRczuD1aaqqw2I2rfpA1mqTjM0z0HqAg93ZfOGzHzfd00WmxZkda2fv662se/hzlFov8R1UizM33sFbrzSx7pEvUHaB21MNNopu+Cr/61+uI5FWzqac5Z3jyXiEkRP1tM3pyNi6lfVFK3r0Ly5Fh8Vdye2fzsNgdWAzxTnY/5Pzf09LMDd8gMf/8YccHFrDP/3zl9hS4jxzXmtakuDUGEHFTUb6ZU5RdIFWwTVJY6zxtzz+k93YN3yWWzfkYzV+sFgXfrIpBi83PvQAnsk3+dlPXqV3OrTMIT0a/Yef42fP7qN7Ikz+pod45MEbKcxYInB9FrO7kge++gjb8qyoiibpMc5Q+IRd0MVFFaP70LP86CfPcbR1jMgqHA8WHKnj6R//lBf3NjF1iYdYitVoIUXSxUyTlIpPcPjVp/j5r1+nuW+W+DWWDic43Mjh4+N4113PDVtLMYd7OPzuYU61j33kkFAtPsmR3c/w86de5mSPn3hKzsZrzcU+F7WEn5P7f8cvnnyO+rYJYssda31NSzLeVcuLTz3BC3uamAlf3QctERxlsLeD/nk3eYWlFGZd+gCOakgj05dDGgO89st/4dv/9b/zg6feonVwbtnpAC5ww1gd6WRne3HazeiupPyAV4DVcDQvVxlSsSBzk8OMzS7U2A9WJS0VYqh9P089/mN21Q8QuQwp2S4V5cx/VrYQysc8XxWDnZyy67n/vhvPClwv0JnMODOzcJtNWOxevBnXzoiJK10yHqC34WX+/TuPc7Q/tcI55ePMjp7k2cf+iZ/v6b/0ET8twdx4E8/94B954s0+lE+wPUXVY/dUcvsDD3L3jmqcy7r1a8Qi85w8dIx5nZnSzddTcKmD9ZeVgs6Yhi/Xh8dtx7DMa04iFqTxtZ/y7IERym9+mDvWnz1RsIrZXsPX/vZf+P53/zd/cFfeaggMn9eK97zWkoO8/ounONjt5Jv/eQuZaebzHjgtOUfn8WOcaO0noKWRV1TB2vXleGzGJXqKKjjybub2TXb+26tP8OrdN5NzTxnO8+Qf0JL97H72JU70+Ylpbm687z7W5DoxnvceouCt/BSP/rmb3Ow0lprbQEvO09dykqa2Xsb8IVSjg+zCNWzYsIZMp2nJ4Hh0fojmxma0rK1UFbnOmYE4Fhyj7WQj04ZqNlf70IV6OFp/nO7RMPaMYjZt30yBx7ZkbxwtGaCv9RStHT2Mz0GaJ5uC4goqSrKwmS5s5tHo/BBNDY2094wSSlkprL6OvHCcjw7la8yPd9HS2knvwCihhJmsoho2byrH6zAv8bsuLN/a1klP/+nlC6vZvLliieU1glM9nGgYJL16PQXpCkMtxznZ2oc/rMOTX8PWLWvIclmWOPYagYluWls76OkfI5gwkllYzeZNlXjTzIuWj8wOcKqhF1tpDYUZGl3HaznV5cdTcQPXrSvAuWQPgRhDHU30jsy8n+tNMZFbXkOeK0JvWw8Ts+Gz8qcrpGVXUFmchcOiY6q/iY7+SUKxFDprNhuqi3GlLbw90xIBBjub6PY7qakuwuM0o6UCtB9+kR9850e8drgbS1U3R2v3Ex514clfS1mBZ3HeJC3B1EALDY0tDIwHMDhyWb9tG+V5bszLHMmwIM7UYDstrV0MjEwRTuhweotZv3kDhVlpp8+vFP6Bep77yXd54vkD9M+W0dZYz7uGKVzOPKorC0hPM6OQYHqojebeGEUVZbh04zTUH6dv0siabdupKs7Eevo80ZJBhrtbaW7pZHgyQEpnxZtbxtr1VeR67GdmXQ9NdtPWNYQ/uNBHx+wupKosH7fDwOxIOx29o8ydDhjozJlUrCkm0207qx5oBCa7aW5qpXtgiphixVdQgi/DSVZeNm6HjWUfLi3KeH8n7R2
[base64-encoded PNG image data elided]">
<br>
<center><figcaption><b>Figure 3.</b> DCNN without Atrous Convolution & with Atrous Convolution</figcaption></center>
</figure>_____no_output_____Figure (a) represents a DCNN without atrous convolution. The network is constructed with standard convolution and pooling operations, which make the output feature map smaller after each block. This type of network is very useful for capturing long-range information. For example, in fig. (a) the final output of block 7 is a very low-resolution feature map (256 times smaller than the original image) that summarizes the features of the whole image. However, such a low-resolution feature map is not useful for semantic segmentation tasks, which require detailed spatial information.
By using atrous convolution, as seen in figure (b), we can preserve the spatial resolution and build a deeper network that captures features at each scale. This is done by increasing the dilation rate at each block. As a result, the field of view of the filter becomes wider, which is required for better semantic segmentation results.
With each distinct dilation rate r, the filter has a different field of view. As a result, we can compute features for objects at multiple scales in fully convolutional networks without reducing the size of the feature maps, a reduction that standard deep convolutional networks incur through strided convolutions and pooling operations.
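Concretely, a $k \times k$ kernel with dilation rate $r$ has an effective extent of $k + (k-1)(r-1)$ pixels per side, so a $3 \times 3$ kernel with $r = 6$ covers a $13 \times 13$ region of the input while still using only 9 weights.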
**Note**: DeepLabV3 uses atrous convolution with rates 6, 12 & 18
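To see this in code, here is a minimal sketch (TensorFlow/Keras is assumed here purely for illustration; the framework choice is not dictated by this notebook). The same 3 x 3 convolution is applied with each of the rates above, and with `padding='same'` every rate preserves the spatial resolution of the input while only the field of view changes._____no_output_____
<code>
# Minimal sketch (TensorFlow/Keras assumed): a 3x3 convolution at the
# dilation rates used by DeepLabV3. With padding='same', every rate keeps
# the spatial size of the input; only the field of view changes.
import tensorflow as tf

x = tf.random.normal((1, 64, 64, 256))  # dummy feature map, NHWC layout

for rate in (1, 6, 12, 18):  # rate 1 is an ordinary convolution
    conv = tf.keras.layers.Conv2D(filters=256, kernel_size=3,
                                  padding="same", dilation_rate=rate)
    y = conv(x)
    # effective extent of a 3x3 kernel with dilation r is 2*r + 1
    print(f"rate={rate:2d} -> output {y.shape}, field of view {2*rate + 1}x{2*rate + 1}")_____no_output_____
</code>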
_____no_output_____## Atrous Spatial Pyramid Pooling (ASPP)_____no_output_____ASPP is used to obtain multi-scale context information; the final prediction is obtained by up-sampling its result back to the input resolution. In the ASPP module, four parallel atrous convolutions with different atrous rates are applied on top of the feature map extracted from the backbone, so that objects can be segmented at different scales. Image-level features are also included to incorporate global context information, by applying global average pooling to the last feature map of the backbone. After all of these operations are applied in parallel, their results are concatenated along the channel axis and a 1 x 1 convolution is applied to produce the output.
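This parallel structure is straightforward to express in code. Below is a minimal sketch (again assuming TensorFlow/Keras for illustration, and omitting the batch normalization and activations a full implementation would include; the `aspp` function and its dummy inputs are illustrative, not part of this notebook)._____no_output_____
<code>
# Minimal ASPP sketch (TensorFlow/Keras assumed; batch norm and activations omitted).
import tensorflow as tf
from tensorflow.keras import layers

def aspp(features, filters=256, rates=(6, 12, 18)):
    size = tf.shape(features)[1:3]  # spatial size of the incoming feature map
    # a 1x1 branch plus one 3x3 atrous branch per rate, all applied in parallel
    branches = [layers.Conv2D(filters, 1, padding="same")(features)]
    for r in rates:
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=r)(features))
    # image-level branch: global average pooling, 1x1 conv, upsample back
    pooled = layers.GlobalAveragePooling2D(keepdims=True)(features)
    pooled = layers.Conv2D(filters, 1)(pooled)
    branches.append(tf.image.resize(pooled, size))
    # concatenate along the channel axis and fuse with a 1x1 convolution
    return layers.Conv2D(filters, 1)(layers.Concatenate(axis=-1)(branches))

feats = tf.random.normal((1, 64, 64, 2048))  # dummy backbone output
print(aspp(feats).shape)  # -> (1, 64, 64, 256)_____no_output_____
</code>
_____no_output_____<figure>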
EI0lATXl5YjjXcknCdgCsIVMhfFILCkdj0wnwgeQKEJF0zQ0XUPXNQJ+DwG/l95wmGTCQFU0FMVCUVQUIdF0BbdLwzYNEv0WVsJw0ggVFYSNiSQRjdIZNejrN4mZkIibTgiP0UdY78Xn9+H1eLBsG6TESBgIVUVJlV476ekqSSOOR/fg9XjxeHy4gkW0X+6ip6+bpJEEzUNeSTndMZuAH2KxOO0dHYBAVTQ0VSEvJ5v+SJRoNIZLd2GpKorqIpaw0G2NJB5Ul0qgOIiu+PAEgmhuFVTnOlq2JOj3E+vqQ0HF69Ixk3EUVwhVVZ0gonRmkpAIxY2UtqNpFc6DuvIlFohut5sVK1Zw9913Z1QYTkl5gqNHj/CrX/2KDRs2UF1dzdNPP31DjdI/D1JK3n77bX7yk58Qi8VYtGgRH3zwwfBO0TC+FFpbWzly5AiJRIIJEyaQk5PzmYv19vZ2XnnlFbZv385dd93FihUrKC8rxzBN9uzZzS9+8QvWrFnDyJEjWbBgAYqisGXLlszv5qmnnmLMmDEkk0neeOMN1qxZw7p166isrKSsrGzQ+aLRKBs2bGDNmjUUFhZy//33M3r0aDRNIxgMZtQqaQ+6YfzxI+172dfXx/z58ykuLv7M8djV1cUbb7zBli1buOWWW3jssceoqanBtm0OHDjAc889x/r166mqqmLZsmXouk5tbS2vvvoqoVCIp556ismTJ2NZFlu2bOHll1/m1VdfpaqqKrPQvxqxWIzNmzfz0ksv4XK5WLp0KZNumkRWVtZn3g8uX27mo48+wu12U1VVNay6/CNAd3eYEydO0NnZyZw5czKqtaE2dUzTZO/evfzmN78hFouxcuVK5s2bh8fjoa2tjbVr17JlyxYKCgoYMWIEFeUV7Nu/j3Xr1mHbdkY5LqWktraWX//616xfv57q6mpmzZo16HzxeJxt27bx/PPP09/fz6OPPsrMmTMJBAIEAgFKS0sHHdPS0kJ9fT1dXV3MnDlzyLl2GF8d9PX1cerUKa5cucLYsWOprq7+zDkwEomwbds2XnvtNYqLi3nmmWeYNGkSQgjOnTvHSy+9xObNmyktLaW8vByfz8exY8f4zW9+Qzgc5qmnnmLevHkoisL+/ft5/vnn2bBhAzU1NSxcuHDQ+ZLJJHv27OFXv/oVLS0tPP3009x2222ZzcN0ufimTZvwer088MADxOPxr+WmjRCCfc0m33urFSEYtO643Gsw+ht8qc+u6zomOhtP9nKmbbBPaF/CgpwAk/1frhxf0zWaYh7+X20HBYGBBI2U0NJnIYssdG2YvBnG58O2bZqbm9mwYQOvvfYap06dyii/rw4J/bLo6+tj27ZtrH55NXv37aWjo4Np06alQhG/OEzT5NzZc/z0Zz9ly5YtRKNRJk6cyLRp0/B4PLS3t3P06FHWrFnDoUOH+LM/+zMefvjh4ZDFYdwwXBfyMhQKYSTjmV1OIQQipZbQNJ2ucBTTMrGMdL64JG1freAka0sFROqftDLStiWKAsIGhEDRHG9Gv89DKODD53UTi/YT6e0jGUsgJPgDHoKBAKGgD7/Pg22b9CdNXJoLU035YeouVE2QMEx6wj1Y3kLcIY1owsLjjWPEYiiKSn8iTri7FyNgoSiKU5ZtWWDbqIqKYRhomoqCgmEaeD0Kbo+brJwi1EAZ/clupB0nHm3FsvLIyc+jr6udpuZ2dJcPfzAnlWyu0tsbIb+wiNKSUiL93TF7ZgAAIABJREFU/RTmZhM1TITiJtrXj2mZuHU3Hm8AzavgUl0gFBRVkI4IVxXwed3Yto2mqLjcblRboGsqmup8H4iUr2iKDHZ8xAWWtLEsm2usHHe+Q0WhoqKCyZMnD0r0nDRpIt3d3fzkJz9h+/btLFmyZEDwyZkzZ3j33Xc5evQo3d3duN1uampquPPOOwekRQKZ9u+9916mvcvlyrRPL0Q+C1JKOjs7mT59OgsWLEDXdQ4dOjSsuBzG56KnpydTxqWqKrZt09vby4kTJzh27Bjz589nyZIln5tEe/LkSfbs2UN+fv6gncn8/DwOHTrEG2+8wf79+5k2bRq2bVNbW0skEmHRokXMnTuXrKysTH8NDQ0ZZfGnE0jj8TjvvPMOL774Irm5ufz5n/85t91225+sd9bXDZFIhK1bt9LV1YWmaZkAqdOnT3P06FFmzJjBfffdR0lJyWf28dFHH7F79258Ph/33nsvt99+e2YRXVBQwOnTp1m9ejX79+9n9uzZeDwedu3aRXt7O0uWLGHhwoXk5eUBTgnnpUuXaGpqIhwOD0hCB2fR/v777/PCCy+gKApPP/00d95554Dx/Gm0t3ewadNGDh48yKxZs5g1a9Z1USEM4/ogrRCzbRtd1zOVDufOnePIkSOMHTuW5cuXf26ieDjcw759+2hsbGTp0qUsWbKEESNGAFBVVUVvTy9Hjhxl//79nD17llAoxP79+2lsbOSBBx7grrvuysx9Xq+Xy5cvc+zYMbq6ugYFjyWTSXbt2sXzzz9PJBLhySefZMmSJeTm5g5oZxgGJ0+eZP/+/TQ3N3P69GnOnDnDzTffzEMPPTQkwTmM3z9isRgHDtTz3HPP4XK5MqFk58+f5/Dhw5SWlvLII48wevToz+zjypUr7N69m/7+fu68807uvvvuzD2ytLSU1tZWjh49yt69e/nmN79JeXk5hw4d4syZM8yfP5977703o0oPBoNcuXKF2tpaOjs7B40/0zQ5ePAgzz33HE1NTTz66KMsX76c/LxPnhlaW1t5++23uXDhAitWrGDWrFvYtWvnDbh6nwfh+GDdyDMIQXn5N5g2bw26ZjFpgmDKVHVQm4oRlV+qtDUUCrHsgeVU1YxiKO5TSkkwlMWUyZO/1PsfO34CP/x//05ra9uAMCFpw8fnbbZtl5RXlOLxDm+2/Wng04Psi68nW1pa+NWvfsUbb7xBRUUF99xzD1u3br0u7yoSifLuu+/yox/9CIC77rqL7du3X3M/UkquXLnC86ue5/XXX6ekpIS/+7u/49Zbb81Ub8ViMRoaGvjtb3/Lhg0beO655ygrK+OOO+64Lp9lGMP4NK4LeampCpoqULVUmZACQiioQkHTdFpb2wl395ObnYuu6xi25ZSGX/0jv6osXMgUqQkIKVEEqIqGy+XC5/eSHfIT8vtQFElXR5RwdxgpJX6/F5/fT052CI/bhZQWsXgM25Koqo7P4yErOwdd17EMG8uGltZuqkdO4WJjAy5dw1IVkkJQM2oCsWSci40NGEkToTjkno0EGyzTxDYtpBAkkiZer4qqaRhWkoQJBaF8PP4A0Y4EGhYhnxchJb29vbR2X6K6qpj8wlJisSi2ESeeMOjti1JWWkZ/PE7SSNDRE8aUNlrARSIZRUXi0t2oqlNqj1BQFUc1adsSaYOuu5BSQRVuvC4PbtWbIS4zN/J0+b4tURTN+UwCbClTvpfXb6c3FApx06SbCAQCtLa2EolEsG0bRVGoq6vjpz/9KSdPnmTEiBEUFRURiUR488032bZtO3/xF89y//33EwqFkFJSV1fHz37
2M06ePEl5eTnFxcWZ9tu3b+eZZ57h/vvv/8xFsaIo3H333SxatIjCwkJOnjyJoijXZYdrGF9fdHd3s2HDBjweT4Z0TyaTxONxqqurmTx5MqWlpQNKXq+GbducPXuW1tZWZs2aRWVl5YCSitzcXMaNG8fmzZv5+OOP6e3tpbe3l4aGBgKBANXV1QNI/FGjRvG3f/u3JBKJQR6bpmny/vvvZxRu3/nOd5g/f/6wau1rhHA4zFtvvcXWrVsz49EwDGKxGOXl5UyaNIkRI0Z85q63lJILFy5w+fJlqqqqqKmpGWAdkJWVxZgxY/D7/TQ0NGRI0gsXLqDrOtXV1QM2qSorK/n+9/+SaDRCSUkJqvrJY4VpmtTV1bF27Vri8ThPP/0099x9z+f6wqVDUtauXUt1dTWPPPIIY8aMGQ5U+wqhr6+PDz74gD179mTGoGmaxGIxCgsLmTBhwqBx9Wl0dXVy4cIFFEVhzJgxGbsMcAjxEZUjqKwcQX19PS0tLXR2dnL+/HkAqqurBxCPpaWlfOc73yEcDlNUVDRgLrYsiwMHDrBhwwZaW1t59NFHuf/++zPk+9VIJpPU19fz3//93zQ1NQGOl+w999zDpEk3DZfCfUXQ399PXV0dR44cyYw/y7KIxWJkZ2cza9YsRo0a9ZkbHlJKWltb+eijj8jNzWX8+PED7rE+n4+amhoKCwtpbm7m8uXLBINBzp8/TzKZpLq6moKCgkz7goICHnvsMRYtWkR+fv6AcWLbFidOnGDjxo2cO3eO5cuX88gjj1BUVJR51O7v72fXrl2ZcvHFixcTCv3+7tlSOsGmDm78PBsMllNTswyXy+aWOQp3LRpMXn7Ziihd1xk9ejQ1NTWfKUxQFGWAj+61ICcnl7nzbh+0brAsOHjQ4nyjTUG+OuA+eD1g29DTIzl7zubyZUkiAX6/oKJcMGqUIBi8tnVbur9z52ya0v35BBUVX7y/eByammwuXJB0hyWmCR435OcLqqoUysoEfxKFbSL9ISXIL76eTFuaPfzww9x+++3s37+fDz/8cFA7KSXt7e28//779PT0MHfu3AFe0vF4nGPHjrFr1y4qKyu56667ME2DeDzOtGnTWLZsGYZhcODAgWv+aLFYjPr6ejZu3EhWVhY/+MEPePDBBwets6uqqigsLCQej3Py5EmuXLmCaZpomoZhGHz00Uds3ryZEydO0NPTg9frZeTIkSxcuJDZs2dn5sy0z/+VK1eYM2cObW1tbN68mYsXL6Fpjuf/kiVLmDFjBm1tbbzzzjuYpsndd9+d2fxMI5FIcOTIEXbv3s3o0aNZtGjRZ67ThvHHhes0uyogVARKKmnaCZBRpYppQkdnH27NT19vH36/D7fHS9JKplK65VVUmQApnCTxVJS4IpwbjaIIXJqG3+0m4PUiFEF3Vzed7V0IqZBfkE8gGMDtcqEKAdgkYiaJWBLTMsEGTXGRm6vj8frx5gZxuXQ6unooLq+hs7uLnjYTaSbQhKC4vILLLS24vH5i0V6ElSpvz5S7Q3+sH2IQygLN5UVKhb6+Pro6usirUAll59HbGsBIRujtjWM3XKS56SIBt4EtwefxkkzESBoWuq4RiUTQNIXS0lLCnR3E4/30SxtVAVUDRVWxpUATmuMXikilnQtsiROGpKioQgfbhVv14dZczrwqHK9LJw1dfhKAxCdEMcJOtbs+oyKNWDyGZVm4XC50XUcIwZUrV/j1C79mz549PProo9x3330UFhaSSCT48MMP+fGPf8yqVasYO3Ys06dPp729nRdffJE9e/bw8MMPc/9991NQWIBhGGzfvp2f/OQnvPDCC4wdO5YZM2Z85nv5Y0iDHsZXC7m5udx3332MHTs280CdiCdovtLMsWPHWL9+PV1dXaxcuZKamppBx9u2TUdHB/F4nPz8/EF+WKqqkpubh8fjobOzk1gsRjgcpre3F5/PRyAQGPAg7/F4GDNmTOb/e8I9gDOX7t69m5aWFpqamvje977HvHnzyMrK+lqWnv2pIjs7m7vuuotp06ZlFmDJRJKW1hZOnDjB5s2bCYfDrFy5kokTJw463rIsuru76e/vJzc3dxCxraoq2dnZ+Hw+uru7iUQiKIpCT08Pbrd70Hh0u92MHPnJuE9bEUgpOXToEHv27OHjjz/OLO5z83KHHI9Syky55qZNm6ipqcmohodVl18tBINBbr/9dmbPnp1ZDBhJg7b2Nk6ePMn27dvp6elh5cqVzJgxY0jiORKJEA6Hcbvd5OTkoOsDicFAIEBWVhaJRIJoNEo0EiUcDqPrOsFgcAD54HK5qKyszCg9o9Eo4IypkydP0tDQwPHjx1myZAn33nsvhYWFQ45Bt9vNzTffjBCC1tZWzp8/n1Ehm6bJAw888DsDWYZx4+Hz+bjllltYuHBhZtFrGAadnZ2cPn2affv20dvby8qVK/nGN74x6HhpO2r1np4ecnJyyM0dOCcpikIwECQrK4vGxka6u7uJxWJ0dXWhKAqhUAhdG+jPW1ZWlrEViMWccuV0SN9LL73EwYMHmTt3Lg8++CClpaWZ89m2zZkzZ9i4cSMej+d3KpZvDGSqJEsgbrDyEnDWdoqGqoKuKbhc15fhctaMN+5zqKlw1qthWaDrFqpiD6n4/L/AtuFys6T2Q4sz5yTRiPOapkmyQjBxomDePJW8vC92YtuG5mbJh7UWZ89KIlf1FwrBxAmCeber5H9Gf7YN7e2SPXttTp20CfdAMum8rqrg8UgKCiTTpgpuvlnlMyxgvyZQEKTGgrw28rKoqIiVK1cSCATQdZ2jR48O2S5N5nd3d/PCCy9w7tw5fvjDH1JQUIBt2zQ0NPCzn/2Mc+fO8f3vfz+11nZxxx13MGfOHIqLizl48OCX+nQ9PT3U1dXR1dXF0qVLWbx48ZACIV3XGT9+PD/84Q/p6uqivLwcRVFIJBLs27ePH/3oR5w7d47KykqKi4vp7e1l48aN1NbWsnLlSh555BFcLhfRaJRt27axbds2zp49y4ULFwgEAoRCzubR7t27OX36NP/wD/9AdnY2e/bs4cCBA/h8Ph577LEB83h3dzevvvoqH3zwAd/97neHN8C/Rrgu5KUUqXJjRX7iXyJAkYL2jh4UVAL+AJaVIB6PE4vFyMnLwbAMh0xTUhSa+IS0hNTLEmxsVEVBVQVet0OARSJ9tDS3YiZMcrKzCWaF8Hg9TjK5aWAkjdTOmECgYKZKxCUq/qCf0tIReH1uJkweT8KMkldQQFtTANXlxjJjNJw/S3dfDCkdFYBpGpimhbRtbAmmZZJIWkgJprQwTROvy4NHtzGSUdpbGsjOzYOaKbQ1BzFtQVPjORQS5OflYxoJEv39GMkkQhG4NBeKEMTjUXJz8yguKSYSj6FJ6IsYqKqBYRn09ETIDmYhXG7AxpZW6kIJbNvZZdRUHStpZcKQ0mX4MvVwoiCQ6VR327lCUjjl/NebuGxpaWHr1q2Ew2HuvPNOsrOzURSFw4cPs3ffXqqqqnjwwQeZPn16ZjGSm5ubMS2vr69n3LhxTvu9exkxYgQPPvggM2
bMyLTPyclh165dbN26NdN+GMO4XggEAsyfP58FCxZkFkqWZRGNRjl16hT/9V//xeuvv05lZaVTBvap35BlWSQSCaSUmQCST8PlcgKzDMOZt9J/qqr6hW+4/f39mZIT0zQ5ffo0XV1dFBQUDJOXXyP4/X7mzJnD/fffP8AvLV22+8tf/pLNmzdTUlJCcXHxIAWmbdskEgksy0LTtCEVLrquo6oqpmliWRa2bWOaZmbR9kXGUywWY+fOnaiqY7Fy7tw5WltbByzcr35Phw4d4n/+53/YsWMHM2bMYOXKlb/TCmQYfxh4vV5mzJjBo48+miGWbdvOlO6++OKLbN26lby8PMrLy4e0MDAMg2Qyiao6Xuaf9r1Lvy6lE+BoWiaGYaAoSmrMfhFlUJwDBw6g6zrJZJILFy7Q1NREZWXlkONe0zTGjRtHdXU1yWSSrq4u3n//fV588cWMx/BQfobD+P3C4/Ew+abJPPLIIwN81mOxGBcvXuSVV15h06ZNBIPBjCLoatjSJplMpqyftCFVeKqmOlVaV92P0+NP07Qv9KycTCY5fvw4H330EYlEgsbGRi5cuMCoUaMy4y+tHmpoaGDFihXcfPPNqTWH+Tt6v55IKS+FAP4UpHJ/XOjrkxw5YnPosERVYdJEQV6+oLFBcu5jiXlUUlhoM3u2yhd5XOzrkxw5mupPccjP/HxBY6Pk3EeSI6n+5swZur9wWLJ7t83uPTbxBJSWwOjRgoBfEA47fZw/L4lGJZoGt9yi8iWFrn8EEAOVl3xx8tLr9WYEDz09PZ/bNisrizlz5rB9+3bee+89pk6dymOPPZaxtdq5cycLFixg7ty5mfksbavyZecS27bp7u7m2LFjmefOqxXnn4bL5Rpg1WHbNpcvX2bVqlUcOnSIhx9+mEceeYRQKEQ0GmXHjh38+Mc/5qWXXmLSpElMnz4d27bp6+vj3LlzBINBnnjiCb7xjW/gdru5cOECP/nJT9izZw+1tbU89thjjBkzho0bN1JXV8eyZcsym/GmaXLp0iXq6uoQQjBx4sThfIuvEa7LdKKqGkJRnfAdRTgJ1kJBqgqXL7fi0b3oLgXFdqGoGslknI6OToIhP6qqYFo2NgJNqNhSoioCYdkgQVEEmqbicml4vW7cHhdGMklPVxi3rhMs8uP2eNF1HU110sktIZGAoqkohuMpqQhBIpmgvb2V3qiPZNLANEoJZGfhFv0EfR6Ky6oIK4K+rg7C4T6SloVtGrjdOqaVxLRtTFs6RCDOgk4gMJIGkUiEZluSnxck0NNKoPM0ilWM3+8mOzuH/kg3eVka/oAXr18lFo+gCidEB+moEbFtDMMgGAoycdJkFNVFd3+M1s4IbS1NRCNR+voieF2+VDiRQDrR7SliUkERCh63m3gsiWlKTFUiJKipayIUZ58IqeBYiTpqWVs4/21b4losOwAyCbWJRCKzULZtm3A4zOHDh6mrq6OsrIx7772XvLw8bNvmxIkTdHV1MXfuXMrKygY8PObn5zN+/Hi2bt3K2bNniUajnDx5kq6uLm677bbf2b6/v384QXkY1w2KouDxeAgGgwOIoJycHPLz89m5cydHjhzh8OHDLFq0aFAAiaqqTlCYECSTyUHlRukydMuycLvcqKqK2+1G0zSSySSmaaaCzn53gvTYsWNZvHgxu3btora2ljFjxpCbm+uUqA3jawEhRGY8Xq1IzM7OJj8/n8OHD7Nv3z6OHDlCW1sbFRUVA45XFCVDohuGMejBNl2GblkWPp8vQ3Dquk5fXx+GYXyh8QhQU1PD4sWLOX36NAcPHuS3v/1tJoAlDdu2OXzoMD//+c/Zv38/ixYt4oknnmDixInD8/hXFEKIjAr36nLb7Oxs8vLyOHv2LHV1dRw/fpzLly8PSV6m0+TTpJAtbdSriBPTNDFNM6M6Sbc3TTM1Bm2+CNFSVlbGkiVLaGtrY8eOHbz22muUlpYyatSoIdu7XK7MJlVubh6WZbF//3527tzJ+fPnh8nLrwCEEOguF4FAYIByPCsri7y8PC5fvswHH3zAiRMnaGxsHEReCiGcNUOKJDQMY8Dfp20Q0uSmy+VC0zTcbjeWZWXKPX8XpJQUFhayZMkSDMNgy5YtvPLKK1RUVDBx4kQMw+Do0aO8/fbbXLx4kQ8++IBTp05lSuEvXrzIxYsX6enp4Z/+6Z+45ZZbWLJkyQ0IjrIz4oYb7Xk5jGtHPO4QjqEQVFUKFixQCYbgbJGko9MiEoErLZLmZslHH9n0x6CsVDB+vILLBdGo5MxZSfNlSXY2FBQIp78gVI5w+svKgrNnJR0dFtEotLZKDAM+7T6TTMLFiw75GY/DmNGChXcolBQLdB0SCRg9WvJhrU1bm6StTRKPSwIBgWHAlSuSEydtWlskSQN8XiivEEwYr1BQ4DxTdHVJjh23SSSgolzgdsORozbdXRAIwvhxCuPGKZgmnD7jlNHn5MBNkxSyskTmmp0/b/PxeedzTpyokJ9/AzbxxUDyUkrzemuAAGdjbcyYMTz22GP88z//M+vXr2f8+PH09vby29/+loqKCp566qnrOjfYtk1PTw/t7e2ZjaBrUS8mk0lOnTrF7t27qays5NFHH2XKlCkO52DbBAIB9u3dx9ZtW9mxYwfTp08HSOWlaIwcOZIHHnggM3/n5ORwyy23sG/fPj7++GNcLhczZsygoKCA48eP09DQwE033QRAPOaU0jc3NzN//vwhgySH8ceL60NeKhoi5ZMiUBCK83u2bEm4pwdF9WLaNigKikvBo/lJJGL09UVxuVz4g34M0ymlFtIZ1GnPTImNpqoEfD5ysrPRNI3+SBRN1cjKycHldqGoqlPtLJ0fm0yRgIaRxLQcdaQtnTJpZyffpqurDTOZIBAMUFgcx+3JpmZEIT1+lb5wLm0dHXR1ddBn9NPf308kGsU0LSzL8YVRUkShmiL/pO3s+ra0JjAScfr7egnmXMQfzMEyTJLxKLpLx0h6iMdMwEaoDmGo66kEdU1zStNVlfyifMYJheOnTtPW2eMQHwkDTdWxLRtb2ghbpErGU8RkioD0eDx09/Rg2Y4yU0NzPC8dbSUoaV9RiRDSeQgTAkUIlC/x4GIYBps3b+bDDz8csKBNJpPYts306dN58skn+cY3voHH48E0TVpbWzEMg6KiokHKIE3TKCgoQNM02traiMViX7h9e3s7yWRyeNE7jN8LlFQwmWVZxOPxIf1TFUWhsLAQr9dLa2trpqQxDdu2M+O8sKgQv9+PlJKsrCxaWloyISjpBXUymeT06dP09PQwcuTIzFj3er0sWrSIxx57jIkTJ/Jv//ZvmQToxYsXDyvY/gSQHo9pdeVQO+6KopCXl0cgEKCzs5Pe3t4BZGTa5iASiVBdXU0wGETXdXJycmhqaqK7u3vAHJv2M+po76CyqjJDJrjdbubOncvjjz9OQ0MD//Ef/8GmTZuoqqziWw99K1N+e+bMGf7n+f/h4MGDLF26lG9/+9uMHDly2JvojxTpks30GPw0MZRGMBgkJyeHeDyeCdm5moyPRCJ0dXVlSNJgM
Ehubi7JZDITipJub5omDQ0NNDc3U1ZWlvGzdLvdzJo1i8cff5xwOEw4HOb999+nsrKSp59+mry8PBoaGnjzzbfo6Ghn8eLFmbJxcJ6p3G43Ho8noxQdxlcbjnWVktkUHOo7S5d+5+Tk0NvbS0dHx4A5UEpJT08PXV1dBIPBjN1LXl4epmnR2dlJPB7PEPemaXL58uUMUZom610uFxMmTOCJJ57I9FlXV8eIESP43ve+R3Z2Nr29vfT09NDX15fxkE0jkUgQiUTo7e3lvXffw7Is5s2bd/3JS2mBNHHkpF9bidwfLbKzBbfPU7lllsTvd1SSUoLP74TaKgq4dIc87OmF/fttiooEHg9UVyucPy/54AOL/n64eabilJnPVZl1s8TnFxSkSD2fLxWSK0DTxJCqy3hccqlJ0tUFWVkwZYpg9Cglo6z0+8HnE+TkQH8McnMEHo8gkYCzZ222bbe5ckUipUOMxuNw+oyksVEy/3aVigpBb6/k8GFJV5ekvNwhPc+fd3w5dR2am511eHW1oKvTUYHm5kJOjsiQl319kvp6m1OnJaNHC4Zw0LlOUECkn1UkyBt3j/D5fMyZM4fly5fz61//mv/8z/9EURR6e3v5q7/6K6ZPn35dS6PTc2gikcDj8QzYqPwiiMfjfPzxx4TDYW677TZGjhyZmd8URSEnJ4dJN01iy7tbOHv27KDPOnHixAHe1Lquk5eXh6IoRKNRhBBUV1czbdo0duzYwYEDB5g0aRICQbinh127dqEoCnPnzh22e/ma4brcpRxfyvSAdPwmk8kYZlJSWVlM46VO2to6yMnLxuNxg5D4vH6Sqk48nqCnJ0woKwCW49soU/0pmoqm63hcOh6vF01XkdJG01WycrMd9aGqpCoeLGzTImklnfRyaTsdpRZyhmmBDYYlUU0Dr9tNNNqPaUkSRgOa5hgsS6lg2xYlRdl4XJKP+sNE+vuJJRJIW+A4dDrl8bqqogiHrFWEQFdUbMumOxwhETMIRhOoahu6SycQ8KOozq6VaSbwuN14PCputwuRKul2FAgO4aq53BQWl1Lc3c3pcx/RG+4jEo3h8XgxDEeRoGgOKakIJy0cJEIR6C4dKZ2dVEVRQEqEooKwUztCAmyZ8e+07dS1AoQirrnEVNd17rzzTqZOnYrb7XGsAxQFv99PcXExFRUVmdCRNNHzu8po3W43iqJkSnu+aPt0u2EM40YjrcjZu3cvhmFQWVk5JEEohGDChAmUlJRw7tw5GhoaGDlyZIaE7+jo4MSJE1iWxbhx48jOziYUClFTU8O5c+c4deoUt9xySybNvKmpiV/96lc0NTXxl3/5l0xOpWaqqkpeXh45OTnMnz+fCxcu8Nxzz/HSSy9RWlrKLbfcMkwIfY1hWRZHjhxh165dRKNRysvLh3xgUxSFUaNGUVFRQWNjIx9//DE33XRThgjq7u7m9OnT9Pf3M3r0aAoKCnC73dTU1FBfX5+xI0gnL7e0tPDSSy9x/Phx/uzP/izjMaeqasZPrqSkhIceeohf/OIXrF23lrLyMhYuXEh/fz8bN25k586dGaJz1KhRXzpMYRh/WNi2zenTp6mrq6O7u5vbbrstM299Gvn5+YwaNYra2lpOnz5Na2srVVVVgEMGffzRx5w/f56ysjIqKirIy8tj5MiRGS+stra2jDdge3s769evZ8eOHTz88MMsXrwYcMZ6VlYW+fn51NTUsGLFCn784x+zYcMGRowYwdKlSzEMgxMnjvPBBx+gquqAMBbTNDl//jxnz57F4/EMCKoaxlcPtm1z4cIFPvzwQ1pbWxk1atSQVQdCCEpLSxk7dixbtmzhxIkTLFiwILPxEolEOHfuHG1tbcyaNYsRI0aQlZXNqFGj0HWNM2fO0NzcnCmRDIfDbNq0iTfffJN7772XRx55BEh5ZwZDFBQUUFBQwMMPP0xTUxMbN26ksrKSFStWcNttt/HTn/4045N59Wepr69n9erVlJWVZTyMb4hvu51AWjGEoiMjjHhRAAAgAElEQVTUayMohnHj4XZDaaljDxaNOiE7ra2OmrKvD0pLBGPGCHJzBePGCi5cEFy8KKk/YGPbcOCgTWsrjKwRTJyokJ3tkIsD+muTnD3j9FdSDGPGiCFLvWMx6OiQWBaEQlBWNridxwOVlSlBk1NUSEuLQzJeuCApK4W5c5330dAo2VFnc+KkJBS0yM9XkRJiMYcglVIydoxg2TKFixclBw9JLl+WnDlrU12tUlDghAJ1dMCli5JxY5330NkpaWh0QoRycwWh0I2xThJCQ6gp9bc0kVYYqPjcY778uQT5+fksXbqUU6dOsXnzZrKzs1myZAmLFy++Id7gaZuWdIXEtcAwDNrb27Ftm/z8gkHvT9d18vPzEUIQDocH/J2maYRCoQGl3ukqjPQGPUBuTi5z5sxh27Zt7N69mwcffBCv18ulSxc5fPgwFRUV3HrrrcMl418zXJcVgo2FSPldQhIjESMRj2ElITfLhddbTHNbL52dHQQDAXw+HwIVl+5GKCqmFSMaieJze3G73Q6PJgQul4bP4yYY8BMMBBCAaRrYUjolHLoL0r6YhuO9aVsW0rYQpEg4CUnDwLBsLMNRcfq8XgzDwjD6ifQn0CJ9aIrAMkw8Xh8ejx9sN7ZlYdsSoSgZ9aYQCtIGRai4PR7cuoaFBEvi93hQddUJqJGSRDyBUBVsBfy2Q04m4hJNU9GEiqkZTol8yofSsoVzTtOZIDxeHzUjR9Lc0kJDQzPRaALLFo4Zr+mQeGniWBEKEqc83OvWnQgeKRApAjeluUQRwuF6xSeZgkIRKCipcnhxrVXjaJrGrbfeyvLlyzMPf2nZt8fjGUQ2frqMNj0JpSGlJJFIYNs2brc7Uy4GIvP6Z7Uf6nzDGMb/BV1dXaxevZq6urrMDdA0zUw4wNmzZ5k1axYLFy4kNzeX/v7+Acenycu5c+eybt06Vq9ejdvtZvLkyfT09PD6669TV1fH+PHjmTVrFqFQCCEEd9xxB/X19fzv//4vRUVFLFiwgFgsxtq1a9myZQvTpk0jPz9/SKInKyuLBx54gPPnz7N582bWrl1LYWEho0aNYtfOXezctZOenp6M5UM4HObdd9/lypUreL1eCgsLWbp0KSNHjvy9XONhfHGEw2E2bNjA8ePHM999OoTnzJkznD17lkmTJrFo0SKKioqGVB6NGjWKefPmsWrVKl555RWCwSA333wz/f39vPXWWxl12uzZs8nLy0NVVebOncvu3bt59913KS8v55v3fBPLtvjtb3/LW2+9xYgRIygoKBiSIA+FQixevJiGhgZee+011q1bR3FxMX19fWzfvp2WlhYuXLjAqlWrhkx0njhhIgsWLsgQpsP4w6Kvr4/NmzfT1NSU+b4ty6Knp4dz585x5swZRo4cyd133/2ZZEsoFGLOnDnU1dXx4YcfUl5enkkxPXr0KOteWUd3dzf33HMP48aNIxAIcOutt1JbW0ttbS2VlZU88MADaJrG22+/zeuvv04gEKCoqAiXyz3ofH6/nwULFtDY2MiLL77IK+te
oaysjAkTJjB16lRqa2tZu3YtHR0dzJw5E7/PT+PFRt5//31OnjzJvHnzBiS8DuMPByfUYSvRaCQz/mzbpre3l/Pnz3Pq1ClKSkq49957qa6uHrKPkpIS5s2bx759+3jrrbcoKyvjjjvuAKC2tpYNGzbg8/mYP38+ZWVluN1upk+fzpQpU6ivr+c3v/lNxnNz27ZtrFu3DsMwKC0tHbLyx+v1Mnv2bFasWMHPf/5zXn31VSorK5k/f/6Qtgpp24RNmzZRXFzMbbfd9pmf5f8EaSPtfkcxpvgQ6uAwjmF8NSAldHbBjh02jRedsu4RIwTz5ipUVSnoOlRUKMy6WbKlW3LihKStzSnfzs2FWbc4CeBpfYqU0NUNO+psGhtT/VUI5s5VqK5WBgUPSQmmCf39DinpcQt8vqFJwauPTZeLX2iQuN0wYYLC1KkqLhdkZTnl7gcPSRoaHRISUpaywiEeZ89WKCtTKC62aWmxuNQE4W4nJCm/QFBWJjh92uknEpHouuBKi6SnB7KzHdL2hu3bCxdCy3Wuj20gzc4bdCIHaZFCeXk5fX19uFyuAdUG1/tcwWCQYDBIW1sbra2t13S8bdvE43HgE1//qyGEyHABn64SSivofxe8Pi9Tp06ltLQ0UzpeVVXFkSNHaG9v54EHHhgySHUYf9y4PvIGIbFlEmlZWGaCZDxOV2cfbpeHgB/8fpWKsiABv4vmK53E4v1kBXNxuTy43C50FFQFLCOBaSbwBwJomorP6yPgd4hLj8uNYSYdwk0l4xEJn5SKIx1yTtN0dNvGsiW2tFNhAxYSgZkq73S5dBRVoCoCKylRdBWEwDQSGKqCYSToTaWmkvqBmaaFZad8OBUnSEOoKroiQHU8Nj0eN0JITNPAQuL1uBGaSjxpOCEImoq0bRKJOEKRuDSHmJOp9wmO8hNsNJeLUHYOU6dOpSvcT3t3L7FYHMO0sWwLwxZowinREqTSjoRE1xTUVJm4oz51JPoKMpUvLlOKTRVVVVA1DWGnPhPXntkjhMDr9RIKhb6QMkFVVUpKStB1nebLzZnJLQ3Lsrhy5UrmQdDn81FSUoLLpXPlypXPbJ9MJikpKRlUVj6MYfxfEA6Heeedd64iZQSK4swJpaWlPPXUU9x3331MmzYNl8s1iLwEh0x84okniMfjvPvuu/zd3/0d2dnZJJNJ2tvbGTlyJH/+53/O5MmTM+dZsGABbW1trF69mn//939n1apVmKZJW1sb48eP5+mnn2bUqFFY5uBSdSEEFRUVPPHEE1y8eJH333+fmpoaHn/8cY4cPcKaNWtobm7OhBwkk0n27NnDoUOHEEIwZswYZsyYMUxefgXR19fHtm3b2LlzZ+a1tI9lUVER3/rWt1i2bBmzZs3C4/EMSV6GQiEefPBBopEom97cxD/+4z+Sk5OTIeXT4/rWW2/NkImzZ8/m6aef5sUXX+SnP/0pa9euRUpJe3s7lZWVPPnkk9x0002f+cBZUuyoLxsbGzMeSHl5ebS2ttLd3U1dXR179+4ddJyiKHzzm9/kpsk3DZOXXxFEo1F27drFgQMHMq8pioKu6xQUFLB48WKWLVvGnDlz8Pl8Q1ZDqKrK9OnT+c53vsMLL7zAiy++yJtvvomu64TDYaSU3H///Tz00EPk5+ejqipTp07lqaee4oVVL7Bq1So2btyIEILOzk4KCgp48sknU4EnQ6ssCgoKWLZsGY2Njbz33nusX7+e73//+9xzzz1Eo1F+85vfsH79et5++200TSMSiWDbNrfddhsrV65kzJgxN+yaDuOLIxaLUV9fz/HjxweUIWqaRl5eHvPnz2fp0qXMmzePgD9ALB4b1IfX6+X222+ns7OTtWvX8i//8i+sWrUKIQRdXV34/X6efPJJR9Hk8YKAcePG8fTTT/Pcc8+xdu1a3nvvPTRNo6uri0AgwLe//e0BgRmfRk5ODnfffTeNjY2sX7+etWvXUlpa+gf1Y5MyjrT7nAot4Uaow+rirzJ0DYJBCAQgHIa2NsmJEzbZOVA5QsHrhbFjFdraHZLzwgXHK3PaNIVxYwerJD/dX2uqv5wcRz35aQLTsSdzohG+qNDFMCRd3ZL+fsjJgaJikfHS9PmgIN9Zr0Yikp4eSbqASdegIB+KU+2zsgQ52YJLlySG6Sx5Q0GorhKcOStp73D+zcpyVJi2DUVFYgBhe70hhAuhOuQl0kCaXTfmRCkkEglOnjxJXV0dxcXF+Hw+tm/bzsKFC5k2bdp1PZeiKGRnZ2eqwA4cOMDSJUvx+oZWeEopiUQiGa91RVEcQdpVAqOrnw/TZelSyi9dFaaqGhUVFdx88828+eab1NfXk5WVxa5du/B4PMydOxff1zvu/k8S14W8TCb6iff3AjZI2wkU8PrQXRpSBduycKkKBXluvN5CWlsj9PR0kZOdh8flQXO7CXjdqIqgPxbD63ITDAbx+RzyUte1FOmmowkNUg6blm1iGSa2ZWEkDbAlqqLiculY0kY1LVRVcXwdbRPbdsw8pC0xDRNscOsp1SEaPq8PVXGMwLt7+uiPxZwydVXDVh3VpbCc8nOEpD8Wx5aCUNCPtCxsG0zTxufz0x+NoAiBW9fIysrC7fKgayqaCtI20TQFt8uDy+VCVZ2HflVR8PsDKChYpolQwO32kJOXx9Qpk2i61Myxk2cdpamUSNtGKoBMJ29K0npKoTihQkgnYEikrpoQArfmwuV2Q0qdqigKpmHh5I1bXHNizzVCURSmTp1Kfn4+x44fo6npMsXFxZkHvtbWVo4dO4amaUyaNIlAIMCUKVPIz8/n+PHjNDU1UVxcnJnsWltbMyqkiRMn4vf7r1nePoxhfBqjRo3in//5n+nq6hq0+E4v1EOhEKWlpRQWFmZIHr/fz3e/+10efPBBampq8Pv9KIrC6NGj+cEPfsCyZcuIRCKZPjVNo7i4mJqamgFl53l5eaxYsYKZM2fS3d2d8dPUNI3S0tKMFYNlWTz77LPcd999mdfAIQemTJnCP/3TP9HW1kZZWRlZWVl885vfZPy48UMu6NIIBAJMmDDhul7PYfzfUFVVxT/8wz/w7LPPDlKfp8djMBikuLiYoqKijPrH5/PxxBNPsGDBAioqKjLK3qqqKp559hnuXHQnfX19mT7THsI1NTVkZX2iwsnOzmbZsmVMmjSJjo5OLMvMtC8sLKS6uppQKIRlWTz22GPcfvvtmTEHoKgK48eP5+///u+5fLmZwkKnjGj69On09vZ+5ucWQlBUVJQpER7GHw5lZWX89V//NStWrBg0BtMhKIFAIDMG04sGIQTTZ0znX//1X1FVNVNuGwqFWLRoETU1NRw7dowLFxpIJhPk5+czbtw4Jk6cSElJSUbxHgqFuOuuu6mqquLo0aM0NjYipaS8vJzJkyczduxYcnNzsW2bZcuWMXnyZAoLC8nNdRaXiqIwcuRI/uZv/obly5eTk5NDYWEhfn+Axx9/nJtn3szJUye5dOkShmGQk5PDyJEjmTBhAlVVVcOLoD8wCgsLefbZZ7nnnnsG+Uunq30CgUDGdzJ9L3S5XNx
5551UV1dngs3S88rDDz/M5MmTOXHiBJcvXwbIKHLHjh3rhEWkiI9AIMC8efMoLS3lyJEjXLhwIbPJPnnyZMaPH09+fj62bbNw4UJGjBhBdnZ2xoZAURQqKir47ne/y1133UUgEBhSdQnO/XvWrFn86Ec/wufz3bjQPTuONFNJx4ob1GFvuK8qhIC8PMHChSrRqFMWvWOHzdFjEq/XpiBf4Pc7asj8PFBVx3dSVSE7C7xeMai/3FzBwgVOf42pEu6jxyQer01BgdPf1e11Hfw+kLbjfxntl3xa8iKlo7a0baeE3LbBSDp/Ov6cn7RVFNBdqZwMyznu03+naakNCgHqp1gLr1dQXi7IzoLeXidMqLwMmi5LPB6oqrpxJeNASnmZCumUJtLsvmGnsm2bixcvsmbNGjo7O3n22WcxTZOXX36Zl19+mfLy8s9NA/8yyM7O5tZbb2XLli3s2LGD+gP1zJ07d8i2fX19bNy4kfXr13PXXXexfPlyioqKUBSF9vZ2+vv7BwicDMOgra0Ny7K+tHJUCEccMnv2bN5++2127tyZUWHW1NQwffr0a7bCG8ZXH9eFvDSScaLRCC5NQ9McotHrczk+topwyD5pI5BkBV14XblEIgb9/SaKMAkF/GQHAgQDfmwpQAp8Ph+6rmeUglLa6C6X42dpg7RtdC0VmqMojpoxnsC2LFBEyjReoCoquqqRwMC2LISqoipKKmXbMSfWNM1Z/GlO+qDuduFyJ7Bty9nN1VO7+EJgSwtp2STiSRJJE8sCn8eDKhxZtK6q2IaBx6Xj83mdB6W8XNxuH5ZloGrCKQuXFqqAeH8UVVVwZwfJyckiOzuE26UjpZNmrrncBIJZlJeXMWPGNC43t9IXiSAtBVu1QXECe9RU4plEIoTjFWraNqa00ISGqrnweN2ZayqEimlbSCGxbIklLSwBuqZww7aorsLUqVOZO3cumzZtYtWq5wHJmDFjaGtrY926dRw4cICZM2cyY8YMfD5fpv3GjRtZtWoVAGPHjqW9vZ21a9dSX1/PjBkzmDlzJj6fj56enkHnTKeyffzxxxkF26VLl5BS8stf/pI33ngDRVGZOXMGCxcuHLBwH8afHrKysrj11luv+Thd15k4cSITP+UQrqoqlZWVX5iEEUJkvLI+D5qmMWHChCHJRo/Hk/HETKOmpma4jOKPEMFgkBkzZlzzcemUyk+rxlRVpays7AuHPwghyM3NzRBBn3e+0aNHZwiqq+F2uxk/fvwApdFQ7Ybx1YTf72fKlClMmTLlmo/Nz88f0v8yFAoxdepUxo4dSyQSwbIsvF4vfr9/kBpDCEF2dhYzZ85kwoT/z957Bdl13eeev5X23id1bqC7ASJ1IxEgEUki0RQVrGzZHs216soeWzWWyxqPXq6DHkbzooepqTvzMCWVy1UqX7kk2Vcuq0oidSXKJhVIgTkAJEgEEiQyCAINdD5hp7XmYR00CTbAJERy/6paFM5ZZ/U6p/c5Z+9v/f/fdzMzM3XAUalUKJfLsyKnUoolS5bMemi+kSAIGBkZmZM2Pn/+fPr6+li/YT31en3WtqZarRbdHNcJpVLpkt91b4VSioULF86xMJBS0tfXx7Zt21i/fv1smN6bj6c3UqvVWLduHStWrJitzC2Xy1Sr1QuOv0t9thpjLnlsvhEhBPPmzZuTlH65cbaJy/3mkZBh0TZ+neH9H72HYxw7OjoE/f3+JwwdBw86nn/eceo1mJmBUglGRx379vs28FIE9TocOOBYutQxf76g1YKz5xxxy1GrCebN8/NFkePgyz4h/LX2fG/OaIki36qttG/LPnHcsXiRF0jPU284nnvW8tJLjpERwciIJAy9GJnnPqSnnROLtT6h3FrQem66+ZuvRt/8b6Wgr09w00LB3n2OI4cdNvdVpL29virzilpoC4mQZVARzmVXtPJyYmKCn/70pzz88MPcfffdfPGLX2RyYpJ9+/Zx//33s2bNGv7kT/7kotY775Vqtcr27du5/fbbefTRR/nWt76FMQG3337bBVWUk5OT/OxnP+Ob3/wmExMT/P7v/z61Wm12Q/HgwYO8/PLLs4ni1lrOnTvH008/Tblcnk0Jfy9EUcTatWtZsmQJzz77LD09PUxMTPC5z32u6NZ5n3JZ3tJZmuJyQSvLCIwiCCRK+ZZtgTxf9tf2VcyIAkWpt0LaKRFC09ndSa1cRiuNVAGgUMqgtPEPlRJL5j0ZrWtXSgJYcp2RxbEPnsktsbXgHIHR5FlAajLyTBMag8tiH26TW4xSGKMJowglFUr6rR8nJEmW02y1sM6C8CciSimfxi19e3YSBExN13HWIXKL1AKjoNWawWhFKYooRSEdtSp9vb2EUQmXW9I0JrMZSjiMFLMBRN093fT191HpqGHCyJvSAiDQJqSrp5fFSxaxbt2tPLfnBdLUEgQKhPJenK6dGi59Cl0QhuTWoU1AuVwlKofoUCOFJE99anruwLrXW+6x4HLnv1WuMD09PbO7Rjt37uS5556jo6ODVqvF6dOn2bBhA1/5ylcYHh5GKUVPTw9f/vKXybKM3/zmN+zZs+eC8evXr+crX/kKI8MjlzTmnZmZ4b777mPnzp0kSUKe58zMzABwzz33+L+xlDQaX+SOO+4oxMuCgoKCgoIrjBB+w/qdVjZKKalWqxcNSPtteKPHV8EHh3f7dz8fSPlu03evS2wT2uIlIirEy+sM5+DMGcsvf2UZG3OsWCG5+25JtSKIY9+KDb4qUQiYnHTs3m155RXHwACsXCHZt99y8GVH/zOWD90lOXfO8YtfWsbOOZavkHz4bkm16udr6/ez7eFvJooEixcJ+vtgdBR2P+vo67OsXClRqp0evt/xm52Ws+d8peaqVb5itFLxlaAnX3XcmkAQwPQ0vPqqv+bs7PChQ63WO78GFQJqNcGSJYJ9+x1Hjzmmpr1IOjggGBi4ClV3IkCoGi4de8fiZZZlvPTSS9x7771MT0+TJAn79u1jamqKp556im984xsYY+jq6uIjH/kIN998M08++ST/9m//xsDAAH/8x3/MwoUL6e3t44/+6I948cUX+cEPfsDq1avZtm0bZ86c4Sc/+QmHDx/GWsupU6c4deoUxhi+9a1v0dfXRxRFrFu3js997nOXXOf5zegvf/nLjI2N8Ytf/ILTp0+zY8cObr75ZsrlMufOnWPXM7vY+fBOpqen+eIXv8jHP/5xyuUyq1at4qMf/Sg/+tGP+Pa3v81f/MVfsGTJEk6cOMEPf/hDnnnmGTZs2MBdd931nl9+pRQDAwPccccdfPvb3+anP/0pHR0d3HnnncXG4/uUyyJe5nmKw2KdIE5SHBCGAiUlvo3Zx8AooXw/s9BIFKVSiFSGUEnyJAFp0YEiiCKMCTFBiFS+KhLRzvlux5E753AuI00SlPDpUy73Ap61OQ6HMZaoFGGdJbd+Z8c530MtpULpAIHAtFO7TaDIraXRTEjSBKkkJghQaQp4Tw5jDKHRuMhSLZeolsq4NEcKR6VWZrqeo7QmMAZtFGGoqVTKdHf3IJxgpt5AKjBGIm
yKEIJqrUp3bzflSpmwVEYFIUJpvFrr/SnDqMTA0BALFg5x/MQJzo6N45xASQ3Oed9P4ZDOi56BMuSZw5gyQVBCSEmWOer1KaIwQrUT0m3my/+F0mjpg4TeqetlrVbja1/7Gn/+53/OypUr31VLlZSSNWvW8Ld/+3d86lOfYv/+/UxMTFCpVBgZGWHNmjWMjIzMznl+/N/8zd/yyU9+kgMHDjA+Pk6lUmF4eJi1a9f68aXy7Nr+5m/+hi996UusWrWKKPKC8F//9V/zp3/6p3Na3s5z3ivw7aqLCgoKCgoKCgquNtNTkzz33JO8/NLei94vhGD5yrVs2rydMIwYOzfKs7uf4NjRly86XmnNqtXr2LhpK0ppjhw5yDNPPcL01MRFxwdhxJq1G7l13W1FS94NjnNNXN729y8qL687hPCCoVZw/DiMj1tGRx21qhcPjx/37dGDgz6U5sCLll27LcbApo2SW27xVY+/ftCyZ49l/jzvAWk0HD8B4xOWs6OOWseF8w0NelHwzWgNC28SbN4s+fWD3lPz3p/kDA1ZymVfrXnihOPsOZg/D1av9qnieQ4rlgt2P+vYs8eiJHR1w5EjjkOHHLUarFzlxcvzYuY7JQx96nl3N7z2GkxOOaoVWLrswrb3K4WQIUJ34pJRXD7Zvqh+67CZLMs4fPgw3/ve9xgdHZ31f2w0Guzdu5cjR47MXo8ODg5SKpX4/ve/z8TEBF/96lfZuHEjUkrK5RJ33HEHf/AHf8B3vvMdvvvd77J48WLq9To//elPeeSRR3DOkWUZ9XodIQQ//OEPZzdsPv/5z7+leAmvh92VSiV+8IMf8Ktf/Yr9+/fT1dWFMYZWq0UcxwwPD/OXf/mXfOYzn2FwcBAhBAMDA/zZn/0ZzWaTX/7ylzz77LPUajUajQbj4+Ns3LiJv/qr/+1tK9HfjlqtxpYtW/j+97/PqVOn+OQnP1kE7L2PuSzipXM5zuWAD4ex1mEzjTC+5VsiEc4LmbmVpGmOyC2ZtCiTkKdJu2U7xOFFxXK5ShSVUDrACV9u7v0bxWwreZanSKHQUoFzuHZATxw3fctPEKKU8kJnDhKBQ6BNgFQaIWT7cYC0NBoNpmZmiJOs7e1hCAKFEJI4jsnzFJxFCkkQGfp7Ounr6salGa1WTFSOiCIvtuogpFarEAUarQXVWoVqtYP5QvqKVGdxeYpSilK5RFSKMIHBhBHaGEwQIKRsV0H6GJ1qrZM1t6yl0YzZufNRGo2YKAhB2NlqSedse+0BM/UmeeYT08mh3mxwdmyMnu5uOqsVFI4stwgE1gmsA/su/C6DIJgtAX8vaK0ZGRlm8eJF7NixgyRO0EZTq9UIw3DOSfG7GW+MYf369Rc8/ryHUEFBQUFBQUHBjcjRo6/w83u/z/TLO1nWM7fT5OUxy6N9H+Xo8YX0943wysvPsWvn39MRv8iCjrkX1S+dszw69PucfHWEanUeTzzxKAcf/7+4pc+iLnLtf3AcTp76z6xafSthODdZu+AGwra8eCkEQpYR8uJhHAXXhvO+lFu2SFqx5cUXHXv2OJTy1YWlEtx6i2D9esHoqOOJJxwzdbhlrWDdOklPj2DNGsmx444DBxxPP23p6pLcsUXSalkOvOjY87xDacgzP98tawWbNyuii7y1hYCOmuC2zZIggKeesrx6Ck695tDtNWkNK0b8mlev9inovb2CHTsk1lr2H/DemkHgW8YrFbhts2TTRj/nu0UpP//iRYKTrzpcCr09giWLJe8gsPq3RwSItlesszHOTs3++1IYE7Bp0ya++c1vEsfxJceVy2VGRkbo6Ojgr/7qr8jSjFWrV1Eq+ffpeXupL33pS2zbto1qtUpnZxednZ187WtfY+zcGO4S1/VaaxYuWHjR+y54ekLQ2dnJXXfdxfDwMF/4whc4ePAgp06dIsuy2VCfFStWsHTpUrq7u2cfGwQBt9xyC1/72tf49Kc/zYEDB5icnKRWqzEyMsLatWsZHh6etYjp6+vjq1/9Kl/4whfmWAqVSiU+/vGPs3TpUubNmzfr6w7ekmj79u1897vfJY5jhoaGrkgCe8H1weWpvMwEeeaTrqWwZFYgHGgESgkcEpwkS3KSuEWeWYT1zd8qUARG+8o4ndKME1qtJnErptrRQ7WjgzCKQKrZ1mYhBc76lnRtDFKcF/l85eX5tG9nQeUaKRWlsEQSp77FWkqUiTA6QApJmsW0mnWmp6ewuUVJQWAC79cpBFEQMDMtyHIFOGrVMj2dHZQDQ61SoVVvEASGSkeNjlDB3oUAACAASURBVO4OnM1BSaqVMj093XR1dlKtVihVyyhlSJIUJSVhYNBaYoyvspRKEwQlTGgwJkBKg5Ca8yE8xgQMDg2yZesWjAnZvXs3adLAp4f7JHHnfHWpUoo0S4njJqVqCeX/Gkihmak3CJVi7Phx8laLWn8fsloldw4n3LuPG/8tMca8qw+Zdzu+oKCgoKCgoOD9wEx9BlV/lS3dU2xZOLd1+ddpg/949Sy7d9fp63ccPTQJY8e5a/EMN8+bq0j8rFHnl0fHeGZ3i1rNsW/fOL2tE3x2qBd9EfXyx9NTjI2fmxOaU3Dj4T0vZ0AYhKq9bcXY5cDmCfX6KPX6NCdOCF566UIB3hhDT0/Pe7ZuqtfrvPbaa5c8PiuVCvPnz79kKvxbkWUZExMTjI1d2B5sLRw7ZpmcElQqncDlC04JAli6VNLRIRi/09HOygPhw2+6uwXd3YJm0/GpT0qy3N/W2yuQ0qd1f/Yziu3bfFXl/PmCMBR0fFaw443ztX/X+cdeqqhaSi+o3nG7ZHiZoF73z/88WkNHh6Cv7/VUcWN8evmnPy3YutWRtX+nEF4w7e0VdHb63zkwIPn85yFJoKvLV5QK4f//Rz6q2Hybr6ysVl9fYGen4KMfVaxf74W6atU/z6uBEAHCnA/tSXDpubcVL5WSDAwMMDAw8I5/z7Zt2y56u9b6op6+W7dufcdzvxOiKGJkZISlS5eybds26vU6zjmCIJgtIroYQRCwfPlylixZwu/8zu8Qx/ElH3Mxn/7zaK1ZtGgRixYtmnOflJKenh7uvvvu3/6JFlz3XJ7AnjgniTMcPpRHCEFCThA6dABKSuJWk6ThQ3O0Uu28cFA48tySplm7tTcljWOyxJJlljhpYsIAbQKCIMDoAC19NSRtr0YhJToMiIRDaYUyijRNSJMUZS3lSgWjvRCYW0eapaRpSrPRIokTsswLpkoKtApIsxzrLMr5EJ9SOaRaimjUpwmjkFq1Sm93F0r4F3DiXAOhFFEUUO2oEQYBmc3bKVhd1Do6qFQrROUyQmpKpapvc3c5QnjX4jAKCcIS2oQoLZHK+4EiFKB8e3ugEFIwMDCf9RvWEccJ+/Y/Txo3sDZDCrACbPs1yfOMOI3bbf0BURgSmJDpmUkCmzFz9hwqzWiFAaVaGSfg6mxTFRQUFBQUFBQUvFucg0jBYM2wsMPMuX9e1VCelgjhvfCkgEogWNBx8fF9FU3QkLNjhYBaoFjYadByrgDQU9JMFO3iNz4ux2XjuGwao
coIfXWKAs6cfoZHH/p/0Exx4oDmoZ9deN3RSC3rbr+Tr3/96+/almBqaop7772X//69f6K/NreMbybOKHfN50t/9qd85GO/+67X/uKB/fz9N/8/jh09TE/l9fkd3m/ylaM5pxdv4Hc/9n8Aly+5PQh8a/jg4KVfD2MunqwdBDA0JBgauvC+t5vvrZDSi4dvFBDfDmO8oPh2omK5DCMjc69FgwBuWujDeS4298DV8rh8MzJCGi9COhtjk2PIaPjqr+MqoZSio6PjguTwd0JReFRwubgs4mWrlRK3EhAOrSWBCdEq8CE4uaZRT2m1MmxmUdAWyQRKadr5N6SpxWifIG6znJnpKR8ogyPMApRWxFL59HDtqy2F9KnZQkikdDgBygSUpCZyzs+NxTlHq5lgrUO4nDzLSJoN0laMEIJAKzo7O4jjhCROkFLiJEghCY1BGUVUCogihVaS7u5uqpUKwlrqk5PgHFkSk6UxYdBNb18PJvDt35VqjXK1itYa6wQSSRCESKVQyrfZCwFKKoQyIDUOiXXSp5g5ia8pBRBtr05NT18v6zasRyrBoUMv0WpMk8QtrLVY64vEnXAkWUKcxJggJDQRHZUKjZkpsjSn2teHdJZSTxdKB/5529lfVlBQUFBQUFBQcAMhpfe1u+vDkpHlkscelrx0/6Uv6qX0gRef/IRk3nyJs5KJh67igguuCS6fwqYnweUIXUMGb99CejloNUfpyx/hP632QvubBcpHTs+wb28P1tpLBnBeimazyenjh+gY3cXv9c+tSj4ZJ+x+tZujR95bQMj42DlGX3ycbcEpVkQXtti70PFM1uKhqYg4aXI5xcuC6xchy8hgcTs+vYmNDwJFBWBBwZXisoiXwimkUAjpMNoQhSWsk2SpTyKvz8TeB0MpwtAgnEM6L8xJLLl1vspSZUShQymJQKOUJjQR5XIVpXU7Fdsra1IpwqjU9q70X3xSgkRhrSNPM5I0Jo4bxHGLVrNJlqbkeU6Wpigh6eioYdthN7m1lCpV0laLZrOBxfp286hEGEQoKXFEaCmplMtoo2nO1BFSMW/eAEma4KwjTVK01nR0dhKEZUq1GkFUalekKkrlKlIa0iwhtxlStr+YhcQhyJ3DWl+tKvFp4s45HNL7WTqJlJowKjN/YIA8zyhFAceOHGb07GnipOnHIWZfqzhJqLRr+ivliI5aFS0Fpd5upBQIrcmdQCIQ7nWptKCgoKCgoKCg4MZBCCiXBAsWCpYtlRw6KDgyt+DygvG1qmDRIsnQkKS3Fyav3nILrhEuO4eLjwEgVCcyWHK1fjM9Jdg8VGKgOvcy9LWpmNfe68zOoSQs6jLcNji3jXVeSfDaGYG179HywEGHcaybH7F2/tz5G4nlkRPSh8MWfDAQBmHmIXQ3Lpskb73ir7+L6vSCgivCZREvVaCRymCdJUkcrVadLBNEQUTSSsgyh5IGrRVSCLRS5Pa8R6UX1SSOPM9I85zABERh2Yf2lCqEYRltDD5925LnOQ7ILNTKJbRup5jjyOKU3GY+eEYIhNSEYYSWhjzLyPPce3NYR5qnWJvh8KJjmqUgoBRFWOdLo4NAUyqFaGUAgZKQWUvcaBIEEd2dvZSjMgiIkxbO5TQbMeWqJawEKFNCqhDpCymxWKzNQEi0CZFSELcrJslzlA4wYQktfcu4E23fEQdC+B1xqSTK+hb3SrWDnv4BWnGCFYLx8bO0GnXyLCHPHc5a0jgmzTKUztBK0dPdjZAglUAI5ysunfBBQ28QgwsKCgoKCgoKCgoK3l+4fAybnPTXSroXEQxd6yUVFNyQCFVDRovIp5/FpaPY7AzSzL/WyyooeF9yWcTLiYkZTp+ZxObtqkgpCYMIl0mfzG0kRgcYrTHGoJVE40NllNSkSUIqHFmakmUZkZSEUYgOJFJ70U9IiZICJRVIkEKBgDTzj5FCYp1P20Z4Y1epBEpL30ade39Ma51vU49j0iwhTWPiVhMpJUpKtNHkuSWQGqMNRiuk9AnhWeoDh6JySGd3Dx0dndRKVcqlMkEYoJUiSVrErRgnJE4oH7qjNFJ5g15rLVnWwuY+GR1nvWl0u9LSBGWMCZmuNwiMIQwj7+/Zbn93SNLMkiQJNs/JM4u1EKcO6zSlqIZNUzJnEUKRZw6b58RJTBCWsFjC0KC0RBvfui4EBCYidQqpFFc9saegoKCgoKCgoKCg4CpgselZbDKKkCVEsKBIGi8oeI8IWUNGy714mdexrZcL8bKg4ApxWcTL6emM6cnUVzQKR6kUUo40Nocsy+moRVQrZcB7O2opcTgfTOMsVkjS3CGVRGtNGPgqyySNkXGTsFzGhBGlKEJLhW1XX1qbkVtfvekQSKlRMkRriXM5ee6rKnHMCqP5+Yiz8+aOziFKkjRNEUmC1IYkTQlNQJ7lZLkjt74FXeBTsyqVDsrlKmGpTLmjgzAIKJfLhGEAQvp2BKUoVzsIo7IXBF3uhcvct6m3Wk2SJAYsQRihdUhndyfdPfOQ0pDno9TrE9g8JQzLCKVACLIsp9FoMHrmDCdPnCBJUyYmJslzy8T4tE8ftw4shCbEOp/AnsQxrpYRBCVKYUAQanAO53LKpRJpZmfT3woKCgoKCgoKCq4/pJQcnhTsOzzNb45nKCkuyFp8cTSmtjLzm/3t8QfGJMeOTrG4J5kz374zMQOVDNmexFrHU6cy/t8npmYDe1x78z+3sO9Mi5EFdnZ8wY2Hy2dw6UmwCSLsRoaLr/WSCgpuWISqosIRUgDbwLZehNr2a72sgoL3JZdFvHQIbLtaTyKQwgfrxM2YKCwRGENgFNaBRKKUwmGxedb2sYQwCHFSIlDYHLQKCMIK1WoXpVKNIAxRKkAphZaCNE2wua/E9HP4dG6lFAjvm6mNwTmLtRYpBFIKrPIp3MJpH5SjFWma+jXrgNxmlIXwLexJCk7gbI60ChMYOqo1SqUySimiMERqidQKZTQyMGgTYEyIkBIThGgTYG1Oq5mQpTmtuIXNU6xNvbjqLA5J7jTalCiVaxhtUFqQZS3yPCPLMmyWkuU5AoWzjjhucW5sjFdfPUWj3mBmpk6z2STPE6yNsXkKwtGIG5QqAR0dVWrVEtVqhTDwBtk+YKhEmmbYHAQKbE7heVlQUFBQUFBQcP2xdOlK1m/5Sx5s3MXpbsXSpYL+/tc7Zvpyx+JlN7Nw4SIAVt+8gf/pT/+W068eIzRzBcd5TjCych3d3T4Jdtu2u9Hy/4asNTsmSeDoUcuxY9B/q2bb9s0Ewdw054IbA5edw573u9RdyHDJtV1QwQeSVgtOnLC8esoRRYIliwXz5r3+WeacT3E/eNBx4qQjjh3lkmDhQsHIyMXT1a8JwiDMfITp9JWX8UF/KX2dLK+g4P3E5fG81P4H60NfjNIEOqTSXaJcCilFEQLQWqBFgBSCNPMCm80szoI2AcoYpDIobQiCgCgqgRO0Wi2yNCfWMQKBUAKwSCFxNkMgkQKU9qKbVr613GFxucPh0IFPDc/TDJspcm2QOiVNM4RSGGPIshyLb1FPkxZh0K6WzDLCKCQIQsIo
xLocl+fgLEI4cpeRO+vDg6RGaoMQkjhOmJ6pk8QtGtMzbU9MX8mptMQYQxiEWCtIUl+BqZREGUlZVZk/cBMT42P++WcpcStmeqpOK/Yp6V3dPYyeOcfE+Ks06g2SLGVyZoowElRCRSkyzL9piAUDg5SjEkZpdOgFYCF8lWu9PgMOkiTHKkEpcMWH7UX4l3/5F5IkYfPmzaxYsYIwnGvUXVBwo/LEE0/w/PPPs2TJEjZu3Eh3d3fhfVtwzXjuuefYtWsXAwMDbNq0if7+/uJ4LLiqTE9P02w26ejoIIqia72cC+ju7mXlqo9x4uQ25s0TbN0iWb789feHEIIoKlEu+7Tl+QNDfOx3f49Wq3XR+aSURKUSYeif5/Llq1i4cBFpms6OaTTgySct1lgWLdKsXFl5PXCy4Ipw+PBhnn/+eUZGRli+fDnGvEXq0rvEpmew8SEAhO5GBjddtrnf9nc7y+nplMeP24sG9hw828JV3nsRRT3O2X+6wePH5wr1xyZTDtcjht/j3M45ztZznjhRZyaZG/rzwukm9WRudfONgLVw8qTjyadyuroE27YqSlfISSDPYXTU8fTTlr37LNPT0NsriCI5K15aC2fOOH7zm5x9+x0zM/5xWjs6OmD1KsFdd6kLNm6uHQKhOpDhIvKZ57HJa9jkODK8eu+r9xNZljE+Po5Sip6enmu9nILrjMsiXtZqIVEoSBPXroBUhFFIKTSUIo2SXixTyguLeZaDc973se1BiZS04pQgsiRpwsz0FK1WC21KmDBCSoUJorZAKNFGtv0zNYFRBCZEG+3FTSG896UDKQRCCv97rEMgUdogXA5CoLRue0fmvnVcabI8Q0pfcWmdI899erfW2t+e59CeM44TDJIkz4isw2U5edYkSTOazQbT01OkaQzO4hworZHKJ63nVpDlAqV9QnoQhEilsDiElFRqHTQaTR5++GEmxiYohRFJkhG3Ys6eO0dU9lWt3V3dvuoyywiDgEZ9BpcKertLDPZ3Ua0EBEYh26FGRmuazRZpGiOlJrM50mgc/v6CuezcuZOdO3fS1dXFypUrueOOO9ixYwcjIyOFkFlww3Po0CG+973vMT09zaJFi9i4cSPbtm1j06ZNhZBZcNU5fvw4//qv/8prr73G4sWLWb9+Pdu3b2fTpk309vYWx2PBFefIkSP88z//M2fOnOGOO+5g+/btLF++/LoQMoWQBKZCqVSmXIauLklv76VbuKVUVCo1KpXaO5rfmABjLqyqjCJHrcNSKjlKJQiComX8SnP48GH+/u//nlarxcqVK9myZQtbt25lZGTktxIyXT6Jbe71fpe6igyHEbJ8GVf+1tRqN5F3foz/cfIo/d2Srq4LP89bAyG337b5PdkSVKtVVqxazTM3refBxM65v6kdvSuGWXXzmve09vmDg9y85SO8sn8Xo8mFFYL1uuPwmGLewvWUonf2XrueiGM4csSye7djZASy265M6WCW+SruX/3acuSIo9mCuAVKOZL49XHNpuPAAcszuxxZBiuWC4YWCE6edBx40fHcHkd/v2XHDoW6DvZRhO5GldeRzzyPS8+RTT9IEP7JtV7WDUmj0eDf//3f+eEPf8iGDRvYvn07mzdvLoTMAuAyiZe9fRVmZjqZmY5p1FNQGZVamchojBJIpA+dEdZXQ55PGFcGREaWW6zzid8T4xO0Wk2q1arfCY6EFw+lTxTXSLSQaB0QBCFR6CsitdKodpq5tQ4paHtjxuS53z0W0iCNFzdt24PSOYHNc7IkJbcWlCDLM5JWTJqkCCWRUiHw4p+UAmszlBJkzmGTlNz5KHGbCZyDPM9J05wkTcnzFCF8OJHWBq01QkmElID0HpjCok2IMgakAIRvsZeSjs4Oenp6efKxJzlz+gyVUgWlNBNTkyij26+j8iKrtbSaMVhBbqFULqMkpEkLl2cYExCYEjbPUFogZECWW7CCNEtRxiCcLPTLi9BsNjl69Cj79u3j2Wef5cEHH+T73//+7Anljh07GB4eLtqoCm5IsizjzJkzvPjii7zwwgs8/vjj/PjHP2bx4sWvC5kbN9HdUwiZBVeePM85e/Yse/bsYd++fTz++OPce++9LFmyZPZEdsOGDYWQWXDFmJmZ4bnnnmPnzp08+OCDLFiwYHbjcvv27dd043J8bJQXnn+A3U8+Q7UqOHJQ0Nv7xveB4Oa1m7jr7k9SqdR47dRxHt75AIde3o+Uc98vUirWbdjKnXd9giAI2L/vWR769c+YmZ6cHZOmvirr1Vcdx14J6e26k0WLfrcdKFlwJWi1Whw5coSDBw/yzDPP8Ktf/YqFCxeycuVKtm7dytatWxkeHvahn+8CGx8mm3kcbIYsDaGqW7maLVfdPatYveH/RDDD5k2CLVsuVJ6U1u+52r5cLvPhD3+UkZERnL34xUy5UmHhTYve09oXLVrMl//3/8L42NgFt1sL+w9YfvozGBrqIQiuXviRc3D2rGPvPkuWwrx5glbL8cohx+LFgk0bFc2mY/9+y5GjjnodtIaBAcGamyVDQ4LxcceuXZYXXnBMTcGJE44HH7KMDAtWrpSkqRc29x9wjI05hID58wVrbhbcdJP/DJie9q3dbxQg34gQUC77CsvRUce5c441NwuUgsefmPu3imOYmIQogvnzBHffLRkYEBw+7BgdzRkfh1dfdSQJV6xC9N0gZBVVXocI/h2XjpHPPI7r/DjCzLvWS7vhyPOcEydOcN99982e/y1atKgQMguAyyReRqUSvX19dHdLskyA0PTPm0cWx6TNBgjviOnwO8Y68GKfi2N0oJBZTrMZkyQJMs+IShFhKSKIIpyA3OYEQYSQGmUCgigiCEP/E4WEJkBKhVS+fdwLpWBtjshAWY3An6AJIfwnvQDnhP+xOVmakNmcJMtwSYIjwQmBUopSqUoUlhBS4fKMRmMG5yxJliMyR5o60swyU/dt4Qhw+KR1pTRaSZQQCKXaIqxAtE8UbPt/lJTtk0oxqx06B0FYYuPm29i96zmOHj3Gq6deA6BSq6Kk9knsQjIxOcn09DRCCIR0SOuodXUgVeBTym1O3GqQxCld3X2YICLNMlqNGVqtFkIoTODTxouLwbem0Whw+PBhDh8+zO7du3nwwQcZGhqaPaHcsWMHyQ3aNlJQcF7IPHPmDHv37r1AyDx/4lAImQVXiyzLGB0dZXR0dFbIvOeeey4QMjdu3FicyBZcERqNBkeOHOHIkSPs3r2bX//617NC5pYtW9i2bRsjIyNXdePyxMmjHNp7DwuTB1mlDHpUoN6gpew/mzN27jQrb17P8LIah145wKP//n2qk/tZ1DX3tH/P6ZyJqWluXX8H/f3zeW7PMzz+P/6BLYOg29qkc9CVwaqq4/lXLc8+l/Gxj3/ouqhEfb/jnKNer/PKK69w6NChC4TMVatWzQqZy5Yt877/bzVXNkZefwrbOopQJWTpFlT4Xpuo3xvaVOnqXk0YwsgKyabbLl/ZnJSSvv5++vr7L9ucbyQqlVg2PMKb+87zHJzIeewJS7UG4iKbBFcK52ByyrF7t2N
ywjF/vmB62nFuDKSExYsczzxjeWaXpdkEY/x6X3zRb0Z86EOSPIM9z3tP2zSDM2fg6ae9PduCBb4C8qHfWM6eBal8nc1LLzlOnBDc/SFYskRy6pTjF7+wTExcXDQWAoYGvQh5002ST3/Kb7q88opFyrmPqVYFd9wuWXOzoFTyXphaQ6XikPJ8iNgVfnHfDUIhw8Xojm2kZ3+CjU+QTt1P0POfodjkeU/keT57/ne+sOK8kLl+/Xp27NhRCJkfQC6LeDlv3nKM6fWt0E6AEFQjQ3NmklQJXJqilETgqygRwrcwO0srbpKlKWmaIACjNf3zBihXq1gBWeYA6ZPEdYA2gU/nNgakxOEFwPNfE0Lp9r8cQoI2IRLfPi6VxFrrBT4hcChAkCUxUmls0iJpNGk0mqRpjnMCITRCKKTW2NyR5Q6k9m3gUgASJyWptb6dHN+mriQ4Ibxw6BxOCLAOS450EqckUkk/HjBG+aZt5xD4Ck6cwyHo7O7lD//nz3Py5Ks0mi0mJidI8hRyR1dnF2EU0lXrYHJ8gkarzuCCftasHWHevB6UdKRJCyQExiCEoZVaIhMglCK3MwilMTpAKX3BB+x//a//lV27dhVCHPD8889f9HVoNBocOnSIQ4cOzV7YDA0NceDAAS9kF1y3PPDAA/zoRz/i9OnT13op15xjx45x8uTJObdfTMi85557LhAyx8fHr8GK33888sgjPPDAAxw7duxaL+Wac+rUKQ4dOjTn9jcfj4899hj33HMPS5cuvaAis+C9ce7cOfbs2cPZs+eu9VKuOS++eIAzZ85ccNv5jcs3C5mrVq2aFTKXLVt2xYXMVqtB2Y6x9SbLtkVz24dLJLyYzdBqNdvrnqGcnuXOQcfaeXNP+13S4ljaIE39OU69XqfHjfOpxX2Yi4gwdu8kp1szWDvX8++35bHHHuPHP/4xeX75577ROHr06JzvV+ccMzMzvPzyy7zyyis8/fTT/PKXv2ThwoWsXr16VshcsmTJm4RMh42Pko7fQza1E2yKKN+Erm0HWVgf3eg469usz44BwrFiueBDH5L09sLMjOPYcS/4rbtVcOutXmh86DeWgwcdQ0OO22+TbN0qSVPLq6dgyRLBnTskg4OCs2cdjz/hOHUKli8XflwCTz1tefllR1enpb9fYALoqMGlqniFgFoNokgwf77/yTI4fPjizykIYHBQzM53Prxn/wEvzJZKMDAouIxWsL81QnWhq1vJJh/CZdOkYz/BZdMEfX+MUG9tJRDHMUeOHOGllw5epdVev8zMTHPgwIELbruYkPmTn/ykEDI/gFwW8TIs9RI1fbtxbnO0kZTKhu7ublzSpDE9SRbH5GlMluVkWUac+ZZqrEU6iIIAITRRuUKpXEWZEKxFKdduEQ8wRqPbHpVJ4ghlQG4VSZoSaIcQGmm9X6QXSIVv98b7SjrnENK128sluQObW6SyNBpNJsYnqTcaWGtBKJQUKGFwFuJWjHPeo1NIhRSqLYLKdlWnFypxvqpSIhDOkqUZcSsjy3L/+mQ5o6NniUqRr57UhiAwTE1WWbx4CaVKBaGMl1/b4iVCsmx4mP/0hT/iO//tvyGVYHpqmiRLGZ+cQM0o+vr6WHvLWvbt28PCBfMZGOzHGIO1YKTG2YQgqhCEZZSpUKl2MDE+QRhVCF0+W0Hl4408Tz75JPfdd98lTd4/SJRKJX9cvAX1en1WyCy4/jl8+DD3338/hy915vQB4+3E9osJmf/xH/9BvV4njmOMKS6AfhtOnDjB7t272bt377VeynXBuz0eH330UR544AE+/elPc9dddxWbR++BQ4cO8d3vfpenn376Wi/lmtNoNOaIl+c5Xwl36NAhDh8+zK5duy6ohNuyZQszMzNXdH2Rhp5A01ueW7XWXdKE+o2VPoKSEfSVLz6+s6QJ1OvjhYCykfSVFfoi4mUtVIxeoar7AwcO8E//9E9kWXZF5r/RiONL9ODyupB58OBBXn75ZZ5++mkeeOABlixZwoc+9CG+8IUvsGzZMj/WxuSN3WSTv8BlDV8lFixCRiuv1lMpuAoIoKfHC4wLF0qEgGbL8alPSWwOnZ2Cnh5BtWp5/gU4cRLOnXOEISxeJKnVHOq0o6cbVq0SaC3Ytdty4oSjUoG1awS3rJXEMTQajiNHvDA6Pu4YGpR84hOCt9p3CAK/hvNOB+/0be4cTEw4Hnvc8uSTPuh32TLf9v4uXROuLEIjg2Xoju2k5+7DJWfJp39DXl6Nrn3oLR9ar9e5//77+fa3v3111nodk+c5Y2+yZnjz/W8lZN55551s2bKFWu3G854teHsuy1veOon1vdIIKTFBRLVao1IqoZ0l6ZimMT1BY2qcZmMGXEbscgQWoyWVchmpQ8JSGScVSIl1AodEKOETyIMQoTS5tf53tBOzszQHLcjbrdK5tW3RUsxWekojya1DKoXW3nuSPMflliT3AuDE2DhJ26hDaw3SV4dKpdst6BZrLXluybIEnCXNMnCWLMmQSqOUxjqHcA6lRVsExftdSo0JQ6JSifmDgwRR6IOIlEErhc9P2CcjswAAIABJREFUhyy3GNlOUne+idwhcEKxfsNGPvd7p9n50C95cf8+plVOlufESZOzoynO5QwPD+Oc8AnppQglA4SooJWgFIUEpkwQ1QjDiPHxKUphhLUxMzNTKB1hTDi7Z7Zq1SrGx8eLykv8DvipU6fe0dglS5Zw5swZGo3GFV5VwW/D+STjoaGha72Ua86ZM2c4fvw4zWbzbcdWq1XWrl3L1q1b2bRpEz//+c+LasHLQF9fHxs2bKCrq+taL+Wac+7cOY4dO0a9Xn/bsR0dHaxdu3Y2YGrlypXUarXC0uA9UK/XOXz4MC+88MK1XsoNwxtbesfGxqiUK9x+++2F//V7RGtNtVq9IOn8g0qWZe/4dRBCUKlUWL16NTt27GD79u309va+4X6NUN0IVfXiJRZsA2wT5HVgGFhwWdAGenu97+X5isRKWZB0wJEjjsOHLa0WTEw6JiZ8xWaWXrr9OssckxPQakGSwMOPeF9Ma2Fq2tFqQb0O4+OwZMn5SsnLi7U+mfyRR3Ke2eVoNGDFCsFdd8nrJGn8QpxrYJN2J5MQICOEfnsbgyzLOH36dPH9+y45L2SOjY0xOTnJ4sWL0VoXYbrvYy6LeCmEQ7ZbuKUUBIGhVC6jdUBoDNXOTjq7e6lPnGNy/Az16SlMWCZJM6JSme7uPkxYBqkYn5wkzXxqOcIRGC/6CW1AKqTRKK1xQJKkKKmQSFIyXwUprffTxOHyHCUkSerQRreDgzR5bknjmLjVYvTsKJNjYzQaDXLnn4cQIJREK41wrXZbuq+mlNoLoEJKIm38rnRZeIERsM4hpaDdJY6QEqUURhmM0RgT0KE1SocIpfzupxRkaTKbZm6tvcAvRTjX9r8M2fE7dyFFRn93mXMTZ5merpOmGVmSYS1UaiUs/nlrbdCmjJSKIDAExqB1hDYRSgVIKchtwsT4OayzlHWElGa2dfxLX/oSf/gHf4h115OpyLXhG9/4BmfPnr1oNYCUkq
VLl3L77bezZcsWVq9ezd/93d/x3HPPXYOVFrxTtm7dytKlS9+ysuGDwn333cd3vvMdjh49etH7u7q6uOWWW9i6dSsbN25k6dKlDA4OMm/ePPbu3fu2PlsFb8+GDRvYuHHjOxKQ3+889NBD/OM//uOctqHzdHV1ceutt7Jt2zY2bNjA0qVLGRoaoq+vjzAMGR0dvcorfn8wPDzMV77yFT772c9e66Vcc44cOcL999/PSy+99JbjOjs7Wbt2rfcC3rSJ4eFhhoaGeOKJJ67SSt9ffPjDH2bhwoVv2+nyQeCpp57iH/7hHy65iSOEoL+/n9tuu43t27ezfv16brrpJoaGhujq6rowrVtoVGUDwbz/lXT8J+Qze7HxMfL60+jO371Kz6jgSqMkhCFo7a8h8xyOHfN+lUePerExy/ztSQpavXVGq3OQ5f56FuuFSm/n5unvh+5uAcJXcL7yihdHL4ZvGxcMDws6Ot6Z6GgtnDnjePChnN27vWi6Yb3gzju9b+b1durpbIu8eYC8cQCkRpVXE/T9L+/IV7ZarfKJT3yC7u7uq7DS65tGo8Fjjz3Gz3/+87ccp5RiaGiIbdu2sWXLFm699VaGhoYYHBwsNhDfx1wm8bJddYkPnlFSIQUEQUgQlAjDAFHuoBRVqda6ieMGeZ6RW0cYlQnDMk4omnHMTJzh4oQ8FyhypFYEQYgJAy9OtluvM5dhdAB5TmIFoVQ4C84K0jwjzVK0EuRSYmSIEN7PsdWKmZwcJ4mbTE1MMD0zjcIRBgFSa++liUBIH7gjhKBt44lEQFuMdM4bFstZ/0xw1rYF3PPp6n68wL8mSkqkNgilcCgkyjteOovNM4Iw8O3t1vn0b6nekORoAUm1q4fbt21HZHVeO3GQuDWD0gYpDUpJcicZm5wmjptI20FgQqTUaHV+TIhWBqM1MzOTTE2codGYARRhqeZPGNvtdsPDV9fE+3qmu7v7ghNBpRTDw8PccccdbN26lRUrVrBw4UIGBwepVqt0dnZew9UWvBP6+/vpv0Km7jca+/fvnxO80NfXx7p169i6dSvr169nyZIlDA4O0tfXd8FJQVHhdnno6elhzZo113oZ1wUnT56kXC5fcFtvb+/s8bhh/QaWLL348Vjw3hkcHGThwoVFyy7eNmffvn0XFS9rtdqsYLl58+ZZwfKNx2JxTL43FixYwIIFC671Mq4L4jie870shKCvr4/bbruNbdu2sX79ehYtWsTQ0NCc89Q3I1QnqroDl9exraPY9CxZ/QlUdQtCdVzpp1NwlXjjKVm97ti/37HneUdg4JZbBCMjgplpeOxxH8DzVkjpW72l9H6Wd+6QjIxceM6njaBWhddec+zcaRmfuPS6FiwQ9PTIdyReOgdjY46HH8nZtcuhFGy5w6fTz5snrjvhEsBlo+RTD0IeI4I+TM8foiobuZQP6BsplUps3ryZtWvXXvF1Xu9MTEyQZdlFxUutNYODg2zbto2tW7dyyy23zAqWxbX3B4PLJ16eT/H2cTNgwWhDYEKUNCglCJQhLFVxLsO5HOccShty60jTnNQ571XZbgnPc59SnmYZtpkAou0tCUppcu1QUhEEAuMgs468FdOKWyityJXEGINWjmarRZrG1GfqTE9P4WxOmiWo0GC0wkiNUgqlDEJqtAoQUiCU8GE7uHYFpADXbgnHeh9M5/zHkmj/l9fjGYVzCCmQUiKVQUiDOJ96fv7DzPo3o9FmNkxobGKcarWKMYGf7/y3kZB0dvdx862bmZk4jcynUTKhUqtQKkUgNX1dVaYbTRLnKGl8Sz8ShEFKTZ7FHDt1mH1799DTWyUIA2zu8MWexW73pVBKsXr1arZs2cKWLVtYvnz5rGBZLpff8qSxoOBGYGBggA0bNrB161ZuvfXW2Yuinp4ezPXkil7wgaC/v3/2eFy3bt2sgN7b21scj1cArTWVSuVaL+O6oFarXXCMVatV1qxZw44dO9i0aRMjIyMXVPsWFFwphBD09vayefPm2QrLNwqW76bzQcgSqnwLqryGbOoJbPMl8vpudMddV/AZFFwr4thXRCYxDMyHTRslS5dKXn7F4pyvunRutmYFeD3F2zkIAkF3F5Qif5vSPlncOd/KffJVR7Xq6OoUVKuCxUsEvZew+xUC+noFUfTONrwbDcfevZZdu/ziNm4Q3HmnordXcF3umbsE2zpIXn/OV12WlqMqm3knwiX493kURXM2LD6IOOcueB0KwbLgjVwem1vpnRml9EnbCO9ZqbTBIRDCoZRC6hBHAMKhBDjrkMr7Ueo0wzqQUpNmLVwOUrbDeqKyb3MWEiUlubWzrdzaGLQJSHNLkjawaYK1Dm0MQgry3DEzPUOSJKRJTJ5ns4njWgcILdFKIqEtVkqUDjCm5NPApcTiP8Wdswgn/LqdxTo7G6xj8xwxmxDuk9WR7QpOpZFaI4XyAtd5sRfa9foCYwKE8OnnQkKz2SSOY/r6+r1Hp/PjhQOEYf7CZaxet5XdD99LHk+SaIPCi8GhCgg6KuQChHGkQhBnFutSRs+c4dzoSbKsSb0xSW9/F+Vy1bett1vwC+bymc98ho997GOzJ4znBcui6qzg/cDq1av5+te/Tnd3NzfddBODg4N0d3d7/9+CgqvMsmXL+Ov/8tdUa1UWL148ezwWgmXB1aRarXLbbbfNppieFyz7+/uvqWBp9P/P3psF2ZVdZ3rf2me4N+cB81SoAlAjxeJQRXEWi1RL3SK75Y5oyhGt7mh1hMJ22G9WtB/c4QhHKOQXhx1+ssOO8IsibD84Wmq1Bne3hubYFCmSRbKqyBqAAlAYEkACmUACOdzp7OWHvfc55wIosqqUYAKo9Uko5M17zj77HoCJvH/+//oLzq0XfP+1Nf7N6716VFHi3I0hBz+slEVwf+Z5wWvXc378xnUOzN6579PXhzz+KR++1wS8V755bsC//A+rZC6VOYb5eN4rb64OePaw2riQe8z8/Dxf+MIX+M3f/E0+8pGPcPTo0fqHiX+bey/FIbLpX6TaeAk/uEq1+X3y2c+Cff//0JHn0Ika0M2b8JNXPUuXlNNvhgi5+uBuPHdemZyAIlRCsHRJ+d73PI8+KuzZIxw9Krz+uvKDFz1oWPfkKeXq1dBsvn9fxr59whc+n/3Uwp68gCKHH/7Q873vefr9MH+z3w/X/Yu/rPju9zy7FuHRxxw/+YlyYy3E2197Xbl6raq/1jkHhw4Kn/mMY35+59+L6egG1eaLaNVDykXymc8jzn4Y+F7J85wjR46YYGncwba8M12/tcnGxiaZENx/Qph32Z0IcyMBnMNlLgiEImQI6qswxxJFxVGOuswvLtIf9PDOk2c5eVFSlh3yvEuWx0KcKjgeRcJfbu89o+GQqhohXlFVBsMBWeHo90ITblVVrVi7C9fNc5wLbeROfPg4L8mLDlkRHJIqkKmiaQ6lAi4Il45wLVHwMgKfxdkhsYm8dlxmYW0XxNBg5NTg6Iw/7up0Jur1JRYEbW5sUeQFC4sLoTgoXgsgyzs8+sQHuXj2Va6cfYnRYJMeFZLlFEUHJ
46yO0GRVXQzT+aE0+fPcOrU66jfZO++fZRlSTXyOBccqkF43o6/EQ8fv/Irv0K326XT6ZhgaTx0PPPMMzz91NN0J7r2htTYcU6cOMFjjz1Gt9s1Ad3YEU6cOMG/+Bf/AhGpBcv7xRHz2LEnef6z/zVfGfxDqgXH8SeEA/ub70t+UeHwkWMcOnQEgA/8wkf5rf/qd7l69RJFdqdA9XGERx97kvn5MGvthRf+LvNz86gf1d8S9vtw6k3P6dNw4mnHCy98ICaDjHvFM888w+/8zu8wNzfH4uLitn0tFNfFdY4hncPo5kn8YAkdriCFjdF52JicFB4/4Th5suLyZfjO3yhzs8qhQ8Lzzzv+5m88Fy/Ct7/t+eQnHUcOC6+/oSwvh3KejQ3HZz/r+PSnHIO+D83iX/E4F2ZmHjksPPGkY2oqFAR1Oj/7/dHGRhA9X35FaffBhvmccO68cvAAzM0pt9YV9TD0sLQES0uNRTTLoN+Dj33sXty5d49WN6m2XgMEyXeRTd0nG3sAmZyc5Etf+lJdqmqCpdFmW/4lnOhO0ev2wI/IXEaR5eR5gXpFCocTIc9ysiw4DrPckUsWI9fgMoklPEpRlhRlia88ucvJ8w5l2cFlZR1Nl+QP1IrKe9R7vCpoLM2J/6ejCj8a1UKnSEaW5WFOZZ4jLqceaCkZkuXkeReXl+Cy0DgOgKB1gU4o53Eu7MX7WFgkEmZuAsQZmHkWREwnwYEpLkOiPT/9LFvD4RRl0Zgxk4PTK2tra+RFzvTMTOhPT4KnQGdylmc//gW+vbHO2pXTaLWF4BnmOUXWoVN2mJ2eppyYZZR16Q8qzr75On1fAUpR5PR6mwwGXYosDx1JuQmYd8MagI2HmdvnCxrGTjIxYe23xs6ya9cudu3adV+Og5mbW+D48V/irXMfZ99e4TOfFp54otlnSBbltbi4e89+PvfCr75tc7WIhBFL0Xl57NjjHDnyKFXLQrWxocx+xzNCOfqI48SJ7n15bx4m5ufn79n3npLvwXUfw2+eREfX8YNzZCZePpCIhJbvL/+jjOEQFhebpvGyDM3c3W7GxYsan4cjRxydDjzyiHD9urK4IBw6JOzZLcwvwOoKTEzAo4+GOPiTTwpzc3DpknL9RnibuLAABw869u1rrvdO6HaFD3/YsX+/vG3LebcbioCOHRc2795XhQhMT3NfuC5B0eoG2l8CV+A6R5HcxLb3SlmWPPnkkzzzzDM7vRXjPmRbxMtOUdIpO+BzyjJnamKKoiiD69AJzmUUeYGT0HKT5TllUeDEMRpVKDHmHUXOIs/x4oNw2emQ5yVZUTKqKjJVPFrXo3kUMkF8mNnoVXH1XMoRKhnp+ysnLgiWaJwtGWyOEjLjiBRIViDOhch3LCJCmlIinCD1F1tFspgA11gYhMaovMQoj0TRMW35zm/2wuvP47EwHAzqpsWq8ty4foPMZUxOTdXlQAKoOBb3HuUDH32Bv/rj89xcvsDERMHswiKTizMs7nuExf2HyMsuSkmnO8+lS5c4eeY1Bv2KDEdvc53eZgftTJJ3IITxDcMwDMMw3p/c38KckLlQvFgU0O06Jid/SlGLhNFE79Qp6VxGpzPuwFdVOh1PkSt5DtldHJzGg4Pki7jyEQC0WsMP3iKbem6Hd2W8F4KIJzzxxJ3v3kRgaio8d/RoGPtQlkJZBh/M7KwwHIYYeBIg5+YyBgMly4ROJzgc8xyOHnUcPAiDQXg3m9Z5t2QZ7NkT4ug/i717H4x3pOq38IPzITJezJJ1T+z0lh5oRMRSYMbbsj2FPRCVuehAVCXPMjqdDp1OB/UamrazOFMyC5Ht3GXkeQkCXqsoFDryvMBLRdnpUJZd8qJEneDUJVtiuLA6JLkXXRA0HR7vHY4MXznUVUiMZofmboc4guiIhJIdJ2RZEVq7Y1kQUQBtXmC8ZLy0S4KkEEp8RHBZnAsk0b2oGnca3KLBlZk+0ywoEGLbEl6NCPR7fZAMJ8JoVLG6ukqe53S63aYcyVeI5Bx+9Cl+4bnP8oNvbTK/OMOhRx5jcc9B5ncdoDM5Q1YUgKPoVjz//CcZeM/K1cugwqDfx3vPcFRBFVyiZr00DMMwDMO4/1hevsh3vv1HfPOr32Z6KudH3xPm5prv2yrv+fBHPsGX/sF/ytzcAufeOsW/+7f/mjdef4XiLtFjBX7x45/ji3//N+h2J/j+d7/Jn/3p/8vm5nr94+zRSLlyRbl6FV6ZzekUf4fDR75M5uwN5oOIuA6u2Ifk0+hoHd9/a6e3ZNxDsgxCwKb5OiESBMvbXZOdzt3j3yLByVmW9h7xDqp1fO8kAOImcd0nd3hDhvHwsi3ipaoH8aGIJxcUT7fbZWJikjxzjEYjXOZweZh7mUV3pROHk6yOjPvK46uKzDlycRR5QVEUZHlOBWFOpmotAJKky1rMUxSPU0V9hcfhpULwaKWNo9K54AolCJrO5eRFaBuPVTthjmVsCpf4OR8lxyDWaiwqihqnpy7iCSKkDx9D495MMzKFlnoZI+ZFisVDlmfs3beHC+eXKIqSLMsY9AdcW1lhz949odwHAfWoOIrJKT7yyc/jqVDtM79rF5OT80g+QUUWfkLf6dKZdDxWdigmJnjlRy8y7L/E1uZN/MgH92qc/2kYhmEYhmHcf1y6dIGrp/89z3W/zjO7OrhKkOvN8y8vD3lZ+/ziJz7H3NwCb509xavf+kP2bLzOowt3WqW+f2nAi3nOZ37pV+l2J3j1tVc49Y3/m195tKBtsPRd8IeVv7k44CevTjEc/DpZ10Y8PJiEuXxS7kc3T6HDy+joBpLbiCTDeLeov4XvvwmAZDO4zrEd3pFhPLxsi3jp468idxSdjO5El9n5RYqiYDTsg1aIK0A9TjLyLCNzLs6BDLMnsyxjMLzOcDDAkcWym4w8L+qCH+/CwMjMhabyaG+E2gWp+OiWVOdAKsQ7RBVPFUXLLFw3Co1OXNxPHgp6EFQJ19AoNhIcnqFxHJBQtjOuQTZDhFVTrlzi/EtfOzZBgigq4FTqdVIRjDoQHHML8/T6A1avrUZ3qGNrc4vrK9fZtXsXRVmg6sB7hIyJ6Tnmdu/n5tpVxHXxZAwrj45G0VVakZc5k1PTHD3yKGsrN3j1J6/S3+pz5fJlpmfmmZvfE+/LdvytMAzDMAzDMLaTwaDPlNvkuSMln37kznnFZQY/liH9QR+Afn+LOdngl450+MDeO0uHhpXntA4ZjcJMzP5gwP5Oj793bIbc3fkN4UZ/yIVqWI83Mh5MJN+FK4/gN0/h09zLn4N42e/dYGnpZYbDK5SFsLEx7t7tdDocP36cp5566l2v7b3n4sWLvPjii2MzW9ssLi7yoQ99iIWFhXe9/vr6Oq+//jpvvTXuVPVeOfWmcuaMQ/0joB9512sbDyqKjtbwvQvgCqRzCMmmd3pThvHQsl3VdeRZSVmGuTpTkzN0Oh1GoyHD4SDOfAxN42nuTpEXuDxDXI6THK+e66urqPexnTs4NJ3LcVmO9wriqXzU1lKiWxxBVwwCn6RUucTpki54CSXGwTOXR3dk
OCZzGZmERnBtRaa915SER4jfoGkT+ZbmYa31aYy0a/t7vRgL12QQjUenuLvGi5RFnLWZxE2XsXvPbkBZuboS2s9FWF+/hcscC4uLMa7jw0qqVCPP+vomnU6HzDk0qxAEn4fovFaeouiipTA3txCeqyo21m/R6/c5fPQE3NdzngzDMAzDMN7fFA6mcsdUeef3bJOFu010FIoMpsq7Hz9ROHLXLvyBMgvH3k287OYujhgyHmQkX8R12nMvz5JNPnvPr3vt6g95/cX/nsNTN7i85vjGS+N/J69uCY88//f4vd/7vXc9e3ZtbY0/+7M/41/9/v/GB/feOdLgRs/TnzrMP/6t/4x/8J/8w3e991Mn3+B//Z/+B7aunGL/THt9ZXNT6S8pP974NJubvwfsetfrGw8e6nv4wYVQmlvM4jqPYy4gw7h3bI94iRCmQDqEnG5nkixzbPR6DPs9UEdRTDA1M0VZdMnzgjzPcXmBy3JQz7Df5/qN60FQLDKcyyk6EzhXIOoQp4R+HR/E0OiMrAtsJNXhOFQU1JFlLrogNTZ/h4i4iKubvVOhTop7Q4iLJ4EyeSuDyBjLfUgzLUPZjmg8V9KMSxlzYoqGWZLNBMxYco7EuZhKEQuMIDb7qJLnBbt27aIaea7fuE7mHDjHzZtrZLljdmYOF12kDqHf6yNeGWxtMOxvUhRTzM8V0PUIkOd5bD93TExMMBwOUBSXZWxubQUZNO3BMAzDMAzDMIyHDnGTSLEPsi6MNtDB5Z/LdQf9G+zmJ/zjEzn7pvM7akK/cuMWb55+MxpC3h39fp+1a5fY33+LLy3M3PH8+bUB31rb5MrlS+9p77durrFx8Sd8euYqzyyMj0zQBeU72uNPb5xjOOy/p/WNBxC/hR+eB0DchJX1GMY9ZtsKe9R71DuCVpgEOBgOR2ilDDoDFjt7KIpOmDeZBUclTtBRxerKNfq9LTKXoU5C03jZxWV5iFaHvHZo8E4OyKAkhoZvDW7GJEbWUzFdhuKjQOmCQBij0WmWZZpVmQRLAbz6GCtvRcPjq1XVejak4MIONLoq47pBf9Q6hq2amsg1Pta60McBRVnUP2FMgiwaZmHu3rsHFeXmjZugIRZx48YaWVYwNTkZIvhAv7fF+vpNZmb2MjszRzkxzeTkJBOTk3S6XbLkPs1g165dHDx0iLW1KwxGPXrDUWhcd7lpl4ZhGIZhGIbx0CJINoNk0+jwBlpd/9mnbMdVRZjtOh7f1WH/9J1vQ1+7VnD6b/FGpMwdB2ZLHt9153zXMoM3Ru89YSYI06Xj+ELnrusv3aoobmW8e9nVeFBR7aPDa+GBK5HiwM5uyDAecrZHvIytNFU1YjQcstXrsbXVwys4lzEcDamqEUVR0ul2g9DoHIjDe0+vt8Xy8jKqPgh4LqcsJsiLTphBGSU+F6ZHjs+aVInao9Sx7TpWHh2RIbYeZMnGqdlU06S1kqBYK5ZJZCQt2BxcR8bvEDfBaUu/VQ1dPpr2FZvHNZ7sg0jatI2nQvX0eoSy02HPnr1Uw4r19Q0yyRkORqxcu0a2dw+TU1OohKj70tIF5uZm2HfgEaZn5sg7E3S6we3qXBYdphnT09Ps33+Aayv7uXT5AiIZWVbg5M6fghqGYRiGYRj3P94rl5Y8X/1KxalTFT9+yXPrlsLutzse3jqn/Ps/r1hcrHj5ZeXOSZrGw4i4iSBeDq6hfgN0BLJNoTzDeD/g++goCv9SIPnizu7HMB5ytsl5GYXAWFQzGPTxowqRHOcKijyIhsPhgJmZ6ehCDIU9Thy93habm+txvqQjywqKohtck1H0S4Ka1A3jgA8yposOTB8j4knATBFvjbHwZlglteKYRE9VX5f+SKsRvC1sSjBZNs7I+rkQW4/VQWFmpUYHqAa5VZzgva+Lf4KDMxybu1AYlIrUU1EQybmp0CmDgOn9Fba2tkCEfr/PtWtXOVDkuE6BV2VjYx0v4PIccKgqVVXh3AjvNRYWZTiXsWv3fian5qlGF9jaGiBk5M6ZeGkYhmEYhnEfIiTTwN1RhbWbcOpNuLGmnD+vTPTf3gumqly/obzxhjI9rVy6pBw369j7A+kiWYhXq99C/TqSWeO4YbxjdIhWa8Ec5KYQN/GzzzEM4z2zLeKliiDOhfmPLsPljlE1QrKMzBV0JrqUnQ5bW1uMRlUQJ/MCccL65jrLy1fo93tBvMsK8rwky3Mkc4j3oIITpaqiqFgLmD4KkanxMM3AjK3gsRm8Pb3SRfWyLuFJ8yuJjTo+Neto3QwkSHBjtqgfJaERwhcuxr+plFpY1calqVp/HqV2RTYCbVs0TTsXuhOT7N27l+Wry2xubQKwcWud5fwqB/btCWVHmjHRnQoRcUBHFSMZ4H1FluXBXZl5RByPPnack6deRSRnMBzhcsFl6Z4ZhmEYhmEY9xMTk5NcGUzyv3x/hd//4Z1R3+UNpdiX8cyeSRYXYePWJK+cnOL3vvY6M507I7NLt5TJI8KePV2mJmF62vGVlwdcuHnxriLphTXPxw9Dlt1ZiGI8WATn5Wx44Hvo6IaJl4bxLlDto6M1kBzJ57HZa4Zxb9mm2LigEopy+qOKrd6A4XBALiVaeTplF+cKhsMhw8EI18kZjSqGgz4XLpzl8qUlBoMB4MiLkrwzgWRZ7AEKgqK2XJMahcUgVHqUIA5CKK9R72M0PBQJNTMpGxGxKeKJofQ6dt5ExaWegBli30EwdY2bkljYgwTHpPex1Vzq67fX0uQIDUNCw4xJkTiLsom218JrRLXZx8TEFHv27OHy5UtsbfUYDkfcWLlOEYXXsuyCj6VDLgiv3lcg4FzcuwYm4rc4AAAgAElEQVRxttMpcAJFkdHpdMizAicZ9oXXMAzDMAzj/uPYscf5Z//Fv+Ty5X/+tsfMzx9m374jlKVjc+M5rvza/8j6+srbHr+4+Cj79y+SZY5PfOLvc+nLj+C9f9vjjx49RlHcOfPPeMBwEy3nZR+tbu7whgI3bigvv+xx2bt7P7JyzbN8xf/Usp/BQLm4pLz08tv//X473jzj6f0UF7PxPsQP0Godkdwi44bxc2BbxMssc/VPYCsvbG5s0utv0c1gNKxwLqPSEW7k2Njc4NatdYaDATfWVriyvMTm5gYI5HlJXnbJsoJWnXcQHtM/RCqgjlrhq+PgoSU7RLhdXfDTOCiT6CljcfD4TD2fMrkug/DYEjVTO3g4/LZzQqN4KxFeX68p7ZHaIKot6RNVyrKMwmISNtP51C3mVKmlXOl2J9mzey9nTp/i1s0bqCpbG7cYViOcc6wsX2Hv/n10J8IszHSr0n1UBfUV3bIEr9y8fpNbN9ZBhSwrLDZuGIZhGIZxHzIzM8dHP/rxn3FU+/u4vTz51F7unNB+9+OPHDnGhz507B0fbzy4iHQhmwsPfC/EX+8xqsqplRG//8Ob7J68823oj5d7XHDKH/9J9VPHI9yN9XXPiy+PuHlmg8m79PJcXh/yoxsdDk0qrqje9d7Pv1Xx2iXP/3PzFscvDu54/o2VAZvZMJpRjIceHaH+FvgR5F0kW9jpHRn
GQ8+2iJd5lkfxMky/HI08q6urzPo5hv0h1WhE1s/o9XpcXb5Gvzcgy6HX22A4GqAoLivIsoIs70T3Xxg6Xs+njAU2teynglLFz9Ooflp3iLf0TUVFcPGYoINKS0Rsf15b4p1Gl2U4vp502RIYk7vSp8i6+jDPMwmnxHmZjeWymb0pwc1ZFFn9D3RdGqTNY1+roo2IOzExRafMuXD+DKNRxeTkJJ1CmJmeYrCxwc1r15iYnKHsTuHyPMbU45xNH0TVouxy7NHH+cvBX7K5HguW8vynD1MyDMMwDMMwdpD38n3auz3Hvhd82Amx8Sheav/nIl4eOPQMJ57/HX58+QpF786/Y4PpggPzH2PpknvXfwNH1RwTU5/nxu5bfKt3p/g+ECXfs5+i8ymWlt793nv948we/i85u36KK3fb+0TGM098kEeOWPT+/YDqAB2thgdW1mMYPxe2Rbx0zoVfUSDzCJtbQ7zeoqoqvFf6gx6bGxv4yoMqRafAOcHhEJeTF13yoouIi0FwwSux5CZFt5N4F9yOoZ8n1eRofWyIsbcck3V0OzgXqQVLael0qYk8CJ1NMU/bwXlbtXi9bjzG1R+lA2rJ02u6YvrHWPHqEVGKIkecxL2CT3M40+ttbaEu80HJXMGg3+PKlSVmpmfZs3eR+YV5CvWsr91gau0G05LhZIJKhaoakRUutKFrhYrwyGMn2L13H+cvXwxi6k+JWhiGYRiGYRiG8RDgSiSbDiOx/ODnIl7+wi8c47/97/5zlpY2qO6S3HaSUZSzlOV7mak6yWj4cXq9Eyh3j4XnWZdOZwGX3cWa+TNQf4jB4J8wGN666/NF7jh4cIYTJ7rvem3jAaQlXoorkHzXDm/IMB5+tke8jLFx54J46VyGV6E/qHBO2OwNGAyGVOqp/AiHUFUeIQ/zJrOcrCgRF+ZcplKbYLyUphQnaXpJCJRGLEySW/JJJpdmekYIJTl1EQ6eJJC6pGDW4mRUEVuPmwhAI+61hc/6vzF2Hq6uY88JrfVo5mAWZYm4KKRqI1y2tdKgYWoKq4fX6j1FXqDViJtrq8zMdFmcnWIid0xNTyPO4dXT622hKrgsY9IJ4Y9dcc7R7XaYn5tlcrIkd4LIbQM3DcMwDMMwDMN4yJDgvnQTcXbfvZ95OTtb8txz+/jgB2sPyDYzE3/dCzJgT/x1J85BUYB1Wb1P8AO0iqVpkuNMvDSMe842FfY4siyvZztmLhXQuCBkesiyTmzjHqBeEXLEFeR5cF26LA8Rae9x4vDeN4JkjFKn4eF1sU6MU3vVKP7FCHfd/t0u3dH6H0mlObb+vESPZGwMb/S7VolO61NJRCXGxUMxELXoWMe862IhkmE0CKciiIbPl2UZhMxmg/V165j5WPGPr/ec5zndTpfBYMBgOEIlQ8XhsiIUArnwy1c+/JRRgtgsCMORp7e1xcREyd49u8iLLIqjhmEYhmEYhmE81EgJWYmONsD3fi6XzDKYmPi5XMow7hlKhVab4YFk4KZ3dkOG8T5ge8RLJ9F+H0QxieIlMb6Nhs/lTnBk+Koiz3PKskNW5LGgx6FeqahiJNxH76I2ImVbvBQJsev2fMsWSoiLh1mPzTzLoAs2OewkYtb+SKH1OB1LXdLTjpA3bkxJVtD6Gq7ep7YKhxweD6I4jQKqh6Io6rW01ip1/HF8UBfuqFIUJVlRUHYmKMoJpmcXwOVUvmJzs0d3tsJXHnGOvCjC/XY5mcsRHBUjrt9Yoz8YMDc/h3NtD6thGIZhGIZhGA8t4giOQgVGO7wZw3iAUE/zvxmHSLGTuzGM9wXbExuXLAiWBBemiEOcNkXg0USYSYbLBfKSvMiDW9O5OGtFUdFQrO2kLq7RZFf0EETIVixcAXwU/sYFSUgxbR2fOR6dlRqFUJFm1mVcuNlzDJ579e3TawdlLfWJ1CFx14qqN5HvZval1v8VUrVQEC+Tc9PHMiLGdMT6FtCUBOWdgk6nS1F0EMmZmV2kEI/zfUaq9Ho9OlMjXJaTF0VwYYoLYjLh95u3brK0fIWV69eoKm/apWEYhmEYhmG8L5AoYGoUYwzDeGcoaGqtl+C+NAzjnrJNhT0ZTsLMRpfmUCYBUUJFjaQ4dhZizC5zzaxJVXwtQoJWikqIV4dS7tbsypTpTr09kiGqsYinLTI2TsbGRdmegXnnrMvkbpT2sMnaTRnlR43R7fFmHpptadRQkyjaqJ06Jqy6WgEti7IRLyXNtgz3s1k3Tchs4uVl2WFqapo8L6gUZucWKfOM3sYNnCjiMrz3sV3cg1aoVnjNQZStrS1OnznDtRvXuXxlhf7Ah+IgwzAMwzAMwzAechxCFk0h1c8+3DCMiImXhvHzZptmXkYXobSdjEmYDHMfIc5/dC5Evomx7jjDEQ0OTlTCP56uEenS8+la0ZQZhcBGSEyjIlVSi7jUse2wkLack8kWShD2xkp5aJ7zKSIeBEnSsumQWozUlqgpSMs9KrHEJ03grH/3ihOJzsu0irZi4+n4lsuzRVGUTE7P0J2YZFBVTExPMzM1zQ2gv7WOSoHX8A3JaDQMc0FdhTjPqKo4e/Y0p988ydrNNfrDIV4cKlm7icgwDMMwDMMwjIcQEdc4L9+modswjLvREvxFQLZFVjEM46ewTeLluJCYHIKC4NHovGyFqVVQFbxIiCgkIdJXUQD0MdYcFlf1dfmO91FEbLkla9FRfZixGWdcJglO44Euzd+sP08zLFPSntOLas5tWsIbXW8sEh7Ld9Iem7mVTXnPuOuyuUSe52RZ1r5gy9CZGtNBJc27TGKskGU5k5PTTM3MUXplcmqa2blFbt1cZ/n8BboTPfZnZXDGupyu6yDSoapg9doqr/zoB+A9Tz3+NBeXLlHm5Xv54zcMwzAMwzAM44GjXRBq4qVhvHNuF/zN/GMY95pt+hGB1mJicjIKUs+XRFJxjSLqohNR8ApIcD1mIrFcJsWzY6M2bkxkTCJgEAZdM0cy2TGB232KdcK7FhKTg7NxXKZCH1rXqp2kresGfLi2jytLEhepm8WVxuUZLn2bQ1TDfXFOyJyLzs40ebPtAI2CJR7VdgM7uCxjYnKa2bkFVISJzgR5WVCpcmHpMoP+FlevLnPwwEEWFxYZLQij6haXLl3m2pXLnHnrNJJlfOCpZzl68DFmJqfizE7DMAzDMAzDMN4/2OB7w3jv2Htow7jXbJu/Oel0qZAGfCtKDl4VVx8JaBQAa/EwnJcmO4aH8fMuFPkkac/7OJsxCp9JDWzPrKwD2kJ9jTS9Mp3SFjXrLzct4bJtw0yFO2nuJVGcHHN30nKBtm6MRKekEu6DpBg6gssynAuSYegkas3b1JbbMm1Nqd2XzmV0u13m5hcQl1GWXQShqirWNza4vrLM8vIlTp86yfzCIrt27WFUeU6fPoWvRqzcuMbk1DQzswscPHAEKXJcLF4yDMMwDMMwDMN4J9y8eZP/8Fd/yb/7sz+lKMbn/3mvTE1N8/kv/DK/9vf/wT3bw/lzb/FHf/gHvPLKy5TF+NvcyivHjj/OP/1nv8X+/fvvyfXX19f52l
[... base64-encoded figure data truncated ...]">
<br>
<center><figcaption><b>Figure 4.</b> DCNN with Atrous Convolutions & Atrous Spatial Pyramid Pooling</figcaption></center>
</figure>_____no_output_____## Building a DeepLab V3 model
_____no_output_____The step-by-step process of building a DeepLab network is given below (a minimal code sketch follows the list):
- Features are extracted from a backbone network (VGG, DenseNet, ResNet)
- To control the size of the feature map, atrous convolution is used in the last few blocks of the backbone
- On top of the features extracted from the backbone, the ASPP network is added to classify each pixel into its corresponding class
- The output from the ASPP network is passed through a 1 x 1 convolution that produces the per-pixel class scores, which are then upsampled to the original image size to form the final segmentation mask
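To make the list above concrete, below is a minimal, illustrative PyTorch sketch of an ASPP module followed by the 1 x 1 classifier and final upsampling. It is a reconstruction under stated assumptions, not the implementation behind this guide: the atrous rates (6, 12, 18) and the 256 intermediate channels follow the defaults reported in [3], while the backbone, its 2048 output channels, and the 21-class example are placeholders._____no_output_____
<code>
# Illustrative sketch (assumptions noted above), not this guide's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_channels, out_channels=256, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus one 3x3 atrous branch per rate; padding=rate keeps
        # the spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, 1, bias=False)]
            + [nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r, bias=False)
               for r in rates])
        # Image-level pooling branch captures global context.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, out_channels, 1, bias=False))
        # Fuse the concatenated branches back down to `out_channels` maps.
        self.project = nn.Conv2d(out_channels * (len(rates) + 2), out_channels, 1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=size,
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

class DeepLabV3Head(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.aspp = ASPP(in_channels)
        self.classifier = nn.Conv2d(256, num_classes, 1)  # per-pixel class scores

    def forward(self, features, image_size):
        logits = self.classifier(self.aspp(features))
        # Upsample the coarse logits back to the input resolution (final mask).
        return F.interpolate(logits, size=image_size,
                             mode="bilinear", align_corners=False)

# Example: hypothetical backbone features with 2048 channels at 1/16 resolution.
head = DeepLabV3Head(in_channels=2048, num_classes=21)
feats = torch.randn(1, 2048, 32, 32)
mask_logits = head(feats, image_size=(512, 512))  # -> shape (1, 21, 512, 512)_____no_output_____
</code>
An argmax over the class dimension of the returned logits then yields the per-pixel label map, i.e., the segmentation mask described in the last step of the list.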
_____no_output_____### References:_____no_output_____[1] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. Yuille, Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, arXiv:1412.7062, 2016
[2] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. Yuille, DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, arXiv:1606.00915, 2017
[3] L. Chen, G. Papandreou, F. Schroff, H. Adam, Rethinking Atrous Convolution for Semantic Image Segmentation, arXiv:1706.05587, 2017
[4] Sik-Ho Tsang, Review: DilatedNet — Dilated Convolution (Semantic Segmentation), https://towardsdatascience.com/review-dilated-convolution-semantic-segmentation-9d5a5bd768f5, Accessed 10 November 2019
[5] Beeren Sahu, The Evolution of Deeplab for Semantic Segmentation, https://towardsdatascience.com/the-evolution-of-deeplab-for-semantic-segmentation-95082b025571, Accessed 21 February 2020
[6] Sik-Ho Tsang, Review: DeepLabv3 — Atrous Convolution (Semantic Segmentation), https://towardsdatascience.com/review-deeplabv3-atrous-convolution-semantic-segmentation-6d818bfd1d74, Accessed 21 February 2020
[7] Saurabh Pal, Semantic Segmentation: Introduction to the Deep Learning Technique Behind Google Pixel’s Camera!, https://www.analyticsvidhya.com/blog/2019/02/tutorial-semantic-segmentation-google-deeplab/, Accessed 21 February 2020
_____no_output_____
| {
"repository": "markjdugger/arcgis-python-api",
"path": "guide/14-deep-learning/how_deeplabv3_works.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 492128,
"hexsha": "cb1305d8e00b5ac7da3b99258723e9797d60aa33",
"max_line_length": 321963,
"avg_line_length": 2050.5333333333,
"alphanum_fraction": 0.9590147279
} |
# Notebook from csavur/biosignalsnotebooks
Path: biosignalsnotebooks_notebooks/Categories/Train_And_Classify/classification_game_volume_2.ipynb
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_7"><div id="image_img"
class="header_image_7"></div></td>
<td class="header_text"> Rock, Paper or Scissor Game - Train and Classify [Volume 2] </td>
</tr>
</table>_____no_output_____<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">train_and_classify☁machine-learning☁features☁extraction</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>_____no_output_____<span class="color4"><strong>Previous Notebooks that are part of "Rock, Paper or Scissor Game - Train and Classify" module</strong></span>
<ul>
<li><a href="classification_game_volume_1.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 1] | Experimental Setup <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
</ul>
<span class="color7"><strong>Following Notebooks that are part of "Rock, Paper or Scissor Game - Train and Classify" module</strong></span>
<ul>
<li><a href="classification_game_volume_3.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 3] | Training a Classifier <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
<li><a href="../Evaluate/classification_game_volume_4.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 4] | Performance Evaluation <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
</ul>
<table width="100%">
<tr>
<td style="text-align:left;font-size:12pt;border-top:dotted 2px #62C3EE">
<span class="color1">☌</span> After the presentation of data acquisition conditions on the previous <a href="classification_game_volume_1.ipynb">Jupyter Notebook <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>, we will follow our Machine Learning Journey by specifying which features will be extracted.
<br>
"Features" are numerical parameters extracted from the training data (in our case physiological signals acquired when executing gestures of "Rock, Paper or Scissor" game), characterizing objectively the training example.
A good feature is a parameter that has the ability to separate the different classes of our classification system, i.e, a parameter with a characteristic range of values for each available class.
</td>
</tr>
</table>
<hr>_____no_output_____<p style="font-size:20pt;color:#62C3EE;padding-bottom:5pt">Starting Point (Setup)</p>
<strong>List of Available Classes:</strong>
<br>
<ol start="0">
<li><span class="color1"><strong>"No Action"</strong></span> [When the hand is relaxed]</li>
<li><span class="color4"><strong>"Paper"</strong></span> [All fingers are extended]</li>
<li><span class="color7"><strong>"Rock"</strong></span> [All fingers are flexed]</li>
<li><span class="color13"><strong>"Scissor"</strong></span> [Forefinger and middle finger are extended and the remaining ones are flexed]</li>
</ol>
<table align="center">
<tr>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_paper.png" style="display:block;height:100%">
</td>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_stone.png" style="display:block;height:100%">
</td>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_scissor.png" style="display:block;height:100%">
</td>
</tr>
<tr>
<td style="text-align:center">
<strong>Paper</strong>
</td>
<td style="text-align:center">
<strong>Rock</strong>
</td>
<td style="text-align:center">
<strong>Scissor</strong>
</td>
</tr>
</table>
<strong>Acquired Data:</strong>
<br>
<ul>
<li>Electromyography (EMG) | 2 muscles | Adductor pollicis and Flexor digitorum superficialis</li>
<li>Accelerometer (ACC) | 1 axis | Sensor parallel to the thumb nail (Axis perpendicular)</li>
</ul>_____no_output_____<p style="font-size:20pt;color:#62C3EE;padding-bottom:5pt">Protocol/Feature Extraction</p>
<strong>Extracted Features</strong>
<ul>
<li><span style="color:#E84D0E"><strong>[From] EMG signal</strong></span></li>
<ul>
<li>Standard Deviation ☆</li>
<li>Maximum sampled value ☝</li>
<li><a href="https://en.wikipedia.org/wiki/Zero-crossing_rate">Zero-Crossing Rate</a> ☌</li>
<li>Standard Deviation of the absolute signal ☇</li>
</ul>
<li><span style="color:#FDC400"><strong>[From] ACC signal</strong></span></li>
<ul>
<li>Average Value ☉</li>
<li>Standard Deviation ☆</li>
<li>Maximum sampled value ☝</li>
<li><a href="https://en.wikipedia.org/wiki/Zero-crossing_rate">Zero-Crossing Rate</a> ☌</li>
<li><a href="https://en.wikipedia.org/wiki/Slope">Slope of the regression curve</a> ☍</li>
</ul>
</ul>
<strong>Formal definition of parameters</strong>
<br>
☝ | Maximum Sample Value of a set of elements is equal to the last element of the sorted set
☉ | $\mu = \frac{1}{N}\sum_{i=1}^N (sample_i)$
☆ | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(sample_i - \mu_{signal})^2}$
☌ | $zcr = \frac{1}{N - 1}\sum_{i=1}^{N-1}bin(i)$
☇ | $\sigma_{abs} = \sqrt{\frac{1}{N}\sum_{i=1}^N(|sample_i| - \mu_{signal_{abs}})^2}$
☍ | $m = \frac{\Delta signal}{\Delta t}$
... being $N$ the number of acquired samples (that are part of the signal), $sample_i$ the value of the sample number $i$, $signal_{abs}$ the absolute signal, $\Delta signal$ is the difference between the y coordinate of two points of the regression curve and $\Delta t$ the difference between the x (time) coordinate of the same two points of the regression curve.
... and
$bin(i)$ a binary function defined as:
$bin(i) = \begin{cases} 1, & \mbox{if } signal_i \times signal_{i-1} \leq 0 \\ 0, & \mbox{if } signal_i \times signal_{i-1}>0 \end{cases}$
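For intuition, the $zcr$ definition above can be verified numerically on a hypothetical toy signal (an illustrative example, not part of the acquired data):_____no_output_____
<code>
# [Illustrative example] Checking the zcr definition on a hypothetical toy signal.
toy_signal = [1, -2, 3, 4, -5]
N = len(toy_signal)
# bin(i) = 1 whenever two consecutive samples have opposite signs (or touch zero).
zcr = sum([1 for i in range(1, N) if toy_signal[i] * toy_signal[i - 1] <= 0]) / (N - 1)
print(zcr)  # 3 sign changes across 4 transitions -> 0.75_____no_output_____
</code>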
<hr>_____no_output_____<p class="steps">0 - Import of the needed packages for a correct execution of the current <span class="color4">Jupyter Notebook</span></p>_____no_output_____
<code>
# Package that ensures a programmatic interaction with the operating system folder hierarchy.
from os import listdir
# Package used to clone a dictionary.
from copy import deepcopy
# Functions intended to extract some statistical parameters.
from numpy import max, std, average, sum, absolute
# With the following import we will be able to extract the linear regression parameters after
# fitting experimental points to the model.
from scipy.stats import linregress
# biosignalsnotebooks own package that supports some functionalities used on the Jupyter Notebooks.
import biosignalsnotebooks as bsnb_____no_output_____
</code>
<p class="steps">1 - Loading of all signals that integrates our training samples (storing them inside a dictionary)</p>
The acquired signals are stored inside a folder which can be accessed through a relative path <span class="color7">"../../signal_samples/classification_game/data"</span>_____no_output_____<p class="steps">1.1 - Identification of the list of files/examples</p>_____no_output_____
<code>
# Transposition of data from signal files to a Python dictionary.
relative_path = "../../signal_samples/classification_game"
data_folder = "data"
# List of files (each file is a training example).
list_examples = listdir(relative_path + "/" + data_folder)_____no_output_____print(list_examples)_____no_output_____
</code>
The first digit of the filename identifies the class to which the training example belongs, and the second digit is the trial number <span class="color1">(<i><class>_<trial>.txt</i>)</span>_____no_output_____<p class="steps">1.2 - Access the content of each file and store it in the respective dictionary entry</p>_____no_output_____
<code>
# Initialization of dictionary.
signal_dict = {}
# Scrolling through each entry in the list.
for example in list_examples:
if ".txt" in example: # Read only .txt files.
# Get the class to which the training example under analysis belong.
example_class = example.split("_")[0]
# Get the trial number of the training example under analysis.
example_trial = example.split("_")[1].split(".")[0]
# Creation of a new "class" entry if it does not exist.
if example_class not in signal_dict.keys():
signal_dict[example_class] = {}
# Load data.
complete_data = bsnb.load(relative_path + "/" + data_folder + "/" + example)
# Store data in the dictionary.
signal_dict[example_class][example_trial] = complete_data_____no_output_____
</code>
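A quick, illustrative peek at the hierarchy that was just built (not part of the original notebook):_____no_output_____
<code>
# [Illustrative] Each class entry maps trial numbers to the loaded data.
for class_i in signal_dict:
    print(class_i, "->", sorted(signal_dict[class_i].keys()))_____no_output_____
</code>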
<p class="steps">1.3 - Definition of the content of each channel</p>_____no_output_____
<code>
# Channels (CH1 Flexor digitorum superficialis | CH2 Adductor pollicis | CH3 Accelerometer axis Z).
emg_flexor = "CH1"
emg_adductor = "CH2"
acc_z = "CH3"_____no_output_____
</code>
<p class="steps">2 - Extraction of features according to the signal under analysis</p>
The extracted values of each feature will be stored in a dictionary with the same hierarchical structure as "signal_dict"_____no_output_____
<code>
# Clone "signal_dict".
features_dict = deepcopy(signal_dict)
# Navigate through "signal_dict" hierarchy.
list_classes = signal_dict.keys()
for class_i in list_classes:
list_trials = signal_dict[class_i].keys()
for trial in list_trials:
# Initialise "features_dict" entry content.
features_dict[class_i][trial] = []
for chn in [emg_flexor, emg_adductor, acc_z]:
# Temporary storage of signal inside a reusable variable.
signal = signal_dict[class_i][trial][chn]
            # Start the feature extraction procedure according to the channel under analysis.
if chn == emg_flexor or chn == emg_adductor: # EMG Features.
# Converted signal (taking into consideration that our device is a "biosignalsplux", the resolution is
# equal to 16 bits and the output unit should be in "mV").
signal = bsnb.raw_to_phy("EMG", device="biosignalsplux", raw_signal=signal, resolution=16, option="mV")
# Standard Deviation.
features_dict[class_i][trial] += [std(signal)]
# Maximum Value.
features_dict[class_i][trial] += [max(signal)]
# Zero-Crossing Rate.
features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal))
if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]
# Standard Deviation of the absolute signal.
features_dict[class_i][trial] += [std(absolute(signal))]
else: # ACC Features.
# Converted signal (taking into consideration that our device is a "biosignalsplux", the resolution is
# equal to 16 bits and the output unit should be in "g").
signal = bsnb.raw_to_phy("ACC", device="biosignalsplux", raw_signal=signal, resolution=16, option="g")
# Average value.
features_dict[class_i][trial] += [average(signal)]
# Standard Deviation.
features_dict[class_i][trial] += [std(signal)]
# Maximum Value.
features_dict[class_i][trial] += [max(signal)]
# Zero-Crossing Rate.
features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal))
if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]
# Slope of the regression curve.
x_axis = range(0, len(signal))
features_dict[class_i][trial] += [linregress(x_axis, signal)[0]]_____no_output_____
</code>
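Before detailing the structure of each training array, a small illustrative sanity check (not in the original notebook): every trial should now hold 4 features per EMG channel plus 5 accelerometer features, i.e., 13 values in total._____no_output_____
<code>
# [Illustrative sanity check] 4 (EMG flexor) + 4 (EMG adductor) + 5 (ACC) = 13 features.
for class_i in features_dict:
    for trial in features_dict[class_i]:
        assert len(features_dict[class_i][trial]) == 13_____no_output_____
</code>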
Each training array has the following structure/content:
<br>
\[$\sigma_{emg\,flexor}$, $max_{emg\,flexor}$, $zcr_{emg\,flexor}$, $\sigma_{emg\,flexor}^{abs}$, $\sigma_{emg\,adductor}$, $max_{emg\,adductor}$, $zcr_{emg\,adductor}$, $\sigma_{emg\,adductor}^{abs}$, $\mu_{acc\,z}$, $\sigma_{acc\,z}$, $max_{acc\,z}$, $zcr_{acc\,z}$, $m_{acc\,z}$\] _____no_output_____<p class="steps">3 - Storage of the content inside the filled "features_dict" to an external file (<a href="https://fileinfo.com/extension/json">.json <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>)</p>
With this procedure it is possible to ensure a "permanent" record of the results produced during feature extraction, reusable in the future by simply reading the file (without the need to reprocess the data)._____no_output_____
<code>
# Package dedicated to the manipulation of json files.
from json import dump
filename = "classification_game_features.json"
# Generation of .json file in our previously mentioned "relative_path".
# [Generation of new file]
with open(relative_path + "/features/" + filename, 'w') as file:
dump(features_dict, file)_____no_output_____
</code>
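As a quick illustration (a sketch, reusing the `relative_path` and `filename` variables defined above), the stored dictionary can later be restored with `json.load`:_____no_output_____
<code>
# Package function dedicated to reading json files.
from json import load
# Rebuild the features dictionary from the previously generated .json file.
with open(relative_path + "/features/" + filename, 'r') as file:
    features_dict_restored = load(file)_____no_output_____
</code>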
We have reached the end of the second volume of the "Classification Game". All the features of the training examples are now in our possession.
If you are feeling your interest increasing, please jump to the next <a href="../Train_and_Classify/classification_game_volume_3.ipynb">volume <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>
<strong><span class="color7">We hope that you have enjoyed this guide. </span><span class="color2">biosignalsnotebooks</span><span class="color4"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href="../MainFiles/biosignalsnotebooks.ipynb">Notebooks <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a></span></strong> !_____no_output_____<span class="color6">**Auxiliary Code Segment (should not be replicated by
the user)**</span>_____no_output_____
<code>
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()_____no_output_____%%html
<script>
// AUTORUN ALL CELLS ON NOTEBOOK-LOAD!
require(
['base/js/namespace', 'jquery'],
function(jupyter, $) {
$(jupyter.events).on("kernel_ready.Kernel", function () {
console.log("Auto-running all cells-below...");
jupyter.actions.call('jupyter-notebook:run-all-cells-below');
jupyter.actions.call('jupyter-notebook:save-notebook');
});
}
);
</script>_____no_output_____
</code>
| {
"repository": "csavur/biosignalsnotebooks",
"path": "biosignalsnotebooks_notebooks/Categories/Train_And_Classify/classification_game_volume_2.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 21311,
"hexsha": "cb13c0df0bac5a4fa86ad7bcdde3ee34805a8c1f",
"max_line_length": 443,
"avg_line_length": 41.7862745098,
"alphanum_fraction": 0.5507484398
} |
# Notebook from hossainlab/dsnotes
Path: book/pandas/08-Filtering Rows.ipynb
# Filtering Rows _____no_output_____
<code>
# import pandas
import pandas as pd _____no_output_____# read movie data
movies = pd.read_csv("http://bit.ly/imdbratings")_____no_output_____# examine first few rows
movies.head() _____no_output_____
</code>
## Filtering Movies with `for` Loop_____no_output_____
<code>
booleans = []
for length in movies.duration:
if length >= 200:
booleans.append(True)
else:
booleans.append(False)_____no_output_____# Check length of booleans
len(booleans)_____no_output_____# Inspect booleans elements
booleans[0:5]_____no_output_____# create a pandas series
is_long = pd.Series(booleans)_____no_output_____# Inspect few values
is_long.head() _____no_output_____# show all columns for movies with a duration of at least 200 minutes
movies[is_long]_____no_output_____
</code>
## Filtering by Condition_____no_output_____
<code>
# filtering by conditions
is_long = movies.duration >= 200
is_long.head() _____no_output_____# show the rows with duration >= 200
movies[is_long]_____no_output_____
</code>
## Filtering in DataFrame _____no_output_____
<code>
# filtering by columns
movies[movies.duration >= 200]_____no_output_____# select only genre
movies[movies.duration >= 200].genre_____no_output_____# same as above
movies[movies.duration >= 200]['genre']_____no_output_____# select columns by label
movies.loc[movies.duration >= 200, 'genre']_____no_output_____
</code>
## Multiple Filtering Criteria_____no_output_____
<code>
# True and True == True
# True and False == False
# True or True == True
# True or False == True
# False or False == False
# Note: in a notebook cell, only the value of the last expression below is displayed.
True and True
True and False
True or True
True or False_____no_output_____# multiple criteria
movies[(movies.duration >= 200) & (movies.genre == 'Drama')]_____no_output_____# multiple criteria
movies[(movies.duration >= 200) | (movies.genre == 'Drama')]_____no_output_____# multiple or conditions
movies[(movies.genre == "Crime") | (movies.genre == 'Drama') | (movies.genre == "Action")]_____no_output_____# multiple or using isin() method
movies.genre.isin(["Drama", "Action", "Crime"])_____no_output_____# pass the series in DataFrame
movies[movies.genre.isin(["Drama", "Action", "Crime"])]_____no_output_____
</code>
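As a related aside (not part of the original lesson), a boolean mask can be inverted with `~`, and `Series.between()` condenses a range check:_____no_output_____
<code>
# movies NOT in any of the three genres above
movies[~movies.genre.isin(["Drama", "Action", "Crime"])]_____no_output_____# movies with durations between 150 and 200 minutes (inclusive)
movies[movies.duration.between(150, 200)]_____no_output_____
</code>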
<h3>About the Author</h3>
This repo was created by <a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> <br>
<a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> is a student of Microbiology at Jagannath University and the founder of <a href="https://github.com/hdro" target="_blank">Health Data Research Organization</a>. He is also a team member of a bioinformatics research group known as Bio-Bio-1.
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.m_____no_output_____
| {
"repository": "hossainlab/dsnotes",
"path": "book/pandas/08-Filtering Rows.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 61128,
"hexsha": "cb143331a02468105873a81b67fde5847b586b10",
"max_line_length": 415,
"avg_line_length": 34.3802024747,
"alphanum_fraction": 0.3620272216
} |
# Notebook from PacktPublishing/Hands-On-Artificial-Intelligence-for-IoT
Path: Chapter05/GuessTheWord.ipynb
<code>
import string
import random
from deap import base, creator, tools_____no_output_____## Create a Fitness base class which is to be maximized
# weights is a tuple: -1.0 to minimize, +1.0 to maximize
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
_____no_output_____
</code>
This will define a class ```FitnessMax``` which inherits the Fitness class of the deap.base module. The attribute ```weights```, a tuple, is used to specify whether the fitness function is to be maximized (weights=(1.0,)) or minimized (weights=(-1.0,)). The DEAP library allows multi-objective fitness functions.
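For example, a hypothetical multi-objective fitness that maximizes a first objective while minimizing a second (not used in this notebook) could be created as:_____no_output_____
<code>
# Hypothetical: maximize objective 1, minimize objective 2
creator.create("FitnessMulti", base.Fitness, weights=(1.0, -1.0))_____no_output_____
</code>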
### Individual
Next we create an ```Individual``` class, which inherits the class ```list``` and has the ```FitnessMax``` class in its fitness attribute. _____no_output_____
<code>
# Now we create an Individual class
creator.create("Individual", list, fitness=creator.FitnessMax)_____no_output_____
</code>
### Population
Once the individuals are created, we need to create a population and define the gene pool; to do this we use the DEAP toolbox. All the objects that we will need from now on (an individual, the population, the functions, the operators and the arguments) are stored in the container called ```Toolbox```.
We can add or remove content in the container ```Toolbox``` using the ```register()``` and ```unregister()``` methods._____no_output_____
<code>
toolbox = base.Toolbox()
# Gene Pool
toolbox.register("attr_string", random.choice, string.ascii_letters + string.digits )_____no_output_____#Number of characters in word
word = list('hello')
N = len(word)
# Initialize population
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_string, N )
toolbox.register("population",tools.initRepeat, list, toolbox.individual)_____no_output_____def evalWord(individual, word):
#word = list('hello')
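    # The trailing comma on the next line returns a one-element tuple: DEAP expects fitness values to be tuples.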
return sum(individual[i] == word[i] for i in range(len(individual))),
_____no_output_____toolbox.register("evaluate", evalWord, word)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)_____no_output_____
</code>
We define the other operators/functions we will need by registering them in the toolbox. This allows us to easily switch between the operators if desired.
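For instance, a hypothetical swap of the registered two-point crossover for a one-point crossover (not performed in this notebook) would look like:_____no_output_____
<code>
# Hypothetical example: remove the current "mate" operator and register another one
toolbox.unregister("mate")
toolbox.register("mate", tools.cxOnePoint)_____no_output_____
</code>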
## Evolving the Population
Once the representation and the genetic operators are chosen, we will define an algorithm combining all the individual parts and performing the evolution of our population until the target word is guessed. It is good style in programming to do so within a function, generally named main().
### Creating the Population
First of all, we need to actually instantiate our population. But this step is effortlessly done using the population() method we registered in our toolbox earlier on._____no_output_____
<code>
def main():
random.seed(64)
# create an initial population of 300 individuals (where
# each individual is a list of integers)
pop = toolbox.population(n=300)
# CXPB is the probability with which two individuals
# are crossed
#
# MUTPB is the probability for mutating an individual
CXPB, MUTPB = 0.5, 0.2
print("Start of evolution")
# Evaluate the entire population
fitnesses = list(map(toolbox.evaluate, pop))
for ind, fit in zip(pop, fitnesses):
#print(ind, fit)
ind.fitness.values = fit
print(" Evaluated %i individuals" % len(pop))
# Extracting all the fitnesses of
fits = [ind.fitness.values[0] for ind in pop]
# Variable keeping track of the number of generations
g = 0
# Begin the evolution
while max(fits) < 5 and g < 1000:
# A new generation
g = g + 1
print("-- Generation %i --" % g)
# Select the next generation individuals
offspring = toolbox.select(pop, len(pop))
# Clone the selected individuals
offspring = list(map(toolbox.clone, offspring))
# Apply crossover and mutation on the offspring
for child1, child2 in zip(offspring[::2], offspring[1::2]):
# cross two individuals with probability CXPB
if random.random() < CXPB:
toolbox.mate(child1, child2)
# fitness values of the children
# must be recalculated later
del child1.fitness.values
del child2.fitness.values
for mutant in offspring:
# mutate an individual with probability MUTPB
if random.random() < MUTPB:
toolbox.mutate(mutant)
del mutant.fitness.values
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
print(" Evaluated %i individuals" % len(invalid_ind))
# The population is entirely replaced by the offspring
pop[:] = offspring
# Gather all the fitnesses in one list and print the stats
fits = [ind.fitness.values[0] for ind in pop]
length = len(pop)
mean = sum(fits) / length
sum2 = sum(x*x for x in fits)
std = abs(sum2 / length - mean**2)**0.5
print(" Min %s" % min(fits))
print(" Max %s" % max(fits))
print(" Avg %s" % mean)
print(" Std %s" % std)
print("-- End of (successful) evolution --")
best_ind = tools.selBest(pop, 1)[0]
print("Best individual is %s, %s" % (''.join(best_ind), best_ind.fitness.values))_____no_output_____main()Start of evolution
Evaluated 300 individuals
-- Generation 1 --
Evaluated 178 individuals
Min 0.0
Max 2.0
Avg 0.22
Std 0.4526956299030656
-- Generation 2 --
Evaluated 174 individuals
Min 0.0
Max 2.0
Avg 0.51
Std 0.613650280425803
-- Generation 3 --
Evaluated 191 individuals
Min 0.0
Max 3.0
Avg 0.9766666666666667
Std 0.6502221842484989
-- Generation 4 --
Evaluated 167 individuals
Min 0.0
Max 4.0
Avg 1.45
Std 0.6934214687571574
-- Generation 5 --
Evaluated 191 individuals
Min 0.0
Max 4.0
Avg 1.9833333333333334
Std 0.7765665171481163
-- Generation 6 --
Evaluated 168 individuals
Min 0.0
Max 4.0
Avg 2.48
Std 0.7678541528180985
-- Generation 7 --
Evaluated 192 individuals
Min 1.0
Max 5.0
Avg 3.013333333333333
Std 0.6829999186595044
-- End of (successful) evolution --
Best individual is hello, (5.0,)
</code>
| {
"repository": "PacktPublishing/Hands-On-Artificial-Intelligence-for-IoT",
"path": "Chapter05/GuessTheWord.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 95,
"size": 10241,
"hexsha": "cb16d95088fd269cbb56752c468185563d5f0ae6",
"max_line_length": 305,
"avg_line_length": 31.5107692308,
"alphanum_fraction": 0.5341275266
} |
# Notebook from cumc/xqtl-pipeline
Path: code/commands_generator/eQTL_analysis_commands.ipynb
# Bulk RNA-seq eQTL analysis
This notebook provides a command generator for the XQTL workflow, automating data preprocessing and association testing on multiple data collections as proposed._____no_output_____
<code>
%preview ../images/eqtl_command.png_____no_output_____
</code>
This master control notebook mainly serves the 8-tissue snuc_bulk_expression analysis, but it should work for any analysis where the expression data are a tsv table in a bed.gz-like format.
Input:
A recipe file, in which each row is a data collection, with the following columns:
- `Theme`: name of the dataset; must be unique. Each uni_study analysis will be performed in a folder named after its Theme, and meta analysis will be performed in a folder named {study1}_{study2}. This column must contain the # and be the first column.
- `genotype_file`: {Path to a whole genome genotype file}
- `molecular_pheno`: {Path to file}
- `covariate_file`: {Path to file}
### Note: only data collections from the same populations and conditions will be merged to perform fixed-effect meta analysis
A genotype list, with two columns, `#chr` and `path`. This can be generated by the genotype section of this command generator; an example is sketched below.
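For illustration, a hypothetical genotype list could look like this (tab-separated; the paths below are placeholders, not real files):
```
#chr	path
1	output/data_preprocessing/example_genotype.1.bed
2	output/data_preprocessing/example_genotype.2.bed
```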
Output:
One set of association_scan results for each tissue (one per row of the recipe)._____no_output_____
<code>
pd.DataFrame({"Theme":"MWE","molecular_pheno":"MWE.log2cpm.tsv","genotype_file":"MWE.bed","covariate_file":"MWE.covariate.cov.gz"}).to_csv("/mnt/vast/hpc/csg/snuc_pseudo_bulk/eight_tissue_analysis/MWE/command_generator",sep = "\t",index = 0)_____no_output_____
</code>
| Theme | molecular_pheno | genotype_file | covariate_file |
| ----------- | ----------- | ----------- | ----------- |
| MWE | MWE.log2cpm.tsv | /data/genotype_data/GRCh38_liftedover_sorted_all.add_chr.leftnorm.filtered.bed | MWE.covariate.cov.gz |_____no_output_____## Minimal Working Example_____no_output_____### Genotype
The MWE for the genotype section can be run with the following commands; please note that a [separate MWE genoFile]( https://drive.google.com/file/d/1zaacRlZ63Nf_oEUv2nIiqekpQmt2EDch/view?usp=sharing) is needed._____no_output_____
<code>
sos run pipeline/eQTL_analysis_commands.ipynb plink_per_chrom \
--ref_fasta reference_data/GRCh38_full_analysis_set_plus_decoy_hla.noALT_noHLA_noDecoy_ERCC.fasta \
--genoFile mwe_genotype.vcf.gz \
--dbSNP_vcf reference_data/00-All.vcf.gz \
--sample_participant_lookup reference_data/sampleSheetAfterQC.txt -n _____no_output_____
</code>
### Per tissue analysis
A MWE for the core per-tissue analysis can be run with the following commands; a complete collection of input files, as well as intermediate outputs of the analysis, can be found [here](https://drive.google.com/drive/folders/16ZUsciZHqCeeEWwZQR46Hvh5OtS8lFtA?usp=sharing). _____no_output_____
<code>
sos run pipeline/eQTL_analysis_commands.ipynb sumstat_merge \
--recipe MWE.recipe \
--genotype_list plink_files_list.txt \
--annotation_gtf reference_data/genes.reformatted.gene.gtf \
--sample_participant_lookup reference_data/sampleSheetAfterQC.txt \
--Association_option "TensorQTL" -n _____no_output_____sos run pipeline/eQTL_analysis_commands.ipynb sumstat_merge \
--recipe MWE.recipe \
--genotype_list plink_files_list.txt \
--annotation_gtf /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/genes.reformatted.gene.gtf \
--sample_participant_lookup /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/sampleSheetAfterQC.txt \
--Association_option "APEX" -n _____no_output_____
</code>
## Example for running the workflow
This will run the workflow via several submissions_____no_output_____
<code>
sos run ~/GIT/xqtl-pipeline/pipeline/eQTL_analysis_commands.ipynb sumstat_merge \
--recipe /mnt/vast/hpc/csg/snuc_pseudo_bulk//data/recipe_8tissue_new \
--genotype_list /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/genotype_qced/plink_files_list.txt \
--annotation_gtf /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/genes.reformatted.gene.gtf \
--sample_participant_lookup /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/sampleSheetAfterQC.txt \
--Association_option "TensorQTL" --run &_____no_output_____sos run ~/GIT/xqtl-pipeline/pipeline/eQTL_analysis_commands.ipynb sumstat_merge \
--recipe <(cat /mnt/vast/hpc/csg/snuc_pseudo_bulk//data/recipe_8tissue_new | head -2) \
--genotype_list /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/genotype_qced/plink_files_list.txt \
--annotation_gtf /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/genes.reformatted.gene.gtf \
--sample_participant_lookup /mnt/vast/hpc/csg/snuc_pseudo_bulk/data/reference_data/sampleSheetAfterQC.txt \
--factor_option "PEER" --Association_option "TensorQTL" -n_____no_output_____[global]
## The aforementioned input recipe
parameter: recipe = path(".") # Added option to run genotype part without the recipe input, which was not used.
## Overall wd, the file structure of analysis is wd/[steps]/[sub_dir for each steps]
parameter: cwd = path("output")
## Diretory to the excutable
parameter: exe_dir = path("~/GIT/xqtl-pipeline/")
parameter: container_base_bioinfo = 'containers/bioinfo.sif'
parameter: container_apex = 'containers/apex.sif'
parameter: container_PEER = 'containers/PEER.sif'
parameter: container_TensorQTL = 'containers//TensorQTL.sif'
parameter: container_rnaquant = 'containers/rna_quantification.sif'
parameter: container_flashpca = 'containers/flashpcaR.sif'
parameter: container_susie = 'containers/stephenslab.sif'
parameter: sample_participant_lookup = path
parameter: phenotype_id_type = "gene_name"
parameter: yml = path("csg.yml")
parameter: run = False
interpreter = 'cat' if not run else 'bash'
import pandas as pd
if recipe.is_file():
input_inv = pd.read_csv(recipe, sep = "\t").to_dict("records")
import os
parameter: jobs = 50 # Number of jobs that are submitted to the cluster
parameter: queue = "csg" # The queue that jobs are submitted to
submission = f'-J {jobs} -c {yml} -q {queue}'
## Control of the workflow
### Factor option (PEER vs BiCV)
parameter: factor_option = "PEER"
### Association scan option (APEX vs TensorQTL)
parameter: Association_option = "TensorQTL"_____no_output_____
</code>
## Data Preprocessing
### Genotype Preprocessing (Once for all tissues)_____no_output_____
<code>
[dbSNP]
parameter: dbSNP_vcf = path
input: dbSNP_vcf
parameter: add_chr = True
output: f'{cwd}/reference_data/{_input:bnn}.add_chr.variants.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/pipeline//VCF_QC.ipynb dbsnp_annotate \
--genoFile $[_input] \
--cwd $[_output:d] \
--container $[container_base_bioinfo] \
$[submission if yml.is_file() else "" ] $["--add_chr" if add_chr else "--no-add_chr" ]_____no_output_____[VCF_QC]
parameter: genoFile = path
parameter: ref_fasta = path
parameter: add_chr = True
input: genoFile, output_from("dbSNP")
output: f'{cwd}/data_preprocessing/{_input[0]:bnn}.{"add_chr." if add_chr else False}leftnorm.filtered.bed'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]//pipeline/VCF_QC.ipynb qc \
--genoFile $[_input[0]] \
--dbsnp-variants $[_input[1]] \
--reference-genome $[ref_fasta] \
--cwd $[_output:d] \
--container $[container_base_bioinfo] \
--walltime "24h" \
$[submission if yml.is_file() else "" ] $["--add_chr" if add_chr else "--no-add_chr" ]_____no_output_____[plink_QC]
# minimum MAF filter to use. 0 means do not apply this filter.
parameter: maf_filter = 0.05
# maximum MAF filter to use. 0 means do not apply this filter.
parameter: maf_max_filter = 0.0
# Maximum missingess per-variant
parameter: geno_filter = 0.1
# Maximum missingness per-sample
parameter: mind_filter = 0.1
# HWE filter
parameter: hwe_filter = 1e-06
input: output_from("VCF_QC")
output: f'{_input:n}.filtered.bed'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]//pipeline/GWAS_QC.ipynb qc_no_prune \
--cwd $[_output:d] \
--genoFile $[_input] \
--maf-filter $[maf_filter] \
--geno-filter $[geno_filter] \
--mind-filter $[mind_filter] \
--hwe-filter $[hwe_filter] \
--mem 40G \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ]_____no_output_____[plink_per_chrom]
input: output_from("plink_QC")
output: f'{cwd:a}/data_preprocessing/{_input:bn}.plink_files_list.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]//pipeline/genotype_formatting.ipynb plink_by_chrom \
--genoFile $[_input] \
--cwd $[_output:d] \
--chrom `cut -f 1 $[_input:n].bim | uniq | sed "s/chr//g"` \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ]_____no_output_____[plink_to_vcf]
parameter: genotype_list = path
input: genotype_list
import pandas as pd
parameter: genotype_file_name = pd.read_csv(_input,"\t",nrows = 1).values.tolist()[0][1]
output: f'{cwd:a}/data_preprocessing/{path(genotype_file_name):bnn}.vcf_files_list.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]//pipeline/genotype_formatting.ipynb plink_to_vcf \
--genoFile $[_input] \
--cwd $[_output:d] \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ]_____no_output_____[plink_per_gene]
# The plink genotype file
parameter: genoFile = path
input: output_from("region_list_concat"),genoFile
output: f'{cwd:a}/{_input[1]:bn}.plink_files_list.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/pipeline//genotype_formatting.ipynb plink_by_gene \
--genoFile $[_input[1]] \
--cwd $[_output:d] \
--region_list $[_input[0]] \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ]_____no_output_____
</code>
### Molecular Phenotype Processing_____no_output_____
<code>
[annotation]
stop_if(not recipe.is_file(), msg = "Please specify a valid recipe as input")
import os
parameter: annotation_gtf = path
input: for_each = "input_inv"
output: f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/phenotype_data/{path(_input_inv["molecular_pheno"]):bn}.bed.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/pipeline/gene_annotation.ipynb annotate_coord \
--cwd $[_output:d] \
--phenoFile $[_input_inv["molecular_pheno"]] \
--annotation-gtf $[annotation_gtf] \
--sample-participant-lookup $[sample_participant_lookup] \
--container $[container_rnaquant] \
--phenotype-id-type $[phenotype_id_type] $[submission if yml.is_file() else "" ]_____no_output_____[region_list_generation]
parameter: annotation_gtf = path
input: output_from("annotation"), group_with = "input_inv"
output: pheno_mod = f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/phenotype_data/{_input:bnn}.region_list'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/gene_annotation.ipynb region_list_generation \
--cwd $[_output:d] \
--phenoFile $[_input]\
--annotation-gtf $[annotation_gtf] \
--sample-participant-lookup $[sample_participant_lookup] \
--container $[container_rnaquant] \
--phenotype-id-type $[phenotype_id_type] $[submission if yml.is_file() else "" ]_____no_output_____[region_list_concat]
input: output_from("region_list_generation"), group_by = "all"
output: f'{cwd:a}/data_preprocessing/phenotype_data/concat.region_list'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
cat $[_input:a] | sort | uniq > $[_output:a] _____no_output_____[phenotype_partition_by_chrom]
input: output_from("annotation"),output_from("region_list_generation"), group_with = "input_inv"
output: per_chrom_pheno_list = f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/phenotype_data/{_input[0]:bn}.processed_phenotype.per_chrom.recipe'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/phenotype_formatting.ipynb partition_by_chrom \
--cwd $[_output:d] \
--phenoFile $[_input[0]:a] \
--region-list $[_input[1]:a] \
--container $[container_rnaquant] \
--mem 4G $[submission if yml.is_file() else "" ]_____no_output_____
</code>
### Genotype Processing
Since the genotype is shared among the eight tissues, QC of the whole-genome file is not needed; only PCA needs to be run again._____no_output_____
<code>
[sample_match]
input: for_each = "input_inv"
output: f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/{sample_participant_lookup:bn}.filtered.txt',
geno = f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/{sample_participant_lookup:bn}.filtered_geno.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/sample_matcher.ipynb filtered_sample_list \
--cwd $[_output[0]:d] \
--phenoFile $[_input_inv["molecular_pheno"]] \
--genoFile $[path(_input_inv["genotype_file"]):n].fam \
--sample-participant-lookup $[sample_participant_lookup] \
--container $[container_rnaquant] \
--translated_phenoFile $[submission if yml.is_file() else "" ]_____no_output_____[king]
parameter: maximize_unrelated = False
input:output_from("sample_match")["geno"], group_with = "input_inv"
output: related = f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/genotype_data/{path(_input_inv["genotype_file"]):bn}.{_input_inv["Theme"]}.related.bed',
unrelated = f'{cwd:a}/data_preprocessing/{_input_inv["Theme"]}/genotype_data/{path(_input_inv["genotype_file"]):bn}.{_input_inv["Theme"]}.unrelated.bed'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/GWAS_QC.ipynb king \
--cwd $[_output[0]:d] \
--genoFile $[_input_inv["genotype_file"]] \
--name $[_input_inv["Theme"]] \
--keep-samples $[_input] \
--container $[container_base_bioinfo] \
--walltime 48h $[submission if yml.is_file() else "" ] $["--maximize_unrelated" if maximize_unrelated else "--no-maximize_unrelated"]_____no_output_____[unrelated_QC]
input: output_from("king")["unrelated"]
output: unrelated_bed = f'{_input:n}.filtered.prune.bed',
prune = f'{_input:n}.filtered.prune.in'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/GWAS_QC.ipynb qc \
--cwd $[_output[0]:d] \
--genoFile $[_input] \
--exclude-variants /mnt/vast/hpc/csg/snuc_pseudo_bulk/Ast/genotype/dupe_snp_to_exclude \
--maf-filter 0.05 \
--container $[container_base_bioinfo] \
--mem 40G $[submission if yml.is_file() else "" ] _____no_output_____[related_QC]
input: output_from("king")["related"],output_from("unrelated_QC")["prune"]
output: f'{_input[0]:n}.filtered.extracted.bed'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/GWAS_QC.ipynb qc_no_prune \
--cwd $[_output[0]:d] \
--genoFile $[_input[0]] \
--maf-filter 0 \
--geno-filter 0 \
--mind-filter 0.1 \
--hwe-filter 0 \
--keep-variants $[_input[1]] \
--container $[container_base_bioinfo] \
--mem 40G $[submission if yml.is_file() else "" ]_____no_output_____
</code>
## Factor Analysis_____no_output_____
<code>
[pca]
input: output_from("unrelated_QC")["unrelated_bed"],group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/pca/{_input:bn}.pca.rds',
f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/pca/{_input:bn}.pca.scree.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/PCA.ipynb flashpca \
--cwd $[_output:d] \
--genoFile $[_input] \
--container $[container_flashpca] $[submission if yml.is_file() else "" ]_____no_output_____[projected_sample]
# The percentage of PVE explained
parameter: PVE_treshold = 0.7
input: output_from("related_QC"),output_from("pca"), group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/pca/{_input[0]:bn}.pca.projected.rds',
f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/pca/{_input[0]:bn}.pca.projected.scree.txt'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/PCA.ipynb project_samples \
--cwd $[_output:d] \
--genoFile $[_input[0]] \
--pca-model $[_input[1]] \
--maha-k `awk '$3 < $[PVE_treshold]' $[_input[2]] | tail -1 | cut -f 1 ` \
--container $[container_flashpca] $[submission if yml.is_file() else "" ]_____no_output_____[merge_pca_covariate]
# The percentage of PVE explained
parameter: PVE_treshold = 0.7
input: output_from("projected_sample"),group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/covariates/{path(_input_inv["covariate_file"]):bn}.pca.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/covariate_formatting.ipynb merge_pca_covariate \
--cwd $[_output:d] \
--pcaFile $[_input[0]:a] \
--covFile $[path(_input_inv["covariate_file"])] \
--tol_cov 0.3 \
--k `awk '$3 < $[PVE_treshold]' $[_input[1]] | tail -1 | cut -f 1 ` \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ] --name $[_output:bn] --outliersFile $[_input[0]:an].outliers_____no_output_____[resid_exp]
input: output_from("merge_pca_covariate"),output_from("annotation"),group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/resid_phenotype/{_input[1]:bnn}.{_input[0]:bn}.resid.bed.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/covariate_formatting.ipynb compute_residual \
--cwd $[_output:d] \
--phenoFile $[_input[1]:a] \
--covFile $[_input[0]:a] \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ]_____no_output_____[factor]
parameter: N = 0
input: output_from("resid_exp"),group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/covariates/{_input[0]:bnn}.{factor_option}.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/pipeline/$[factor_option]_factor.ipynb $[factor_option] \
--cwd $[_output:d] \
--phenoFile $[_input[0]:a] \
--container $[container_apex if factor_option == "BiCV" else container_PEER] \
--walltime 24h \
--numThreads 8 \
--iteration 1000 \
--N $[N] $[submission if yml.is_file() else "" ]_____no_output_____[merge_factor_covariate]
# The percentage of PVE explained
parameter: PVE_treshold = 0.7
input: output_from("factor"),output_from("merge_pca_covariate"),group_with = "input_inv"
output: f'{cwd}/data_preprocessing/{_input_inv["Theme"]}/covariates/{_input[0]:bn}.cov.gz'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
sos run $[exe_dir]/pipeline/covariate_formatting.ipynb merge_factor_covariate \
--cwd $[_output:d] \
--factorFile $[_input[0]:a] \
--covFile $[_input[1]:a] \
--container $[container_base_bioinfo] $[submission if yml.is_file() else "" ] --name $[_output:bn]_____no_output_____
</code>
## Association Scan_____no_output_____
<code>
[TensorQTL]
# The number of minor allele count as treshold for the analysis
parameter: MAC = 0
# The minor allele frequency as treshold for the analysis, overwrite MAC
parameter: maf_threshold = 0
parameter: genotype_list = path
input: genotype_list, output_from("phenotype_partition_by_chrom"),output_from("merge_factor_covariate"),group_with = "input_inv"
output: f'{cwd:a}/association_scan/{_input_inv["Theme"]}/TensorQTL/TensorQTL.cis._recipe.tsv'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/TensorQTL.ipynb cis \
--genotype-list $[_input[0]] \
--phenotype-list $[_input[1]] \
--covariate-file $[_input[2]] \
--cwd $[_output:d] \
--container $[container_TensorQTL] $[submission if yml.is_file() else "" ] $[f'--MAC {MAC}' if MAC else ""] $[f'--maf_threshold {maf_threshold}' if maf_threshold else ""] _____no_output_____[APEX]
parameter: genotype_list = path
input: output_from("plink_to_vcf"), output_from("phenotype_partition_by_chrom"),output_from("merge_factor_covariate"),group_with = "input_inv"
output: f'{cwd:a}/association_scan/{_input_inv["Theme"]}/APEX/APEX_QTL_recipe.tsv'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/APEX.ipynb cis \
--genotype-list $[_input[0]] \
--phenotype-list $[_input[1]] \
--covariate-file $[_input[2]] \
--cwd $[_output:d] \
--container $[container_apex] $[submission if yml.is_file() else "" ] --name $[_input[1]:bnn]_____no_output_____
</code>
## Trans Association Scan_____no_output_____
<code>
[TensorQTL_Trans]
parameter: MAC = 0
# The minor allele frequency as treshold for the analysis, overwrite MAC
parameter: maf_threshold = 0
parameter: genotype_list = path
parameter: region_list = path
input: genotype_list, output_from("phenotype_partition_by_chrom"),output_from("merge_factor_covariate"),group_with = "input_inv"
output: f'{cwd:a}/association_scan/{_input_inv["Theme"]}/Trans/TensorQTL.trans._recipe.tsv'
script: interpreter = interpreter, expand = "$[ ]", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout'
sos run $[exe_dir]/pipeline/TensorQTL.ipynb trans \
--genotype-list $[_input[0]] \
--phenotype-list $[_input[1]] \
--covariate-file $[_input[2]] \
--cwd $[_output:d] \
--region_list $[region_list] \
--container $[container_TensorQTL] $[submission if yml.is_file() else "" ] $[f'--MAC {MAC}' if MAC else ""] $[f'--maf_threshold {maf_threshold}' if maf_threshold else ""]_____no_output_____
</code>
## SuSiE
_____no_output_____
<code>
[UniSuSiE]
input: output_from("plink_per_gene"), output_from("annotation"),output_from("factor"), output_from("region_list_concat"), group_by = "all"
output: f'{cwd:a}/Fine_mapping/UniSuSiE/UniSuSiE_recipe.tsv'
script: interpreter = interpreter, expand = "$[ ]"
sos run $[exe_dir]/pipeline/SuSiE.ipynb uni_susie \
--genoFile $[_input[0]] \
--phenoFile $[" ".join([str(x) for x in _input[1:len(input_inv)+1]])] \
--covFile $[" ".join([str(x) for x in _input[len(input_inv)+1:len(input_inv)*2+1]])] \
--cwd $[_output:d] \
--tissues $[" ".join([x["Theme"] for x in input_inv])] \
--region-list $[_input[3]] \
--container $[container_susie] $[submission if yml.is_file() else "" ]_____no_output_____
</code>
## Sumstat Merger_____no_output_____
<code>
[yml_generation]
parameter: TARGET_list = path("./")
input: output_from(Association_option), group_by = "all"
output: f'{cwd:a}/data_intergration/{Association_option}/qced_sumstat_list.txt',f'{cwd:a}/data_intergration/{Association_option}/yml_list.txt'
script: interpreter = interpreter, expand = "$[ ]"
sos run $[exe_dir]/pipeline/yml_generator.ipynb yml_list \
--sumstat-list $[_input] \
--cwd $[_output[1]:d] --name $[" ".join([str(x).split("/")[-3] for x in _input])] --TARGET_list $[TARGET_list]_____no_output_____[sumstat_merge]
input: output_from("yml_generation")
script: interpreter = interpreter, expand = "$[ ]"
sos run $[exe_dir]/pipeline/summary_stats_merger.ipynb \
--sumstat-list $[_input[0]] \
--yml-list $[_input[1]] \
--cwd $[_input[0]:d] $[submission if yml.is_file() else "" ] --mem 50G --walltime 48h
_____no_output_____
</code>
| {
"repository": "cumc/xqtl-pipeline",
"path": "code/commands_generator/eQTL_analysis_commands.ipynb",
"matched_keywords": [
"RNA-seq"
],
"stars": 2,
"size": 385801,
"hexsha": "cb174a32ff2168fa1b8f7fcad5b4b49b946bd05d",
"max_line_length": 347365,
"avg_line_length": 380.8499506417,
"alphanum_fraction": 0.9230432269
} |
# Notebook from JulioLarrea/course-content
Path: tutorials/W1D1_ModelTypes/student/W1D1_Tutorial1.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/student/W1D1_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 1: "What" models
**Week 1, Day 1: Model Types**
**By Neuromatch Academy**
__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording
__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom
We would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here.
_____no_output_____**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>_____no_output________
# Tutorial Objectives
This is tutorial 1 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore 'What' models, used to describe the data. To understand what our data looks like, we will visualize it in different ways. Then we will compare it to simple mathematical models. Specifically, we will:
- Load a dataset with spiking activity from hundreds of neurons and understand how it is organized
- Make plots to visualize characteristics of the spiking activity across the population
- Compute the distribution of "inter-spike intervals" (ISIs) for a single neuron
- Consider several formal models of this distribution's shape and fit them to the data "by hand"_____no_output_____
<code>
# @title Video 1: "What" Models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KgqR_jbjMQg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)_____no_output_____
</code>
# Setup
_____no_output_____Python requires you to explictly "import" libraries before their functions are available to use. We will always specify our imports at the beginning of each notebook or script._____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt_____no_output_____
</code>
Tutorial notebooks typically begin with several set-up steps that are hidden from view by default.
**Important:** Even though the code is hidden, you still need to run it so that the rest of the notebook can work properly. Step through each cell, either by pressing the play button in the upper-left-hand corner or with a keyboard shortcut (`Cmd-Return` on a Mac, `Ctrl-Enter` otherwise). A number will appear inside the brackets (e.g. `[3]`) to tell you that the cell was executed and what order that happened in.
If you are curious to see what is going on inside each cell, you can double click to expand. Once expanded, double-click the white space to the right of the editor to collapse again._____no_output_____
<code>
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____#@title Helper functions
#@markdown Most of the tutorials make use of helper functions
#@markdown to simplify the code that you need to write. They are defined here.
# Please don't edit these, or worry about understanding them now!
def restrict_spike_times(spike_times, interval):
"""Given a spike_time dataset, restrict to spikes within given interval.
Args:
spike_times (sequence of np.ndarray): List or array of arrays,
each inner array has spike times for a single neuron.
interval (tuple): Min, max time values; keep min <= t < max.
Returns:
np.ndarray: like `spike_times`, but only within `interval`
"""
interval_spike_times = []
for spikes in spike_times:
interval_mask = (spikes >= interval[0]) & (spikes < interval[1])
interval_spike_times.append(spikes[interval_mask])
return np.array(interval_spike_times, object)_____no_output_____#@title Data retrieval
#@markdown This cell downloads the example dataset that we will use in this tutorial.
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Failed to download data')
else:
spike_times = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']_____no_output_____
</code>
---
# Section 1: Exploring the Steinmetz dataset
In this tutorial we will explore the structure of a neuroscience dataset.
We consider a subset of data from a study of [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x). In this study, Neuropixels probes were implanted in the brains of mice. Electrical potentials were measured by hundreds of electrodes along the length of each probe. Each electrode's measurements captured local variations in the electric field due to nearby spiking neurons. A spike sorting algorithm was used to infer spike times and cluster spikes according to common origin: a single cluster of sorted spikes is causally attributed to a single neuron.
In particular, a single recording session of spike times and neuron assignments was loaded and assigned to `spike_times` in the preceding setup.
Typically a dataset comes with some information about its structure. However, this information may be incomplete. You might also apply some transformations or "pre-processing" to create a working representation of the data of interest, which might go partly undocumented depending on the circumstances. In any case it is important to be able to use the available tools to investigate unfamiliar aspects of a data structure.
Let's see what our data looks like..._____no_output_____## Section 1.1: Warming up with `spike_times`_____no_output_____What is the Python type of our variable?_____no_output_____
<code>
type(spike_times)_____no_output_____
</code>
You should see `numpy.ndarray`, which means that it's a normal NumPy array.
If you see an error message, it probably means that you did not execute the set-up cells at the top of the notebook. So go ahead and make sure to do that.
Once everything is running properly, we can ask the next question about the dataset: what's its shape?_____no_output_____
<code>
spike_times.shape_____no_output_____
</code>
There are 734 entries in one dimension, and no other dimensions. What is the Python type of the first entry, and what is *its* shape?_____no_output_____
<code>
idx = 0
print(
type(spike_times[idx]),
spike_times[idx].shape,
sep="\n",
)_____no_output_____
</code>
It's also a NumPy array with a 1D shape! Why didn't this show up as a second dimension in the shape of `spike_times`? That is, why not `spike_times.shape == (734, 826)`?
To investigate, let's check another entry._____no_output_____
<code>
idx = 321
print(
type(spike_times[idx]),
spike_times[idx].shape,
sep="\n",
)_____no_output_____
</code>
It's also a 1D NumPy array, but it has a different shape. Checking the NumPy types of the values in these arrays, and their first few elements, we see they are composed of floating point numbers (not another level of `np.ndarray`):_____no_output_____
<code>
i_neurons = [0, 321]
i_print = slice(0, 5)
for i in i_neurons:
print(
"Neuron {}:".format(i),
spike_times[i].dtype,
spike_times[i][i_print],
"\n",
sep="\n"
)_____no_output_____
</code>
Note that this time we've checked the NumPy `dtype` rather than the Python variable type. These two arrays contain floating point numbers ("floats") with 32 bits of precision.
The basic picture is coming together:
- `spike_times` is 1D, its entries are NumPy arrays, and its length is the number of neurons (734): by indexing it, we select a subset of neurons.
- An array in `spike_times` is also 1D and corresponds to a single neuron; its entries are floating point numbers, and its length is the number of spikes attributed to that neuron. By indexing it, we select a subset of spike times for that neuron.
Visually, you can think of the data structure as looking something like this:
```
| . . . . . |
| . . . . . . . . |
| . . . |
| . . . . . . . |
```
Before moving on, we'll calculate and store the number of neurons in the dataset and the number of spikes per neuron:_____no_output_____
<code>
n_neurons = len(spike_times)
total_spikes_per_neuron = [len(spike_times_i) for spike_times_i in spike_times]
print(f"Number of neurons: {n_neurons}")
print(f"Number of spikes for first five neurons: {total_spikes_per_neuron[:5]}")_____no_output_____# @title Video 2: Exploring the dataset
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="oHwYWUI_o1U", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)_____no_output_____
</code>
## Section 1.2: Getting warmer: counting and plotting total spike counts
As we've seen, the number of spikes over the entire recording is variable between neurons. More generally, some neurons tend to spike more than others in a given period. Let's explore what the distribution of spiking looks like across all the neurons in the dataset._____no_output_____
You can plot a histogram with the matplotlib function `plt.hist`. If you just need to compute it, you can use the numpy function `np.histogram` instead._____no_output_____
<code>
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons");_____no_output_____
</code>
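As an aside, if you only need the counts without drawing a figure, `np.histogram` returns them directly (a quick sketch):_____no_output_____
<code>
# np.histogram returns the per-bin counts and the bin edges (one more edge than bins)
counts, bin_edges = np.histogram(total_spikes_per_neuron, bins=50)
print(counts[:5])
print(bin_edges[:5])_____no_output_____
</code>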
Let's see what percentage of neurons have a below-average spike count:_____no_output_____
<code>
mean_spike_count = np.mean(total_spikes_per_neuron)
# Convert the list to an array so the element-wise comparison with the scalar mean works
frac_below_mean = (np.array(total_spikes_per_neuron) < mean_spike_count).mean()
print(f"{frac_below_mean:2.1%} of neurons are below the mean")_____no_output_____
</code>
We can also see this by adding the average spike count to the histogram plot:_____no_output_____
<code>
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons")
plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
plt.legend();_____no_output_____
</code>
This shows that the majority of neurons are relatively "quiet" compared to the mean, while a small number of neurons are exceptionally "loud": they must have spiked more often to reach a large count.
### Exercise 1: Comparing mean and median neurons
If the mean neuron is more active than 68% of the population, what does that imply about the relationship between the mean neuron and the median neuron?
*Exercise objective:* Reproduce the plot above, but add the median neuron.
_____no_output_____
<code>
# To complete the exercise, fill in the missing parts (...) and uncomment the code
median_spike_count = ... # Hint: Try the function np.median
# plt.hist(..., bins=50, histtype="stepfilled")
# plt.axvline(..., color="limegreen", label="Median neuron")
# plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
# plt.xlabel("Total spikes per neuron")
# plt.ylabel("Number of neurons")
# plt.legend()_____no_output_____
</code>
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial1_Solution_b3411d5d.py)
*Example output:*
<img alt='Solution hint' align='left' width=558 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial1_Solution_b3411d5d_0.png>
_____no_output_____
*Bonus:* The median is the 50th percentile. What about other percentiles? Can you show the interquartile range on the histogram?
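One possible approach to the bonus, sketched here with `np.percentile` (not the official solution):_____no_output_____
<code>
# The interquartile range spans the 25th to 75th percentiles
q25, q75 = np.percentile(total_spikes_per_neuron, [25, 75])
plt.hist(total_spikes_per_neuron, bins=50, histtype="stepfilled")
plt.axvspan(q25, q75, color="limegreen", alpha=0.2, label="Interquartile range")
plt.axvline(mean_spike_count, color="orange", label="Mean neuron")
plt.xlabel("Total spikes per neuron")
plt.ylabel("Number of neurons")
plt.legend()_____no_output_____
</code>
---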
# Section 2: Visualizing neuronal spiking activity_____no_output_____## Section 2.1: Getting a subset of the data
Now we'll visualize trains of spikes. Because the recordings are long, we will first define a short time interval and restrict the visualization to only the spikes in this interval. We defined a utility function, `restrict_spike_times`, to do this for you. If you call `help()` on the function, it will tell you a little bit about itself:_____no_output_____
<code>
help(restrict_spike_times)_____no_output_____t_interval = (5, 15) # units are seconds after start of recording
interval_spike_times = restrict_spike_times(spike_times, t_interval)_____no_output_____
</code>
Is this a representative interval? What fraction of the total spikes fall in this interval?_____no_output_____
<code>
original_counts = sum([len(spikes) for spikes in spike_times])
interval_counts = sum([len(spikes) for spikes in interval_spike_times])
frac_interval_spikes = interval_counts / original_counts
print(f"{frac_interval_spikes:.2%} of the total spikes are in the interval")_____no_output_____
</code>
How does this compare to the ratio between the interval duration and the experiment duration? (What fraction of the total time is in this interval?)
We can approximate the experiment duration by taking the minimum and maximum spike time in the whole dataset. To do that, we "concatenate" all of the neurons into one array and then use `np.ptp` ("peak-to-peak") to get the difference between the maximum and minimum value:_____no_output_____
<code>
spike_times_flat = np.concatenate(spike_times)
experiment_duration = np.ptp(spike_times_flat)
interval_duration = t_interval[1] - t_interval[0]
frac_interval_time = interval_duration / experiment_duration
print(f"{frac_interval_time:.2%} of the total time is in the interval")_____no_output_____
</code>
These two values—the fraction of total spikes and the fraction of total time—are similar. This suggests the average spike rate of the neuronal population is not very different in this interval compared to the entire recording.
## Section 2.2: Plotting spike trains and rasters
Now that we have a representative subset, we're ready to plot the spikes, using the matplotlib `plt.eventplot` function. Let's look at a single neuron first:_____no_output_____
<code>
neuron_idx = 1
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);_____no_output_____
</code>
We can also plot multiple neurons. Here are three:_____no_output_____
<code>
neuron_idx = [1, 11, 51]
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);_____no_output_____
</code>
This makes a "raster" plot, where the spikes from each neuron appear in a different row.
Plotting a large number of neurons can give you a sense for the characteristics in the population. Let's show every 5th neuron that was recorded:_____no_output_____
<code>
neuron_idx = np.arange(0, len(spike_times), 5)
plt.eventplot(interval_spike_times[neuron_idx], color=".2")
plt.xlabel("Time (s)")
plt.yticks([]);_____no_output_____
</code>
*Question*: How does the information in this plot relate to the histogram of total spike counts that you saw above?_____no_output_____
<code>
# @title Video 3: Visualizing activity
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="QGA5FCW7kkA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)_____no_output_____
</code>
---
# Section 3: Inter-spike intervals and their distributions_____no_output_____Given the ordered arrays of spike times for each neuron in `spike_times`, which we've just visualized, what can we ask next?
Scientific questions are informed by existing models. So, what knowledge do we already have that can inform questions about this data?
We know that there are physical constraints on neuron spiking. Spiking costs energy, which the neuron's cellular machinery can only obtain at a finite rate. Therefore neurons should have a refractory period: they can only fire as quickly as their metabolic processes can support, and there is a minimum delay between consecutive spikes of the same neuron.
More generally, we can ask "how long does a neuron wait to spike again?" or "what is the longest a neuron will wait?" Can we transform spike times into something else, to address questions like these more directly?
We can consider the inter-spike times (or interspike intervals: ISIs). These are simply the time differences between consecutive spikes of the same neuron.
### Exercise 2: Plot the distribution of ISIs for a single neuron
*Exercise objective:* make a histogram, like we did for spike counts, to show the distribution of ISIs for one of the neurons in the dataset.
Do this in three steps:
1. Extract the spike times for one of the neurons
2. Compute the ISIs (the amount of time between spikes, or equivalently, the difference between adjacent spike times)
3. Plot a histogram with the array of individual ISIs_____no_output_____
<code>
def compute_single_neuron_isis(spike_times, neuron_idx):
"""Compute a vector of ISIs for a single neuron given spike times.
Args:
spike_times (list of 1D arrays): Spike time dataset, with the first
dimension corresponding to different neurons.
neuron_idx (int): Index of the unit to compute ISIs for.
Returns:
isis (1D array): Duration of time between each spike from one neuron.
"""
#############################################################################
# Students: Fill in missing code (...) and comment or remove the next line
raise NotImplementedError("Exercise: compute single neuron ISIs")
#############################################################################
# Extract the spike times for the specified neuron
single_neuron_spikes = ...
# Compute the ISIs for this set of spikes
# Hint: the function np.diff computes discrete differences along an array
isis = ...
return isis
# Uncomment the following lines when you are ready to test your function
# single_neuron_isis = compute_single_neuron_isis(spike_times, neuron_idx=283)
# plt.hist(single_neuron_isis, bins=50, histtype="stepfilled")
# plt.axvline(single_neuron_isis.mean(), color="orange", label="Mean ISI")
# plt.xlabel("ISI duration (s)")
# plt.ylabel("Number of spikes")
# plt.legend()_____no_output_____
</code>
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial1_Solution_4792dbfa.py)
*Example output:*
<img alt='Solution hint' align='left' width=558 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/static/W1D1_Tutorial1_Solution_4792dbfa_0.png>
_____no_output_____---
In general, shorter ISIs predominate, with counts decreasing rapidly (and more or less smoothly) with increasing ISI. However, counts also decrease rapidly to zero with _decreasing_ ISI, below the maximum of the distribution (8-11 ms). The absence of these very short ISIs agrees with the refractory-period hypothesis: the neuron cannot fire quickly enough to populate this region of the ISI distribution.
Check the distributions of some other neurons. To resolve various features of the distributions, you might need to play with the number of bins (the `bins` argument used above). Using too few bins might smooth over interesting details, but if you use too many bins, the random variability will start to dominate.
You might also want to restrict the range to see the shape of the distribution when focusing on relatively short or long ISIs. *Hint:* `plt.hist` takes a `range` argument._____no_output_____---
# Section 4: What is the functional form of an ISI distribution?_____no_output_____
<code>
# @title Video 4: ISI distribution
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="DHhM80MOTe8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)_____no_output_____
</code>
The ISI histograms seem to follow continuous, monotonically decreasing functions above their maxima. These functions are clearly non-linear. Could they belong to a single family of functions?
To motivate the idea of using a mathematical function to explain physiological phenomena, let's define a few different functional forms that we might expect the relationship to follow: exponential, inverse, and linear._____no_output_____
<code>
def exponential(xs, scale, rate, x0):
"""A simple parametrized exponential function, applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
scale (float): Linear scaling factor.
rate (float): Exponential growth (positive) or decay (negative) rate.
x0 (float): Horizontal offset.
"""
ys = scale * np.exp(rate * (xs - x0))
return ys
def inverse(xs, scale, x0):
"""A simple parametrized inverse function (`1/x`), applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
scale (float): Linear scaling factor.
x0 (float): Horizontal offset.
"""
ys = scale / (xs - x0)
return ys
def linear(xs, slope, y0):
"""A simple linear function, applied element-wise.
Args:
xs (np.ndarray or float): Input(s) to the function.
slope (float): Slope of the line.
y0 (float): y-intercept of the line.
"""
ys = slope * xs + y0
return ys_____no_output_____
</code>
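Before playing with the sliders, a quick sanity check that these helpers behave as expected (the argument values here are arbitrary):
<code>
# Evaluate each parametrized function at a few points (arbitrary parameters)
xs_test = np.linspace(0.01, 0.1, 3)
print(exponential(xs_test, scale=1000, rate=-10, x0=0.1))
print(inverse(xs_test, scale=10, x0=0))
print(linear(xs_test, slope=-1e5, y0=10000))_____no_output_____
</code>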
### Interactive Demo: ISI functions explorer
Here is an interactive demo where you can vary the parameters of these functions and see how well the resulting outputs correspond to the data. Adjust the parameters by moving the sliders and see how close you can get the lines to follow the falling curve of the histogram. This will give you a taste of what you're trying to do when you *fit a model* to data.
"Interactive demo" cells have hidden code that defines an interface where you can play with the parameters of some function using sliders. You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.
_____no_output_____
<code>
#@title
#@markdown Be sure to run this cell to enable the demo
# Don't worry about understanding this code! It's to setup an interactive plot.
single_neuron_idx = 283
single_neuron_spikes = spike_times[single_neuron_idx]
single_neuron_isis = np.diff(single_neuron_spikes)
counts, edges = np.histogram(
single_neuron_isis,
bins=50,
range=(0, single_neuron_isis.max())
)
functions = dict(
exponential=exponential,
inverse=inverse,
linear=linear,
)
colors = dict(
exponential="C1",
inverse="C2",
linear="C4",
)
@widgets.interact(
exp_scale=widgets.FloatSlider(1000, min=0, max=20000, step=250),
exp_rate=widgets.FloatSlider(-10, min=-200, max=50, step=1),
exp_x0=widgets.FloatSlider(0.1, min=-0.5, max=0.5, step=0.005),
inv_scale=widgets.FloatSlider(1000, min=0, max=3e2, step=10),
inv_x0=widgets.FloatSlider(0, min=-0.2, max=0.2, step=0.01),
lin_slope=widgets.FloatSlider(-1e5, min=-6e5, max=1e5, step=10000),
lin_y0=widgets.FloatSlider(10000, min=0, max=4e4, step=1000),
)
def fit_plot(
exp_scale=1000, exp_rate=-10, exp_x0=0.1,
inv_scale=1000, inv_x0=0,
lin_slope=-1e5, lin_y0=2000,
):
"""Helper function for plotting function fits with interactive sliders."""
func_params = dict(
exponential=(exp_scale, exp_rate, exp_x0),
inverse=(inv_scale, inv_x0),
linear=(lin_slope, lin_y0),
)
f, ax = plt.subplots()
ax.fill_between(edges[:-1], counts, step="post", alpha=.5)
xs = np.linspace(1e-10, edges.max())
for name, function in functions.items():
ys = function(xs, *func_params[name])
ax.plot(xs, ys, lw=3, color=colors[name], label=name);
ax.set(
xlim=(edges.min(), edges.max()),
ylim=(0, counts.max() * 1.1),
xlabel="ISI (s)",
ylabel="Number of spikes",
)
ax.legend()_____no_output_____# @title Video 5: Fitting models by hand
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="uW2HDk_4-wk", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)_____no_output_____
</code>
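As a complement to hand-tuning, here is a minimal sketch of an automated fit using `scipy.optimize.curve_fit` (this is an addition for reference, assuming the `exponential` function and the `counts`/`edges` arrays computed in the demo cell above; the initial guess mirrors the default slider values):
<code>
from scipy.optimize import curve_fit

# Use the bin centers as x-values matching the histogram counts
centers = 0.5 * (edges[:-1] + edges[1:])
popt, pcov = curve_fit(exponential, centers, counts, p0=(1000, -10, 0.1))
print("fitted (scale, rate, x0):", popt)_____no_output_____
</code>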
# Summary
In this tutorial, we loaded some neural data and poked at it to understand how the dataset is organized. Then we made some basic plots to visualize (1) the average level of activity across the population and (2) the distribution of ISIs for an individual neuron. In the very last bit, we started to think about using mathematical formalisms to understand or explain some physiological phenomenon. All of this only allowed us to understand "What" the data looks like.
This is the first step towards developing models that can tell us something about the brain. That's what we'll focus on in the next two tutorials._____no_output_____
| {
"repository": "JulioLarrea/course-content",
"path": "tutorials/W1D1_ModelTypes/student/W1D1_Tutorial1.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 1,
"size": 48152,
"hexsha": "cb17dc686300c84b6eae23215552b82cbacc492c",
"max_line_length": 588,
"avg_line_length": 35.5890613452,
"alphanum_fraction": 0.6028410035
} |
# Notebook from soumitrahazra/platypos
Path: examples/Evolve_one_planet.ipynb
<code>
import sys
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from astropy import constants as const
# remove this line if you installed platypos with pip
sys.path.append('/work2/lketzer/work/gitlab/platypos_group/platypos/')
import platypos
from platypos import Planet_LoFo14
from platypos import Planet_Ot20
# import the classes with fixed step size for completeness
from platypos.planet_LoFo14_PAPER import Planet_LoFo14_PAPER
from platypos.planet_Ot20_PAPER import Planet_Ot20_PAPER
import platypos.planet_models_LoFo14 as plmoLoFo14_____no_output_____
</code>
# Create Planet object and stellar evolutionary track_____no_output_____## Example planet 1.1 - V1298Tau c with 5 Earth-mass core and measured radius (var. step)_____no_output_____
<code>
# (David et al. 2019, Chandra observation)
L_bol, mass_star, radius_star = 0.934, 1.101, 1.345 # solar units
age_star = 23. # Myr
Lx_age = Lx_chandra = 1.3e30 # erg/s in energy band: (0.1-2.4 keV)
Lx_age_error = 1.4e29
# use dictionary to store star-parameters
star_V1298Tau = {'star_id': 'V1298Tau', 'mass': mass_star, 'radius': radius_star, 'age': age_star, 'L_bol': L_bol, 'Lx_age': Lx_age}
Lx_1Gyr, Lx_5Gyr = 2.10*10**28, 1.65*10**27
track_low = {"t_start": star_V1298Tau["age"], "t_sat": star_V1298Tau["age"], "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 20., "Lx_drop_factor": 16.}
track_med = {"t_start": star_V1298Tau["age"], "t_sat": star_V1298Tau["age"], "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
track_high = {"t_start": star_V1298Tau["age"], "t_sat": 240., "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
# planet c
planet = {"core_mass": 5.0, "radius": 5.59, "distance": 0.0825, "metallicity": "solarZ"}
pl = Planet_LoFo14(star_V1298Tau, planet)
pl.__dict_______no_output_____
</code>
### Example planet 1.1.1 - V1298Tau c with 5 Earth-mass core and measured radius (fixed step)_____no_output_____
<code>
pl = Planet_LoFo14_PAPER(star_V1298Tau, planet)_____no_output_____
</code>
## Example planet 1.2 - V1298Tau c with mass estimate from Otegi et al. (2020) and measured radius (var step)_____no_output_____
<code>
pl = Planet_Ot20(star_V1298Tau, planet)
pl.__dict_______no_output_____
</code>
### Example planet 1.2.1 - V1298Tau c with mass estimate from Otegi et al. (2020) and measured radius (fixed step)_____no_output_____
<code>
pl = Planet_Ot20_PAPER(star_V1298Tau, planet)
pl.__dict_______no_output_____
</code>
## Example planet 2 - artificial planet with specified core mass and envelope mass fraction_____no_output_____
<code>
Lx_1Gyr, Lx_5Gyr = 2.10*10**28, 1.65*10**27
dict_star = {'star_id': 'star_age1.0_mass0.89',
'mass': 0.8879632311581124,
'radius': None,
'age': 1.0,
'L_bol': 1.9992811847525246e+33/const.L_sun.cgs.value,
'Lx_age': 1.298868513129789e+30}
dict_pl = {'distance': 0.12248611607793611,
'metallicity': 'solarZ',
'fenv': 3.7544067802231664,
'core_mass': 4.490153906104026}
track = {"t_start": dict_star["age"], "t_sat": 100., "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": dict_star["Lx_age"],  # use this star's Lx_age (the bare Lx_age above belongs to V1298Tau)
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
pl = Planet_LoFo14(dict_star, dict_pl)
#pl.__dict_______no_output_____
</code>
# Evolve & create outputs_____no_output_____
<code>
%%time
folder_id = "dummy"
path_save = os.getcwd() + "/" + folder_id +"/"
if not os.path.exists(path_save):
os.makedirs(path_save)
else:
os.system("rm -r " + path_save[:-1])
os.makedirs(path_save)
t_final = 5007.
pl.evolve_forward_and_create_full_output(t_final, 0.1, 0.1, "yes", "yes", track_high, path_save, folder_id)CPU times: user 14.8 s, sys: 11.8 ms, total: 14.8 s
Wall time: 14.8 s
</code>
# Read in results and plot_____no_output_____
<code>
df_pl = pl.read_results(path_save)
df_pl.head()_____no_output_____df_pl.tail()_____no_output_____# fig, ax = plt.subplots(figsize=(10,5))
# ax.plot(df_pl["Time"], df_pl["Lx"])
# ax.loglog()
# plt.show()_____no_output_____fig, ax = plt.subplots(figsize=(10,5))
age_arr = np.logspace(np.log10(pl.age), np.log10(t_final), 100)
if (type(pl) == platypos.planet_LoFo14.Planet_LoFo14
or type(pl) == platypos.planet_LoFo14_PAPER.Planet_LoFo14_PAPER):
ax.plot(age_arr, plmoLoFo14.calculate_planet_radius(pl.core_mass, pl.fenv, age_arr, pl.flux, pl.metallicity), \
lw=2.5, label='thermal contraction only', color="blue")
ax.plot(df_pl["Time"], df_pl["Radius"],
marker="None", ls="--", label='with photoevaporation', color="red")
else:
ax.plot(df_pl["Time"], df_pl["Radius"], marker="None", ls="--", label='with photoevaporation', color="red")
ax.legend(fontsize=10)
ax.set_xlabel("Time [Myr]", fontsize=16)
ax.set_ylabel("Radius [R$_\oplus$]", fontsize=16)
ax.set_xscale('log')
#ax.set_ylim(5.15, 5.62)
plt.show()_____no_output_____
</code>
| {
"repository": "soumitrahazra/platypos",
"path": "examples/Evolve_one_planet.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 10,
"size": 37275,
"hexsha": "cb1912df067f2c70b0b44902df6b935df5123bad",
"max_line_length": 21144,
"avg_line_length": 65.2802101576,
"alphanum_fraction": 0.7480348759
} |
# Notebook from smasoka/python-introduction
Path: notebooks/exercises/7 - NumPy.ipynb
# Exercises_____no_output_____## Simple array manipulation
Investigate the behavior of the statements below by looking
at the values of the arrays a and b after assignments:
```
a = np.arange(5)
b = a
b[2] = -1
b = a[:]
b[1] = -1
b = a.copy()
b[0] = -1
```_____no_output_____Generate a 1D NumPy array containing numbers from -2 to 2
in increments of 0.2. Use the optional start and step arguments
of the **np.arange()** function._____no_output_____Generate another 1D NumPy array containing 11 equally
spaced values between 0.5 and 1.5. Extract every second
element of the array._____no_output_____Create a 4x4 array with arbitrary values._____no_output_____Extract every element from the second row._____no_output_____Extract every element from the third column._____no_output_____Assign a value of 0.21 to the upper-left 2x2 subarray._____no_output_____## Simple plotting
Plot the **sin** and **cos** functions on the same graph in the interval $[-\pi/2, \pi/2]$. Use $\theta$ as the x-label and also insert a legend._____no_output_____## Pie chart
The file "../data/csc_usage.txt" contains the usage of CSC servers by different disciplines. Plot a pie chart about the resource usage._____no_output_____## Bonus exercises
### Numerical derivative with finite differences
Derivatives can be calculated numerically with the finite-difference method
as:
$$ f'(x_i) = \frac{f(x_i + \Delta x)- f(x_i - \Delta x)}{2 \Delta x} $$
Construct a 1D NumPy array containing the values of $x_i$ in the interval $[0, \pi/2]$ with spacing
$\Delta x = 0.1$. Evaluate numerically the derivative of **sin** in this
interval (excluding the end points) using the above formula. Try to avoid
`for` loops. Compare the result to function **cos** in the same interval._____no_output_____### Game of Life
Game of Life is a cellular automaton devised by John Conway
in the 1970s: http://en.wikipedia.org/wiki/Conway's_Game_of_Life
The game consists of a two-dimensional orthogonal grid of
cells. Cells are in one of two possible states, alive or dead. Each cell
interacts with its eight neighbours, and at each time step the
following transitions occur:
* Any live cell with fewer than two live neighbours dies, as if
caused by underpopulation
* Any live cell with more than three live neighbours dies, as if
by overcrowding
* Any live cell with two or three live neighbours lives on to
the next generation
* Any dead cell with exactly three live neighbours becomes a
live cell
The initial pattern constitutes the seed of the system, and
the system is left to evolve according to these rules. Deaths and
births happen simultaneously.
Implement the Game of Life using Numpy, and visualize the
evolution with Matplotlib's **imshow**. Try first 32x32
square grid and cross-shaped initial pattern:

Try also other grids and initial patterns (e.g. random
pattern). Try to avoid **for** loops._____no_output_____
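If you get stuck, one possible vectorized building block for the update rule above (a sketch that assumes periodic boundaries, which the exercise leaves open) is to count live neighbours with `np.roll`:
<code>
import numpy as np

def count_neighbours(grid):
    """Live-neighbour count for every cell, assuming periodic boundaries."""
    counts = np.zeros_like(grid)
    for dx in (-1, 0, 1):          # constant-size 3x3 offset loop;
        for dy in (-1, 0, 1):      # the per-cell work stays vectorized
            if dx != 0 or dy != 0:
                counts += np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
    return counts_____no_output_____
</code>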
| {
"repository": "smasoka/python-introduction",
"path": "notebooks/exercises/7 - NumPy.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 10,
"size": 6214,
"hexsha": "cb196c163853e275e7166992a1f8c503280a48db",
"max_line_length": 145,
"avg_line_length": 23.0148148148,
"alphanum_fraction": 0.552944963
} |
# Notebook from fiji/Kappa
Path: Analysis/Notebooks/Spiral Dataset/2_Measure_Curvature.ipynb
**Important**: This notebook is different from the others as it directly calls the **ImageJ Kappa plugin** using the [`scyjava` ImageJ bridge](https://github.com/scijava/scyjava).
Since Kappa uses ImageJ1 features, you might not be able to run this notebook on a headless machine (this needs to be tested)._____no_output_____
<code>
from pathlib import Path
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
import sys; sys.path.append("../../")
import pykappa
# Init ImageJ with Fiji plugins
# It can take a while if Java artifacts are not yet cached.
import imagej
java_deps = []
java_deps.append('org.scijava:Kappa:1.7.1')
ij = imagej.init("+".join(java_deps), headless=False)
import jnius
# Load Java classes
KappaFrame = jnius.autoclass('sc.fiji.kappa.gui.KappaFrame')
CurvesExporter = jnius.autoclass('sc.fiji.kappa.gui.CurvesExporter')
# Load ImageJ services
dsio = ij.context.getService(jnius.autoclass('io.scif.services.DatasetIOService'))
dsio = jnius.cast('io.scif.services.DatasetIOService', dsio)
# Set data path
data_dir = Path("/home/hadim/.data/Postdoc/Kappa/spiral_curve_SDM/")
# Pixel size used when fixed
fixed_pixel_size = 0.16
# Used to select pixels around the initialization curves
base_radius_um = 1.6
enable_control_points_adjustment = True
# "Point Distance Minimization" or "Squared Distance Minimization"
if '_SDM' in data_dir.name:
fitting_algorithm = "Squared Distance Minimization"
else:
fitting_algorithm = "Point Distance Minimization"
fitting_algorithm_____no_output_____experiment_names = ['variable_snr', 'variable_initial_position', 'variable_pixel_size', 'variable_psf_size']
experiment_names = ['variable_psf_size']
for experiment_name in tqdm(experiment_names, total=len(experiment_names)):
experiment_path = data_dir / experiment_name
fnames = sorted(list(experiment_path.glob("*.tif")))
n = len(fnames)
for fname in tqdm(fnames, total=n, leave=False):
tqdm.write(str(fname))
kappa_path = fname.with_suffix(".kapp")
assert kappa_path.exists(), f'{kappa_path} does not exist.'
curvatures_path = fname.with_suffix(".csv")
if not curvatures_path.is_file():
frame = KappaFrame(ij.context)
frame.getKappaMenubar().openImageFile(str(fname))
frame.resetCurves()
frame.getKappaMenubar().loadCurveFile(str(kappa_path))
frame.getCurves().setAllSelected()
# Compute threshold according to the image
dataset = dsio.open(str(fname))
mean = ij.op().stats().mean(dataset).getRealDouble()
std = ij.op().stats().stdDev(dataset).getRealDouble()
threshold = int(mean + std * 2)
# Used fixed pixel size or the one in the filename
if fname.stem.startswith('pixel_size'):
pixel_size = float(fname.stem.split("_")[-2])
if experiment_name == 'variable_psf_size':
pixel_size = 0.01
else:
pixel_size = fixed_pixel_size
base_radius = int(np.round(base_radius_um / pixel_size))
# Set curve fitting parameters
frame.setEnableCtrlPtAdjustment(enable_control_points_adjustment)
frame.setFittingAlgorithm(fitting_algorithm)
frame.getInfoPanel().thresholdRadiusSpinner.setValue(ij.py.to_java(base_radius))
frame.getInfoPanel().thresholdSlider.setValue(threshold)
frame.getInfoPanel().updateConversionField(str(pixel_size))
# Fit the curves
frame.fitCurves()
# Save fitted curves
frame.getKappaMenubar().saveCurveFile(str(fname.with_suffix(".FITTED.kapp")))
# Export results
exporter = CurvesExporter(frame)
exporter.exportToFile(str(curvatures_path), False)
# Remove duplicate rows during CSV export.
df = pd.read_csv(curvatures_path)
df = df.drop_duplicates()
df.to_csv(curvatures_path)_____no_output_____0.13**2_____no_output_____
</code>
| {
"repository": "fiji/Kappa",
"path": "Analysis/Notebooks/Spiral Dataset/2_Measure_Curvature.ipynb",
"matched_keywords": [
"ImageJ"
],
"stars": null,
"size": 7818,
"hexsha": "cb1b7195fb98eac6b381963274799542d9d8230b",
"max_line_length": 182,
"avg_line_length": 34.1397379913,
"alphanum_fraction": 0.5621642364
} |
# Notebook from dschwen/chimad-phase-field
Path: hackathon1/problems.ipynb
<code>
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
$('div.prompt').hide();
} else {
$('div.input').show();
$('div.prompt').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Code Toggle"></form>''')_____no_output_____
</code>
# Table of Contents
* [Challenge Problems](#Challenge-Problems)
* [1. Spinodal Decomposition - Cahn-Hilliard](#1.-Spinodal-Decomposition---Cahn-Hilliard)
* [Parameter Values](#Parameter-Values)
* [Initial Conditions](#Initial-Conditions)
* [Domains](#Domains)
* [a. Square Periodic](#a.-Square-Periodic)
* [b. No Flux](#b.-No-Flux)
* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)
* [d. Sphere](#d.-Sphere)
* [Tasks](#Tasks)
* [2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations](#2.-Ostwald-Ripening----coupled-Cahn-Hilliard-and-Allen-Cahn-equations)
* [Parameter Values](#Parameter-Values)
* [Initial Conditions](#Initial-Conditions)
* [Domains](#Domains)
* [a. Square Periodic](#a.-Square-Periodic)
* [b. No Flux](#b.-No-Flux)
* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)
* [d. Sphere](#d.-Sphere)
* [Tasks](#Tasks)
_____no_output_____# Challenge Problems_____no_output_____For the first hackathon there are two challenge problems, a spinodal decomposition problem and an Ostwald ripening problem. The only solutions included here currently are with FiPy._____no_output_____## 1. Spinodal Decomposition - Cahn-Hilliard_____no_output_____The free energy density is given by,
$$ f = f_0 \left[ c \left( \vec{r} \right) \right] + \frac{\kappa}{2} \left| \nabla c \left( \vec{r} \right) \right|^2 $$
where $f_0$ is the bulk free energy density given by,
$$ f_0\left[ c \left( \vec{r} \right) \right] =
- \frac{A}{2} \left(c - c_m\right)^2
+ \frac{B}{4} \left(c - c_m\right)^4
+ \frac{c_{\alpha}}{4} \left(c - c_{\alpha} \right)^4
+ \frac{c_{\beta}}{4} \left(c - c_{\beta} \right)^4 $$
where $c_m = \frac{1}{2} \left( c_{\alpha} + c_{\beta} \right)$ and $c_{\alpha}$ and $c_{\beta}$ are the concentrations at which the bulk free energy density has minima (corresponding to the solubilities in the matrix phase and the second phase, respectively).
The time evolution of the concentration field, $c$, is given by the Cahn-Hilliard equation:
$$ \frac{\partial c}{\partial t} = \nabla \cdot \left[
D \left( c \right) \nabla \left( \frac{ \partial f_0 }{ \partial c} - \kappa \nabla^2 c \right)
\right] $$
where $D$ is the diffusivity._____no_output_____### Parameter Values_____no_output_____Use the following parameter values.
<table width="200">
<tr>
<td> $c_{\alpha}$ </td>
<td> 0.05 </td>
</tr>
<tr>
<td> $c_{\beta}$ </td>
<td> 0.95 </td>
</tr>
<tr>
<td> A </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa$ </td>
<td> 2.0 </td>
</tr>
</table>
with
$$ B = \frac{A}{\left( c_{\alpha} - c_m \right)^2} $$
$$ D = D_{\alpha} = D_{\beta} = \frac{2}{c_{\beta} - c_{\alpha}} $$_____no_output_____### Initial Conditions_____no_output_____Set $c\left(\vec{r}, t\right)$ such that
$$ c\left(\vec{r}, 0\right) = \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) $$
where
<table width="200">
<tr>
<td> $\bar{c}_0$ </td>
<td> 0.45 </td>
</tr>
<tr>
<td> $\vec{q}$ </td>
<td> $\left(\sqrt{2},\sqrt{3}\right)$ </td>
</tr>
<tr>
<td> $\epsilon$ </td>
<td> 0.01 </td>
</tr>
</table>_____no_output_____### Domains_____no_output_____#### a. Square Periodic_____no_output_____2D square domain with $L_x = L_y = 200$ and periodic boundary conditions._____no_output_____
<code>
from IPython.display import SVG
SVG(filename='../images/block1.svg')_____no_output_____
</code>
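The included reference solutions use FiPy; purely as a hedged sketch (our own variable names; grid spacing, time step, and step count are arbitrary choices, and $f_0''(c)$ is derived from the bulk free energy density above), the periodic domain and Cahn-Hilliard evolution might be set up as follows:
<code>
# Sketch only -- one possible FiPy setup for domain (a)
from fipy import CellVariable, PeriodicGrid2D, TransientTerm, DiffusionTerm, numerix

c_alpha, c_beta, A, kappa = 0.05, 0.95, 2.0, 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = 2. / (c_beta - c_alpha)

mesh = PeriodicGrid2D(nx=200, ny=200, dx=1.0, dy=1.0)   # dx, dy chosen arbitrarily
c = CellVariable(name="c", mesh=mesh, hasOld=True)
x, y = mesh.cellCenters
c.setValue(0.45 + 0.01 * numerix.cos(numerix.sqrt(2.) * x + numerix.sqrt(3.) * y))

# f0''(c) evaluated at face values (the standard FiPy Cahn-Hilliard pattern)
cf = c.arithmeticFaceValue
d2fdc2 = (-A + 3. * B * (cf - c_m)**2
          + 3. * c_alpha * (cf - c_alpha)**2 + 3. * c_beta * (cf - c_beta)**2)
eq = TransientTerm() == DiffusionTerm(coeff=D * d2fdc2) - DiffusionTerm(coeff=(D, kappa))

for step in range(100):            # number of steps and dt chosen arbitrarily
    c.updateOld()
    eq.solve(var=c, dt=0.01)_____no_output_____
</code>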
#### b. No Flux_____no_output_____2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions._____no_output_____#### c. T-Shape No Flux_____no_output_____T-shaped region with zero flux boundary conditions with $a=b=100$ and $c=d=20.$_____no_output_____
<code>
from IPython.display import SVG
SVG(filename='../images/t-shape.svg')_____no_output_____
</code>
#### d. Sphere_____no_output_____Domain is the surface of a sphere with radius 100, but with initial conditions of
$$ c\left(\theta, \phi, 0\right) = \bar{c}_0 + \epsilon \cos \left( \sqrt{233} \theta \right)
\sin \left( \sqrt{239} \phi \right) $$
where $\theta$ and $\phi$ are the polar and azimuthal angles in a spherical coordinate system. $\bar{c}_0$ and $\epsilon$ are given by the values in the table above._____no_output_____### Tasks_____no_output_____Your task for each domain,
1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie
2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.
3. Present wall clock time for the calculations, and wall clock time per core used in the calculation.
4. For domain a. above, demonstrate that the solution is robust with respect to meshing by refining the mesh (e.g. reduce the mesh size by about a factor of $\sqrt{2}$ in linear dimensions -- use whatever convenient way you have to refine the mesh without exploding the computational time)._____no_output_____## 2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations_____no_output_____Expanded problem in that the phase field, described by variables $\eta_i$, is now coupled to the concentration field $c$. The Ginzburg-Landau free energy density is now taken to be,
$$ f = f_0 \left[ C \left( \vec{r} \right), \eta_1, ... , \eta_p \right]
+ \frac{\kappa_C}{2} \left[ \nabla C \left( \vec{r} \right) \right]^2 +
\sum_{i=1}^p \frac{\kappa_i}{2} \left[ \nabla \eta_i \left( \vec{r} \right) \right]^2
$$
Here, $f_0$ is a bulk free energy density,
$$ f_0 \left[ C \left( \vec{r} \right), \eta_1, ... , \eta_p \right]
= f_1 \left( C \right) + \sum_{i=1}^p f_2 \left( C, \eta_i \right)
+ \sum_{i=1}^p \sum_{j\ne i}^p f_3 \left( \eta_j, \eta_i \right) $$
Here, $ f_1 \left( C \right) $ is the free energy density due to the concentration field, $C$, with local minima at $C_{\alpha}$ and $C_{\beta}$ corresponding to the solubilities in the matrix phase and the second phase, respectively; $f_2\left(C , \eta_i \right)$ is an interaction term between the concentration field and the phase fields, and $f_3 \left( \eta_i, \eta_j \right)$ is the free energy density of the phase fields. Simple models for these free energy densities are,
$$ f_1\left( C \right) =
- \frac{A}{2} \left(C - C_m\right)^2
+ \frac{B}{4} \left(C - C_m\right)^4
+ \frac{D_{\alpha}}{4} \left(C - C_{\alpha} \right)^4
+ \frac{D_{\beta}}{4} \left(C - C_{\beta} \right)^4 $$
where
$$ C_m = \frac{1}{2} \left(C_{\alpha} + C_{\beta} \right) $$
and
$$ f_2 \left( C, \eta_i \right) = - \frac{\gamma}{2} \left( C - C_{\alpha} \right)^2 \eta_i^2 + \frac{\beta}{2} \eta_i^4 $$
and
$$ f_3 \left( \eta_i, \eta_j \right) = \frac{ \epsilon_{ij} }{2} \eta_i^2 \eta_j^2, i \ne j $$
The time evolution of the system is now given by coupled Cahn-Hilliard and Allen-Cahn (time-dependent Ginzburg-Landau) equations for the conserved concentration field and the non-conserved phase fields:
$$
\begin{eqnarray}
\frac{\partial C}{\partial t} &=& \nabla \cdot \left \{
D \nabla \left[ \frac{\delta F}{\delta C} \right] \right \} \\
&=& D \left[ -A + 3 B \left( C- C_m \right)^2 + 3 D_{\alpha} \left( C - C_{\alpha} \right)^2 + 3 D_{\beta} \left( C - C_{\beta} \right)^2 \right] \nabla^2 C \\
& & -D \gamma \sum_{i=1}^{p} \left[ \eta_i^2 \nabla^2 C + 4 \nabla C \cdot \nabla \eta_i + 2 \left( C - C_{\alpha} \right) \nabla^2 \eta_i \right] - D \kappa_C \nabla^4 C
\end{eqnarray}
$$
and the phase field equations
$$
\begin{eqnarray}
\frac{\partial \eta_i}{\partial t} &=& - L_i \frac{\delta F}{\delta \eta_i} \\
&=& -L_i \left[ \frac{\partial f_2}{\partial \eta_i} + \frac{\partial f_3}{\partial \eta_i} - \kappa_i \nabla^2 \eta_i \left(\vec{r}, t\right) \right] \\
&=& L_i \gamma \left( C - C_{\alpha} \right)^2 \eta_i - L_i \beta \eta_i^3 - L_i \eta_i \sum_{j\ne i}^{p} \epsilon_{ij} \eta^2_j + L_i \kappa_i \nabla^2 \eta_i
\end{eqnarray}
$$_____no_output_____### Parameter Values_____no_output_____Use the following parameter values.
<table width="200">
<tr>
<td> $C_{\alpha}$ </td>
<td> 0.05 </td>
</tr>
<tr>
<td> $C_{\beta}$ </td>
<td> 0.95 </td>
</tr>
<tr>
<td> A </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_i$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_j$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_k$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\epsilon_{ij}$ </td>
<td> 3.0 </td>
</tr>
<tr>
<td> $\beta$ </td>
<td> 1.0 </td>
</tr>
<tr>
<td> $p$ </td>
<td> 10 </td>
</tr>
</table>
with
$$ B = \frac{A}{\left( C_{\alpha} - C_m \right)^2} $$
$$ \gamma = \frac{2}{\left(C_{\beta} - C_{\alpha}\right)^2} $$
$$ D = D_{\alpha} = D_{\beta} = \frac{\gamma}{\delta^2} $$
The diffusion coefficient, $D$, is constant and isotropic and the same (unity) for both phases; the mobility-related constants, $L_i$, are the same (unity) for all phase fields._____no_output_____### Initial Conditions_____no_output_____Set $c\left(\vec{r}, t\right)$ such that
$$
\begin{eqnarray}
c\left(\vec{r}, 0\right) &=& \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) \\
\eta_i\left(\vec{r}, 0\right) &=& \bar{\eta}_0 + 0.01 \epsilon_i \cos^2 \left( \vec{q} \cdot \vec{r} \right)
\end{eqnarray}
$$
where
<table width="200">
<tr>
<td> $\bar{c}_0$ </td>
<td> 0.5 </td>
</tr>
<tr>
<td> $\vec{q}$ </td>
<td> $\left(\sqrt{2},\sqrt{3}\right)$ </td>
</tr>
<tr>
<td> $\epsilon$ </td>
<td> 0.01 </td>
</tr>
<tr>
<td> $\vec{q}_i$ </td>
<td> $\left( \sqrt{23 + i}, \sqrt{149 + i} \right)$ </td>
</tr>
<tr>
<td> $\epsilon_i$ </td>
<td> 0.979285, 0.219812, 0.837709, 0.695603, 0.225115,
0.389266, 0.585953, 0.614471, 0.918038, 0.518569 </td>
</tr>
<tr>
<td> $\eta_0$ </td>
<td> 0.0 </td>
</tr>
</table>_____no_output_____### Domains_____no_output_____#### a. Square Periodic_____no_output_____2D square domain with $L_x = L_y = 200$ and periodic boundary conditions._____no_output_____
<code>
from IPython.display import SVG
SVG(filename='../images/block1.svg')_____no_output_____
</code>
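As in problem 1, only a sketch (not the included FiPy solutions): one of the $p$ Allen-Cahn equations can be assembled with an implicit source for the local terms. Here `C` and the other phase fields are assumed to be FiPy `CellVariable`s defined elsewhere, and the constants come from the parameter table above:
<code>
# Sketch: one of the p Allen-Cahn updates (assumes C and the etas exist)
from fipy import TransientTerm, DiffusionTerm, ImplicitSourceTerm

C_ALPHA, C_BETA = 0.05, 0.95
GAMMA = 2. / (C_BETA - C_ALPHA)**2      # gamma from the parameter table

def allen_cahn_eq(eta_i, other_etas, C, L_i=1., kappa_i=2., beta=1., eps_ij=3.):
    """One Allen-Cahn equation; the local terms enter as an implicit source."""
    cross = sum(eta_j**2 for eta_j in other_etas)     # sum over j != i of eta_j^2
    coeff = L_i * (GAMMA * (C - C_ALPHA)**2 - beta * eta_i**2 - eps_ij * cross)
    return (TransientTerm()
            == DiffusionTerm(coeff=L_i * kappa_i)
            + ImplicitSourceTerm(coeff=coeff))_____no_output_____
</code>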
#### b. No Flux_____no_output_____2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions._____no_output_____#### c. T-Shape No Flux_____no_output_____T-shaped region with zero flux boundary conditions with $a=b=100$ and $c=d=20.$_____no_output_____
<code>
from IPython.display import SVG
SVG(filename='../images/t-shape.svg')_____no_output_____
</code>
#### d. Sphere_____no_output_____Domain is the surface of a sphere with radius 100, but with initial conditions of
$$ c\left(\theta, \phi, 0\right) = \bar{c}_0 + \epsilon \cos \left( \sqrt{233} \theta \right)
\sin \left( \sqrt{239} \phi \right) $$
and
$$ \eta_i\left(\theta, \phi, 0\right) = \bar{\eta}_0 + 0.01 \epsilon_i \cos^2 \left( \sqrt{23 + i} \theta \right)
\sin^2 \left( \sqrt{149 + i} \phi \right) $$
where $\theta$ and $\phi$ are the polar and azimuthal angles in a spherical coordinate system and parameter values are in the table above._____no_output_____### Tasks_____no_output_____Your task for each domain,
1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie
2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.
3. Present wall clock time for the calculations, and wall clock time per core used in the calculation._____no_output_____
| {
"repository": "dschwen/chimad-phase-field",
"path": "hackathon1/problems.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 35514,
"hexsha": "cb1c08ac68f7b33737dd42eb6053c2daf8c12af3",
"max_line_length": 5186,
"avg_line_length": 51.025862069,
"alphanum_fraction": 0.5679450358
} |
# Notebook from UWashington-Astro300/Astro300-W17
Path: FirstLast_HW4.ipynb
# First Last - Homework 4_____no_output_____* Use the `Astropy` units and constants packages to solve the following problems.
* Do not hardcode any constants!
* Unless asked, your units should be in the simplest SI units possible_____no_output_____
<code>
import numpy as np
from astropy import units as u
from astropy import constants as const
from astropy.units import imperial
imperial.enable()_____no_output_____
</code>
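A generic warm-up with the units machinery (deliberately not one of the homework answers): quantities carry their units through arithmetic and convert explicitly.
<code>
# Generic example of unit-aware arithmetic and conversion (not a solution)
speed = 60 * imperial.mi / u.hr
print(speed.to(u.m / u.s))           # ~26.82 m/s
print((1 * imperial.lbf).to(u.N))    # 1 lbf ~ 4.448 N
print(const.G)                       # gravitational constant, with units_____no_output_____
</code>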
### Impulse is a change in momentum
$$ I = \Delta\ p\ =\ m\Delta v $$_____no_output_____**Problem 1** - Calculate the $\Delta$v that would be the result of an impulse of 700 (N * s) for M = 338 kg._____no_output_____**Problem 2** - Calculate the $\Delta$v that would be the result of an impulse of 700 (lbf * s) for M = 338 kg._____no_output_____This is the unit conversion error that doomed the [Mars Climate Orbiter](https://en.wikipedia.org/wiki/Mars_Climate_Orbiter)._____no_output_____### The range of a projectile launched with a velocity (v) at an angle ($\theta$) is
$$R\ =\ {v^2 \over g}\ \sin(2\theta)$$_____no_output_____**Problem 3** - Find R for v = 123 mph and $\theta$ = 1000 arc minutes_____no_output_____**Problem 4** - How fast do you have to throw a football at 33.3 degrees so that it goes exactly 100 yards? Express your answer in mph_____no_output_____### Kepler's third law can be expressed as:
$$ T^2 = \left( {{4\pi^2} \over {GM}} \right)\ r^3 $$
Where **T** is the orbital period of an object at distance (**r**) around a central object of mass (**M**).
It assumes the mass of the orbiting object is small compared to the mass of the central object._____no_output_____**Problem 5** - Calculate the orbital period of HST. HST orbits 353 miles above the **surface** of the Earth. Express your answer in minutes._____no_output_____** Problem 6 ** - An exoplanet orbits the star Epsilon Tauri in 595 days at a distance of 1.93 AU. Calculate the mass of Epsilon Tauri in terms of solar masses._____no_output_____### The velocity of an object in orbit is
$$ v=\sqrt{GM\over r} $$
Where the object is at a distance (**r**) around a central object of mass (**M**)._____no_output_____**Problem 7** - Calculate the velocity of HST. Express your answer in km/s and mph._____no_output_____**Problem 8** - The Proclaimers' song [500 miles](https://youtu.be/MJuyn0WAYNI?t=27s) has a duration of 3 minutes and 33 seconds. Calculate at what altitude, above the Earth's surface, you would have to orbit to go 1000 miles in this time. Express your answer in km and miles._____no_output_____### The Power being received by a solar panel in space can be expressed as:
$$ I\ =\ {{L_{\odot}} \over {4 \pi d^2}}\ \varepsilon$$
Where **I** is the power **per unit area** at a distance (**d**) from the Sun, and $\varepsilon$ is the efficiency of the solar panel.
The solar panels that power spacecraft have an efficiency of about 40%._____no_output_____** Problem 9 ** - The [New Horizons](http://pluto.jhuapl.edu/) spacecraft requires 220 Watts of power.
Calculate the area of a solar panel that would be needed to power New Horizons at a distance of 1 AU from the Sun._____no_output_____** Problem 10 ** - Express your answer in units of the area of a piece of US letter-sized paper (8.5 in x 11 in)._____no_output_____** Problem 11 ** - Same question as above but now at d = 30 AU.
Express your answer in both square meters and US letter-sized paper_____no_output_____** Problem 12 ** - The main part of the Oort cloud is thought to be at a distance of about 10,000 AU.
Calculate the size of the solar panel New Horizons would need to operate in the Oort cloud.
Express your answer in units of the area of an American football field (120 yd x 53.3 yd)._____no_output_____** Problem 13 ** - Calculate the maximum distance from the Sun where a solar panel of 1 football field can power the New Horizons spacecraft. Express your answer in AU._____no_output_____### Due Tue Jan 31 - 5pm
- `Make sure to change the filename to your name!`
- `Make sure to change the Title to your name!`
- `File -> Download as -> HTML (.html)`
- `upload your .html and .ipynb file to the class Canvas page` _____no_output_____
| {
"repository": "UWashington-Astro300/Astro300-W17",
"path": "FirstLast_HW4.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 8490,
"hexsha": "cb1c08e90560a37de4d323ad41e054583cb670e8",
"max_line_length": 282,
"avg_line_length": 23.7150837989,
"alphanum_fraction": 0.5455830389
} |
# Notebook from yaoyongxin/qucochemistry
Path: examples/Tutorial_Disassociation_curve_end_to_end.ipynb
# End-to-end quantum chemistry VQE using Qu & Co Chemistry
In this tutorial we show how to solve for the ground-state energy of a hydrogen molecule using VQE, as a function of the spacing between the atoms of the molecule. For a more detailed discussion of MolecularData generation or VQE settings, please refer to our other tutorials. We here focus on the exact UCCSD method, which gives an upper bound on the performance of a UCCSD-based VQE approach. In reality, errors are incurred by Trotterizing the UCC Hamiltonian evolution._____no_output_____
<code>
from openfermion.hamiltonians import MolecularData
from qucochemistry.vqe import VQEexperiment
from openfermionpyscf import run_pyscf
import numpy as np
#H2 spacing
spacing =np.array([0.1,0.15,0.2,0.25,0.3,0.4,0.5,0.6,0.7,0.74,0.75,0.8,0.85,0.9,1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,2.0,2.2,2.4,2.6,2.8,3.0])
M=len(spacing)
# Set molecule parameters and desired basis.
basis = 'sto-3g'
multiplicity = 1
# Set calculation parameters.
run_scf = 1
run_mp2 = 1
run_cisd = 1
run_ccsd = 1
run_fci = 1
E_fci=np.zeros([M,1])
E_hf=np.zeros([M,1])
E_ccsd=np.zeros([M,1])
E_uccsd=np.zeros([M,1])
E_uccsd_opt=np.zeros([M,1])
for i, space in enumerate(spacing):
#construct molecule data storage object
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., space))]
molecule = MolecularData(geometry, basis, multiplicity,description='pyscf_H2_' + str(space*100))
molecule.filename = 'molecules/H2/H2_pyscf_' + str(space)[0] +'_' +str(space)[2:] #location of the .hdf5 file to store the data in
# Run PySCF to add the data.
molecule = run_pyscf(molecule,
run_scf=run_scf,
run_mp2=run_mp2,
run_cisd=run_cisd,
run_ccsd=run_ccsd,
run_fci=run_fci)
vqe = VQEexperiment(molecule=molecule,method='linalg', strategy='UCCSD')
E_uccsd[i]=vqe.objective_function()
vqe.start_vqe()
E_uccsd_opt[i]=vqe.get_results().fun
E_fci[i]=float(molecule.fci_energy)
E_hf[i]=float(molecule.hf_energy)
E_ccsd[i]=float(molecule.ccsd_energy)_____no_output_____
</code>
We compare the results of 5 different strategies: classical HF, CCSD, and FCI, together with a quantum unitary variant of CCSD, called UCCSD, and its optimized version. In other words, we calculate the Hamiltonian expectation value for a wavefunction which was propagated by a UCCSD ansatz with CCSD amplitudes. Then we initiate an optimization algorithm over these starting amplitudes in order to get even closer to the true ground state, thereby minimizing the energy.
In essence, with the method='linalg' option, we do not create a quantum circuit, but rather directly take the matrix exponential of the UCC-Hamiltonian. In reality, for a gate-based architecture, one would need to select a Trotterization protocol to execute this action on a QPU, incurring Trotterization errors along the way.
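To make that concrete, here is a generic illustration (ours, not Qu & Co internals) of exact state propagation by the matrix exponential of an anti-Hermitian generator, which by construction carries no Trotter error:
<code>
# Generic illustration only: exact propagation by a matrix exponential
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
M = rng.normal(size=(4, 4))
K = M - M.T                      # real anti-symmetric generator => expm(K) is orthogonal
psi0 = np.zeros(4); psi0[0] = 1.0
psi = expm(K) @ psi0             # exactly exp(K)|psi0>, analogous to exp(T - T^dagger)|HF>
print(np.linalg.norm(psi))       # norm is preserved (unitarity): 1.0_____no_output_____
</code>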
We plot the results below:_____no_output_____
<code>
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
plt.plot(spacing,E_hf,label='HF energy')
plt.plot(spacing,E_ccsd,label='CCSD energy')
plt.plot(spacing,E_uccsd,label='UCCSD energy (guess)')
plt.plot(spacing,E_uccsd_opt,label='UCCSD energy (optim)')
plt.plot(spacing,E_fci,label='FCI energy')
plt.xlabel('spacing (Angstrom)')
plt.ylabel('Energy (Hartree)')
plt.title('Dissociation curve hydrogen molecule')
plt.legend()_____no_output_____plt.figure()
plt.semilogy(spacing,np.abs(E_fci-E_hf),label='HF energy')
plt.semilogy(spacing,np.abs(E_fci-E_ccsd),label='CCSD energy')
plt.semilogy(spacing,np.abs(E_fci-E_uccsd),label='UCCSD energy (guess)')
plt.semilogy(spacing,np.abs(E_fci-E_uccsd_opt),label='UCCSD energy (optim)')
plt.semilogy(spacing,0.0016*np.ones([len(spacing),1]),label='chemical accuracy',linestyle='-.',color='black')
plt.xlabel('spacing (Angstrom)')
plt.ylabel('Energy error with FCI (Hartree)')
plt.title('Error with FCI - Dissociation curve hydrogen molecule')
plt.legend()_____no_output_____
</code>
We find that the HF energy is not within chemical accuracy of the FCI energy, while CCSD and UCCSD can reach that level. Clearly, for larger bond distances the approximations are less accurate, but the UCCSD optimization still reaches the ground state to numerical precision. Note that the UCCSD method is not guaranteed to reach this level of accuracy for general molecules; one can experiment with that using this notebook before implementing the UCC in a quantum circuit, which will always perform worse._____no_output_____
| {
"repository": "yaoyongxin/qucochemistry",
"path": "examples/Tutorial_Disassociation_curve_end_to_end.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 388447,
"hexsha": "cb1cb4003ec87f1a35975d6450dad8fefb132499",
"max_line_length": 177909,
"avg_line_length": 220.33295519,
"alphanum_fraction": 0.8697994836
} |
# Notebook from LearnableLoopAI/blog2
Path: _notebooks/2022-01-27-MCControl_OffPolicy_BlackJack.ipynb
# "Monte Carlo 6: Off-Policy Control with Importance Sampling in Reinforcement Learning"
> Find the optimal policy using Weighted Importance Sampling
- toc: true
- branch: master
- badges: false
- comments: true
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
- image: images/MCControl_OffPolicy_BlackJack.png
- categories: [Reinforcement_Learning,MC, OpenAI,Gym,]
- show_tags: true_____no_output_____
<code>
# hide
# inspired by
# https://github.com/dennybritz/reinforcement-learning/blob/master/MC/Off-Policy%20MC%20Control%20with%20Weighted%20Importance%20Sampling%20Solution.ipynb_____no_output_____#hide
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/"
# base_dir = root_dir + 'Sutton&Barto/ch05/dennybritz_reinforcement-learning_MC/'
base_dir = root_dir + 'Sutton&Barto/'Mounted at /content/gdrive
# hide
%cd "{base_dir}"/content/gdrive/My Drive/Sutton&Barto
# hide
!pwd/content/gdrive/My Drive/Sutton&Barto
</code>
## 1. Introduction
In a *Markov Decision Process* (Figure 1) the *agent* and *environment* interact continuously.

More details are available in [Reinforcement Learning: An Introduction by Sutton and Barto](http://incompleteideas.net/book/RLbook2020.pdf).
The dynamics of the MDP is given by
$$
\begin{aligned}
p(s',r|s,a) &= Pr\{ S_{t+1}=s',R_{t+1}=r | S_t=s,A_t=a \} \\
\end{aligned}
$$
The *policy* of an agent is a mapping from the current state of the environment to an *action* that the agent needs to take in this state. Formally, a policy is given by
$$
\begin{aligned}
\pi(a|s) &= Pr\{A_t=a|S_t=s\}
\end{aligned}
$$
The discounted *return* is given by
$$
\begin{aligned}
G_t &= R_{t+1} + \gamma R_{t+2} + \gamma ^2 R_{t+3} + ... + R_T \\
&= \sum_{k=0}^\infty \gamma ^k R_{t+1+k}
\end{aligned}
$$
where $\gamma$ is the discount factor and $R$ is the *reward*.
Most reinforcement learning algorithms involve the estimation of value functions - in our present case, the *state-value function*. The state-value function maps each state to a measure of "how good it is to be in that state" in terms of expected rewards. Formally, the state-value function, under policy $\pi$ is given by
$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi[G_t|S_t=s]
\end{aligned}
$$
The Monte Carlo algorithm discussed in this post will numerically estimate the action-value function $q_\pi(s,a)$ and, from it, the optimal policy._____no_output_____## 2. Environment_____no_output_____The environment is the game of *Blackjack*. The player tries to get cards whose sum is as great as possible without exceeding 21. Face cards count as 10. An ace can be taken either as a 1 or an 11. Two cards are dealt to both dealer and player. One of the dealer's cards is face up (the other is face down). The player can request additional cards, one by one (called *hits*) until the player stops (called *sticks*) or goes above 21 (goes *bust* and loses). When the player sticks it becomes the dealer's turn, which uses a fixed strategy: stick when the sum is 17 or greater and hit otherwise. If the dealer goes bust the player wins; otherwise the winner is determined by whose sum is closer to 21.
We formulate this game as an episodic finite MDP. Each game is an episode.
* States are based on the player's
* current sum (12-21)
* player will automatically keep on getting cards until the sum is at least 12 (this is a rule and the player does not have a choice in this matter)
* dealer's face up card (ace-10)
* whether player holds usable ace (True or False)
This gives a total of 200 states: $10 \times 10 \times 2 = 200$
* Rewards:
* +1 for winning
* -1 for losing
* 0 for drawing
* Reward for stick:
* +1 if sum > sum of dealer
* 0 if sum = sum of dealer
* -1 if sum < sum of dealer
* Reward for hit:
* -1 if sum > 21
* 0 otherwise
The environment is implemented using the OpenAI Gym library._____no_output_____## 3. Agent_____no_output_____The *agent* is the player. After observing the state of the *environment*, the agent can take one of two possible actions:
* stick (0) [stop receiving cards]
* hit (1) [have another card]
In this post the agent learns a deterministic *target* policy (greedy with respect to its current action-value estimates) while generating episodes with a random *behavior* policy, as explained in the next section._____no_output_____## 4. Monte Carlo Estimation of the Action-value Function, $q_\pi(s,a)$_____no_output_____We will now proceed to estimate the action-value function for the target policy $\pi$. We can take $\gamma=1$ as the sum will remain finite:
$$ \large
\begin{aligned}
q_\pi(s,a) &= \mathbb{E}_\pi[G_t | S_t=s, A_t=a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma R_{t+2} + \gamma ^2 R_{t+3} + ... + R_T | S_t=s, A_t=a] \\
&= \mathbb{E}_\pi[R_{t+1} + R_{t+2} + R_{t+3} + ... + R_T | S_t=s, A_t=a]
\end{aligned}
$$
In numeric terms this means that, given a state and an action, we take the sum of all rewards from that state onwards (following policy $\pi$) until the game ends, and take the average of all such sequences.
_____no_output_____### 4.1 Off-policy Estimation via Importance Sampling_____no_output_____On-policy methods, used so far in this series, represent a compromise. They learn action values not for the optimal policy but for a near-optimal policy that can still explore. Off-policy methods, on the other hand, make use of *two* policies - one that is being optimized (called the *target* policy) and another one (the *behavior* policy) that is used for exploratory purposes.
An important concept used by off-policy methods is *importance sampling*. This is a general technique for estimating expected values under one distribution by using samples from another. This allows us to weight returns according to the relative probability of a trajectory occurring under the target and behavior policies. This relative probability is called the importance-sampling ratio
$$ \large
\rho_{t:T-1}=\frac{\prod_{k=t}^{T-1} \pi(A_k|S_k) p(S_{k+1}|S_k,A_k)}{\prod_{k=t}^{T-1} b(A_k|S_k) p(S_{k+1}|S_k,A_k)}=\prod_{k=t}^{T-1} \frac{\pi(A_k|S_k)}{b(A_k|S_k)}
$$
where $\pi$ is the *target* policy, and $b$ is the *behavior* policy.
In order to estimate $q_{\pi}(s,a)$ we need to estimate expected returns under the target policy. However, we only have access to returns due to the behavior
policy. To perform this "off-policy" procedure we can make use of the following:
$$ \large
\begin{aligned}
q_\pi(s,a) &= \mathbb E_\pi[G_t|S_t=s, A_t=a] \\
&= \mathbb E_b[\rho_{t:T-1}G_t|S_t=s, A_t=a]
\end{aligned}
$$
This allows us to simply scale or weight the returns under $b$ to yield returns under $\pi$.
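As a concrete example: with the uniform random behavior policy used below, $b(A_k|S_k) = 0.5$ for both actions, so every step on which the deterministic target policy would have chosen the same action multiplies the weight by $\frac{\pi(A_k|S_k)}{b(A_k|S_k)} = \frac{1}{0.5} = 2$. An episode tail of $n$ such agreeing steps therefore receives the weight $\rho = 2^n$, while a step on which the target policy would have acted differently makes the whole ratio $0$.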
In our current *control* problem, the behavior policy stays fixed (a random policy), while the target policy is continually updated to be greedy with respect to the current action-value estimates._____no_output_____## 5. Implementation_____no_output_____
_____no_output_____Next, we present the code that implements the algorithm._____no_output_____
<code>
import gym
import matplotlib
import numpy as np
import sys
from collections import defaultdict
import pprint as pp
from matplotlib import pyplot as plt
%matplotlib inline_____no_output_____# hide
# from lib import plotting as myplot
# from lib.envs.blackjack import BlackjackEnv
from dennybritz_lib import plotting as myplot
from dennybritz_lib.envs.blackjack import BlackjackEnv_____no_output_____# hide
# env = gym.make('Blackjack-v0')#.has differences cp to the one used here
#- env = gym.make('Blackjack-v1')#.does not exist_____no_output_____env = BlackjackEnv()_____no_output_____
</code>
### 5.1 Policy
The following function captures the target policy used:_____no_output_____
<code>
def create_random_policy(n_A):
A = np.ones(n_A, dtype=float)/n_A
def policy_function(observation):
return A
return policy_function #vector of action probabilities_____no_output_____# hide
# def create_greedy_policy(Q):
# def policy_function(state):
# A = np.zeros_like(Q[state], dtype=float)
# best_action = np.argmax(Q[state])
# A[best_action] = 1.0
# return A
# return policy_function_____no_output_____# hide
# def create_policy():
# policy = defaultdict(int)
# for sum in [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]:
# for showing in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
# for usable in [False, True]:
# policy[(sum, showing, usable)] = np.random.choice([0, 1]) #random
# # policy[(sum, showing, usable)] = 0 #all zeros
# return policy_____no_output_____def create_policy():
policy = defaultdict(int)
return policy_____no_output_____
</code>
### 5.2 Generate episodes
The following function sets the environment to a random initial state. It then enters a loop where each iteration applies the policy to the environment's state to obtain the next action to be taken by the agent. That action is then applied to the environment to get the next state, and so on until the episode ends._____no_output_____
<code>
def generate_episode(env, policy):
episode = []
state = env.reset() #to a random state
while True:
probs = policy(state)
action = np.random.choice(np.arange(len(probs)), p=probs)
next_state, reward, done, _ = env.step(action) # St+1, Rt+1 OR s',r
episode.append((state, action, reward)) # St, At, Rt+1 OR s,a,r
if done:
break
state = next_state
return episode_____no_output_____
</code>
### 5.3 Main loop
The following function implements the main loop of the algorithm. It iterates for ``n_episodes``. It also takes a list of ``monitored_state_actions`` for which it will record the evolution of action values. This is handy for showing how action values converge during the process._____no_output_____
<code>
def mc_control(env, n_episodes, discount_factor=1.0, monitored_state_actions=None, diag=False):
#/// G_sum = defaultdict(float)
#/// G_count = defaultdict(float)
Q = defaultdict(lambda: np.zeros(env.action_space.n))
C = defaultdict(lambda: np.zeros(env.action_space.n))
pi = create_policy()
monitored_state_action_values = defaultdict(list)
for i in range(1, n_episodes + 1):
if i%1000 == 0: print("\rEpisode {}/{}".format(i, n_episodes), end=""); sys.stdout.flush()
b = create_random_policy(env.action_space.n)
episode = generate_episode(env, b); print(f'\nepisode {i}: {episode}') if diag else None
G = 0.0
W = 1.0
for t in range(len(episode))[::-1]:
St, At, Rtp1 = episode[t]
print(f"---t={t} St, At, Rt+1: {St, At, Rtp1}") if diag else None
G = discount_factor*G + Rtp1; print(f"G: {G}") if diag else None
C[St][At] += W; print(f"C[St][At]: {C[St][At]}") if diag else None #Weighted Importance Sampling (WIS) denominator
Q[St][At] += (W/C[St][At])*(G - Q[St][At]); print(f"Q[St][At]: {Q[St][At]}") if diag else None
pi[St] = np.argmax(Q[St]) #greedify pi, max_a Q[state][0], Q[state][1]
if At != np.argmax(pi[St]):
break
W = W*1.0/b(St)[At]; print(f"W: {W}, b(St)[At]: {b(St)[At]}") if diag else None
if monitored_state_actions:
for msa in monitored_state_actions:
s = msa[0]; a = msa[1]
# print("\rQ[{}]: {}".format(msa, Q[s][a]), end=""); sys.stdout.flush()
monitored_state_action_values[msa].append(Q[s][a])
print('\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None
#/// pp.pprint(f'G_sum: {G_sum}') if diag else None
#/// pp.pprint(f'G_count: {G_count}') if diag else None
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None
print('\nmonitored_state_action_values:', monitored_state_action_values) if diag else None
return Q,pi,monitored_state_action_values _____no_output_____
</code>
### 5.4 Monitored state-actions
Let's pick a number of state-actions to monitor. Each tuple captures the player's sum, the dealer's showing card, and whether the player has a usable ace, as well as the action taken in the state:_____no_output_____
<code>
monitored_state_actions=[((21, 7, False), 0), ((20, 7, True), 0), ((12, 7, False), 1), ((17, 7, True), 0)]_____no_output_____Q,pi,monitored_state_action_values = mc_control(
env,
n_episodes=10,
monitored_state_actions=monitored_state_actions,
diag=True)
episode 1: [((15, 3, False), 1, 0), ((19, 3, False), 1, -1)]
---t=1 St, At, Rt+1: ((19, 3, False), 1, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
episode 2: [((18, 1, False), 0, -1)]
---t=0 St, At, Rt+1: ((18, 1, False), 0, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
W: 2.0, b(St)[At]: 0.5
episode 3: [((20, 10, False), 0, 1)]
---t=0 St, At, Rt+1: ((20, 10, False), 0, 1)
G: 1.0
C[St][At]: 1.0
Q[St][At]: 1.0
W: 2.0, b(St)[At]: 0.5
episode 4: [((20, 6, False), 0, 1)]
---t=0 St, At, Rt+1: ((20, 6, False), 0, 1)
G: 1.0
C[St][At]: 1.0
Q[St][At]: 1.0
W: 2.0, b(St)[At]: 0.5
episode 5: [((15, 3, True), 1, 0), ((15, 3, False), 0, -1)]
---t=1 St, At, Rt+1: ((15, 3, False), 0, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
W: 2.0, b(St)[At]: 0.5
---t=0 St, At, Rt+1: ((15, 3, True), 1, 0)
G: -1.0
C[St][At]: 2.0
Q[St][At]: -1.0
episode 6: [((13, 4, False), 0, 1)]
---t=0 St, At, Rt+1: ((13, 4, False), 0, 1)
G: 1.0
C[St][At]: 1.0
Q[St][At]: 1.0
W: 2.0, b(St)[At]: 0.5
episode 7: [((14, 9, False), 0, -1)]
---t=0 St, At, Rt+1: ((14, 9, False), 0, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
W: 2.0, b(St)[At]: 0.5
episode 8: [((17, 10, False), 1, -1)]
---t=0 St, At, Rt+1: ((17, 10, False), 1, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
episode 9: [((16, 7, False), 0, -1)]
---t=0 St, At, Rt+1: ((16, 7, False), 0, -1)
G: -1.0
C[St][At]: 1.0
Q[St][At]: -1.0
W: 2.0, b(St)[At]: 0.5
episode 10: [((20, 6, False), 1, 0), ((21, 6, False), 0, 0)]
---t=1 St, At, Rt+1: ((21, 6, False), 0, 0)
G: 0.0
C[St][At]: 1.0
Q[St][At]: 0.0
W: 2.0, b(St)[At]: 0.5
---t=0 St, At, Rt+1: ((20, 6, False), 1, 0)
G: 0.0
C[St][At]: 2.0
Q[St][At]: 0.0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"G_sum: defaultdict(<class 'float'>, {})"
"G_count: defaultdict(<class 'float'>, {})"
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
monitored_state_action_values: defaultdict(<class 'list'>, {((21, 7, False), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((20, 7, True), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((12, 7, False), 1): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], ((17, 7, True), 0): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]})
Q_____no_output_____Q[(13, 5, False)]_____no_output_____pi_____no_output_____pi[(18, 4, False)]_____no_output_____V = defaultdict(float)
for state, actions in Q.items():
action_value = np.max(actions)
V[state] = action_value_____no_output_____V_____no_output_____print(monitored_state_actions[0])
print(monitored_state_action_values[monitored_state_actions[0]])((21, 7, False), 0)
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
#
# last value in monitored_state_actions should be value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values[msa][-1], Q[s][a] #monitored_stuff[msa] BUT Q[s][a]msa: ((21, 7, False), 0)
s: (21, 7, False)
a: 0
</code>
### 5.5 Run 1
First, we will run the algorithm for 10,000 episodes, using the random behavior policy to generate episodes:_____no_output_____
<code>
Q1,pi1,monitored_state_action_values1 = mc_control(
env,
n_episodes=10_000,
monitored_state_actions=monitored_state_actions,
diag=False)Episode 10000/10000#
# last value in monitored_state_actions should be value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values1[msa][-1], Q1[s][a] #monitored_stuff[msa] BUT Q[s][a]msa: ((21, 7, False), 0)
s: (21, 7, False)
a: 0
</code>
The following chart shows how the values of the 4 monitored state-actions converge to their values:_____no_output_____
<code>
plt.rcParams["figure.figsize"] = (18,10)
for msa in monitored_state_actions:
plt.plot(monitored_state_action_values1[msa])
plt.title(r'Estimated $q_\pi(s,a)$ for some state-actions', fontsize=18)
plt.xlabel('Episodes', fontsize=16)
plt.ylabel(r'Estimated $q_\pi(s,a)$', fontsize=16)
plt.legend(monitored_state_actions, fontsize=16)
plt.show()_____no_output_____
</code>
The following charts show the estimated optimal state-value function, $v_*(s)$, for the cases of a usable ace as well as no usable ace. First, we compute ```V1```, the estimate of $v_*(s)$:_____no_output_____
<code>
V1 = defaultdict(float)
for state, actions in Q1.items():
action_value = np.max(actions)
V1[state] = action_value_____no_output_____AZIM = -110
ELEV = 20_____no_output_____myplot.plot_pi_star_and_v_star(pi1, V1, title=r"$\pi_*$ and $v_*$", wireframe=False, azim=AZIM-40, elev=ELEV);_____no_output_____
</code>
### 5.6 Run 2
Our final run uses 500,000 episodes, which yields a more accurate action-value function:_____no_output_____
<code>
Q2,pi2,monitored_state_action_values2 = mc_control(
env,
n_episodes=500_000,
monitored_state_actions=monitored_state_actions,
diag=False)Episode 500000/500000#
# last value in monitored_state_actions should be value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values2[msa][-1], Q2[s][a] #monitored_stuff[msa] BUT Q[s][a]msa: ((21, 7, False), 0)
s: (21, 7, False)
a: 0
plt.rcParams["figure.figsize"] = (18,12)
for msa in monitored_state_actions:
plt.plot(monitored_state_action_values2[msa])
plt.title(r'Estimated $q_\pi(s,a)$ for some state-actions', fontsize=18)
plt.xlabel('Episodes', fontsize=16)
plt.ylabel(r'Estimated $q_\pi(s,a)$', fontsize=16)
plt.legend(monitored_state_actions, fontsize=16)
plt.show()_____no_output_____V2 = defaultdict(float)
for state, actions in Q2.items():
action_value = np.max(actions)
V2[state] = action_value_____no_output_____# myplot.plot_action_value_function(Q2, title="500,000 Steps", wireframe=True, azim=AZIM, elev=ELEV)
myplot.plot_pi_star_and_v_star(pi2, V2, title=r"$\pi_*$ and $v_*$", wireframe=False, azim=AZIM-40, elev=ELEV);_____no_output__________no_output_____
</code>
| {
"repository": "LearnableLoopAI/blog2",
"path": "_notebooks/2022-01-27-MCControl_OffPolicy_BlackJack.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 654161,
"hexsha": "cb2071004dc3925bb86c89288d76e4dfeab21ff7",
"max_line_length": 241454,
"avg_line_length": 497.8394216134,
"alphanum_fraction": 0.9314022083
} |
# Notebook from JulienPeloton/fink_grandma_kn
Path: KN-Mangrove_filter.ipynb
# GRANDMA/Kilonova-catcher --- KN-Mangrove
The purpose of this notebook is to inspect the ZTF alerts that were selected by the Fink KN-Mangrove filter as potential Kilonova candidates in the period 2021/04/01 to 2021/08/31, and forwarded to the GRANDMA/Kilonova-catcher project for follow-up observations.
With the other filter (KN-LC), we need at least two days to identify a candidate. Two days may seem like a short amount of time, but if the object is a kilonova, it will already be fading, or even too faint to be observed. The second filter aims to tackle younger detections: an alert is considered a candidate if, on top of the other cuts, one can identify a suitable host galaxy and the resulting absolute magnitude is compatible with kilonova models.
To identify possible hosts, we used the MANGROVE catalog [1], an inventory of 800,000 galaxies. At this point, we are only interested in their position in the sky (right ascension, declination) and their luminosity distance. We only considered galaxies within a 230 Mpc range, as this is the current observation range of the gravitational-wave interferometers.
This filter uses the following cuts:
- Point-like object: the star/galaxy extractor score must be above 0.4, i.e., the alert should appear point-like. In practice, few objects score below 0.4 with the current implementation, and those that do are most likely bogus detections.
- Non-artefact: the deep real/bogus score must be above 0.5.
- Object not referenced in the SIMBAD catalog (which lists known galactic sources).
- Young detection: less than 6 hours.
- Galaxy association: the alert should lie within a projected distance of 10 kpc of a galaxy from the Mangrove catalog.
- Absolute magnitude: the absolute magnitude of the alert should be −16 ± 1.
According to [2], we expect a kilonova event to display an absolute magnitude of −16 ± 1. We generally don't know the distance of an alert, so we compute its absolute magnitude as if it were located in the associated galaxy. This threshold is given for the g band, but for lack of observational constraints it was applied to both the g and r bands without distinction.
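To make the absolute magnitude cut concrete, here is a minimal sketch of the underlying distance-modulus computation (the function name and the example numbers are illustrative, not the exact Fink implementation):

```python
import numpy as np

def absolute_magnitude(m_apparent, lum_dist_mpc):
    # Distance modulus: M = m - 5*log10(d / 10 pc), assuming the alert
    # really lies in the associated host galaxy.
    return m_apparent - 5 * np.log10(lum_dist_mpc * 1e6 / 10)

M = absolute_magnitude(20.5, 200)  # an alert at mag 20.5 hosted at 200 Mpc
print(M)                           # ~ -16.0, compatible with the -16 ± 1 cut
```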
The galaxy association method is also not perfect: it can lead to the mis-association of an event that is in the foreground or the background of a galaxy. But this is necessary, as the luminosity distance between the Earth and the alert is usually unknown.
[1] J-G Ducoin et al. “Optimizing gravitational waves follow-up using galaxies stellar mass”. In: Monthly Notices of the Royal Astronomical Society 492.4 (Jan. 2020), pp. 4768–4779. issn: 1365-2966. doi: 10.1093/mnras/staa114. url: http://dx.doi.org/10.1093/mnras/staa114.
[2] Mansi M. Kasliwal et al. “Kilonova Luminosity Function Constraints Based on Zwicky Transient Facility Searches for 13 Neutron Star Merger Triggers during O3”. In: The Astrophysical Journal 905.2 (Dec. 2020), p. 145. issn: 1538-4357. doi: 10.3847/1538-4357/abc335. url: http://dx.doi.org/10.3847/1538-4357/abc335._____no_output_____
<code>
import os
import requests
import pandas as pd
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units as u
# pip install fink_filters
from fink_filters import __file__ as fink_filters_location
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
APIURL = 'https://fink-portal.org'_____no_output_____
</code>
## KN-Mangrove data
Let's load the alert data from this filter:_____no_output_____
<code>
pdf_kn_ma = pd.read_parquet('data/0104_3009_kn_filter2_class.parquet')
nalerts_kn_ma = len(pdf_kn_ma)
nunique_alerts_kn_ma = len(np.unique(pdf_kn_ma['objectId']))
print(
'{} alerts loaded ({} unique objects)'.format(
nalerts_kn_ma,
nunique_alerts_kn_ma
)
)68 alerts loaded (59 unique objects)
</code>
## Visualising the candidates
Finally, let's inspect one lightcurve:_____no_output_____
<code>
oid = pdf_kn_ma['objectId'].values[2]
tns_class = pdf_kn_ma['TNS'].values[2]
kn_trigger = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[2]
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'withupperlim': 'True'
}
)
# Format output in a DataFrame
pdf = pd.read_json(r.content)
fig = plt.figure(figsize=(15, 6))
colordic = {1: 'C0', 2: 'C1'}
for filt in np.unique(pdf['i:fid']):
maskFilt = pdf['i:fid'] == filt
# The column `d:tag` is used to check data type
maskValid = pdf['d:tag'] == 'valid'
plt.errorbar(
pdf[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskValid & maskFilt]['i:magpsf'],
pdf[maskValid & maskFilt]['i:sigmapsf'],
ls = '', marker='o', color=colordic[filt]
)
maskUpper = pdf['d:tag'] == 'upperlim'
plt.plot(
pdf[maskUpper & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskUpper & maskFilt]['i:diffmaglim'],
ls='', marker='^', color=colordic[filt], markerfacecolor='none'
)
maskBadquality = pdf['d:tag'] == 'badquality'
plt.errorbar(
pdf[maskBadquality & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskBadquality & maskFilt]['i:magpsf'],
pdf[maskBadquality & maskFilt]['i:sigmapsf'],
ls='', marker='v', color=colordic[filt]
)
plt.axvline(kn_trigger - 2400000.5, ls='--', color='grey')
plt.gca().invert_yaxis()
plt.xlabel('Modified Julian Date')
plt.ylabel('Magnitude')
plt.title('{}'.format(oid))
plt.show()
print('{}/{}'.format(APIURL, oid))_____no_output_____
</code>
Circles (●) with error bars show valid alerts that pass the Fink quality cuts. Upper triangles with error bars (▲) show alert measurements that do not satisfy the Fink quality cuts but are nevertheless contained in the history of valid alerts and used by classifiers. Lower triangles (▽) show the 5-sigma magnitude limit in the difference image, based on PSF-fit photometry, contained in the history of valid alerts. The vertical line shows the KN trigger by Fink._____no_output_____## Evolution of the classification_____no_output_____Each alert was triggered because the Fink pipelines favoured the KN flavor at the time of emission. But the underlying object on the sky might have generated further alerts afterwards, and the classification could evolve. For a handful of alerts, let's see what they became. For this, we will use the Fink REST API and query all the data for the underlying object:_____no_output_____
<code>
NALERTS = 3
oids = pdf_kn_ma['objectId'].values[0: NALERTS]
kn_triggers = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[0: NALERTS]
for oid, kn_trigger in zip(oids, kn_triggers):
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'output-format': 'json'
}
)
# Format output in a DataFrame
pdf_ = pd.read_json(r.content)
times, classes = np.transpose(pdf_[['i:jd','v:classification']].values)
fig = plt.figure(figsize=(12, 5))
plt.plot(times, classes, ls='', marker='o')
plt.axvline(kn_trigger, ls='--', color='C1')
plt.title(oid)
plt.xlabel('Time (Julian Date)')
plt.ylabel('Fink inferred classification')
plt.show()_____no_output_____oids_____no_output_____
</code>
Note that the Kilonova classification does not appear here, as this label is reserved for the KN-LC filter. We are working on assigning a dedicated label.
One can see that the alert classification for a given object can change over time: as we collect more data, we get a clearer view of the nature of the object. Let's make a histogram of the final classification for each object (~1 min to run)_____no_output_____
<code>
final_classes = []
oids = np.unique(pdf_kn_ma['objectId'].values)
for oid in oids:
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'output-format': 'json'
}
)
pdf_ = pd.read_json(r.content)
if not pdf_.empty:
final_classes.append(pdf_['v:classification'].values[0])_____no_output_____fig = plt.figure(figsize=(12, 5))
plt.hist(final_classes)
plt.xticks(rotation=15.)
plt.title('Final Fink classification of KN candidates');_____no_output_____
</code>
Most of the objects are still unknown according to Fink._____no_output_____## Follow-up of candidates by other instruments_____no_output_____Some of the alerts benefited from follow-up by other instruments to determine their nature. Usually this information can be found on the TNS server (although this is highly biased towards Supernovae). We attached this information to the alerts (if it exists):_____no_output_____
<code>
pdf_kn_ma.groupby('TNS').count().sort_values('objectId', ascending=False)['objectId']_____no_output_____
</code>
We can see that among all 53 alerts forwarded by Fink, 39 have no known counterpart in TNS (i.e. no follow-up result was reported). _____no_output_____## Retrieving Mangrove data_____no_output_____
<code>
catalog_path = os.path.join(os.path.dirname(fink_filters_location), 'data/mangrove_filtered.csv')
pdf_mangrove = pd.read_csv(catalog_path)_____no_output_____pdf_mangrove.head(2)_____no_output_____# ZTF
ra1 = pdf_kn_ma['candidate'].apply(lambda x: x['ra'])
dec1 = pdf_kn_ma['candidate'].apply(lambda x: x['dec'])
# Mangrove
cols = ['internal_names', 'ra', 'declination', 'discoverydate', 'type']
ra2, dec2, name, lum_dist, ang_dist = pdf_mangrove['ra'], pdf_mangrove['dec'], pdf_mangrove['2MASS_name'], pdf_mangrove['lum_dist'], pdf_mangrove['ang_dist']
# create catalogs
catalog_ztf = SkyCoord(ra=ra1.values*u.degree, dec=dec1.values*u.degree)
catalog_tns = SkyCoord(ra=np.array(ra2, dtype=float)*u.degree, dec=np.array(dec2, dtype=float)*u.degree)
# cross-match
idx, d2d, d3d = catalog_ztf.match_to_catalog_sky(catalog_tns)
pdf_kn_ma['2MASS_name'] = name.values[idx]
pdf_kn_ma['separation (Kpc)'] = d2d.radian * ang_dist.values[idx] * 1000
pdf_kn_ma['lum_dist (Mpc)'] = lum_dist.values[idx]_____no_output_____
pdf_kn_ma[['objectId', '2MASS_name', 'separation (Kpc)', 'lum_dist (Mpc)']].head(5)_____no_output_____fig = plt.figure(figsize=(12, 6))
plt.hist(pdf_kn_ma['lum_dist (Mpc)'], bins=20)
plt.xlabel('Luminosity distance of matching galaxies (Mpc)');_____no_output_____np.max(pdf_kn_ma['lum_dist (Mpc)'])_____no_output_____
</code>
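As a sanity check of the projected-separation conversion used in the cross-match above (angular separation in radians times the angular distance to the galaxy, valid in the small-angle limit), here is a quick illustration with hypothetical numbers:

```python
import numpy as np

theta_rad = np.deg2rad(10 / 3600)    # a hypothetical 10 arcsec separation
ang_dist_mpc = 100                   # a hypothetical galaxy at 100 Mpc
sep_kpc = theta_rad * ang_dist_mpc * 1000
print('{:.2f} kpc'.format(sep_kpc))  # ~4.85 kpc, within the 10 kpc cut
```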
| {
"repository": "JulienPeloton/fink_grandma_kn",
"path": "KN-Mangrove_filter.ipynb",
"matched_keywords": [
"STAR",
"evolution"
],
"stars": null,
"size": 174121,
"hexsha": "cb20908449b871bb1155cbf59f91b1799f1d8f3b",
"max_line_length": 28284,
"avg_line_length": 228.8055190539,
"alphanum_fraction": 0.9070071961
} |
# Notebook from BioFreak95/schnetpack
Path: docs/tutorials/tutorial_01_preparing_data.ipynb
# Preparing and loading your data
This tutorial introduces how SchNetPack stores and loads data.
Before we can start training neural networks with SchNetPack, we need to prepare our data.
This is because SchNetPack has to stream the reference data from disk during training in order to be able to handle large datasets.
Therefore, it is crucial to use a data format that allows for fast random read access.
We found that the [ASE database format](https://wiki.fysik.dtu.dk/ase/ase/db/db.html) fulfills these requirements perfectly.
To further improve the performance, we internally encode properties in binary.
However, as long as you only access the ASE database via the provided SchNetPack `AtomsData` class, you don't have to worry about that._____no_output_____
<code>
from schnetpack import AtomsData_____no_output_____
</code>
## Predefined datasets
SchNetPack supports several benchmark datasets that can be used without preparation.
Each one can be accessed using a corresponding class that inherits from `DownloadableAtomsData`, which supports automatic download and conversion. Here, we show how to use these datasets, taking the QM9 benchmark as an example.
First, we have to import the dataset class and instantiate it. This will automatically download the data to the specified location._____no_output_____
<code>
from schnetpack.datasets import QM9
qm9data = QM9('./qm9.db', download=True)_____no_output_____
</code>
Let's have a closer look at this dataset.
We can find out how large it is and which properties it supports:_____no_output_____
<code>
print('Number of reference calculations:', len(qm9data))
print('Available properties:')
for p in qm9data.available_properties:
print('-', p)Number of reference calculations: 133885
Available properties:
- rotational_constant_A
- rotational_constant_B
- rotational_constant_C
- dipole_moment
- isotropic_polarizability
- homo
- lumo
- gap
- electronic_spatial_extent
- zpve
- energy_U0
- energy_U
- enthalpy_H
- free_energy
- heat_capacity
</code>
We can load data points using zero-based indexing. The result is a dictionary containing the geometry and properties:_____no_output_____
<code>
example = qm9data[0]
print('Properties:')
for k, v in example.items():
print('-', k, ':', v.shape)Properties:
- rotational_constant_A : torch.Size([1])
- rotational_constant_B : torch.Size([1])
- rotational_constant_C : torch.Size([1])
- dipole_moment : torch.Size([1])
- isotropic_polarizability : torch.Size([1])
- homo : torch.Size([1])
- lumo : torch.Size([1])
- gap : torch.Size([1])
- electronic_spatial_extent : torch.Size([1])
- zpve : torch.Size([1])
- energy_U0 : torch.Size([1])
- energy_U : torch.Size([1])
- enthalpy_H : torch.Size([1])
- free_energy : torch.Size([1])
- heat_capacity : torch.Size([1])
- _atomic_numbers : torch.Size([5])
- _positions : torch.Size([5, 3])
- _cell : torch.Size([3, 3])
- _neighbors : torch.Size([5, 4])
- _cell_offset : torch.Size([5, 4, 3])
- _idx : torch.Size([1])
</code>
We see that all available properties have been loaded as torch tensors with the given shapes. Keys with an underscore indicate that these names are reserved for internal use. This includes the geometry (`_atomic_numbers`, `_positions`, `_cell`), the index within the dataset (`_idx`) as well as information about neighboring atoms and periodic boundary conditions (`_neighbors`, `_cell_offset`).
<div class="alert alert-info">
**Note:** Neighbors are collected using an `EnvironmentProvider` that can be passed to the `AtomsData` constructor. The default is the `SimpleEnvironmentProvider`, which constructs the neighbor list using a full distance matrix. This is suitable for small molecules. We supply environment providers using a cutoff (`AseEnvironmentProvider`, `TorchEnvironmentProvider`) that are able to handle larger molecules and periodic boundary conditions.
</div>
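For instance, a cutoff-based provider can be passed when constructing the dataset. This is a sketch that assumes `AseEnvironmentProvider` is importable from `schnetpack.environment` and that the dataset constructor accepts an `environment_provider` keyword:

```python
from schnetpack.environment import AseEnvironmentProvider

# Collect neighbors within a 5 Å cutoff instead of using the full
# distance matrix; better suited to larger or periodic systems.
qm9data_cutoff = QM9('./qm9.db', download=True,
                     environment_provider=AseEnvironmentProvider(cutoff=5.))
```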
We can directly obtain an ASE atoms object as follows:_____no_output_____
<code>
at = qm9data.get_atoms(idx=0)
print('Atoms object:', at)
at2, props = qm9data.get_properties(idx=0)
print('Atoms object (not the same):', at2)
print('Equivalent:', at2 == at, '; not the same object:', at2 is at)Atoms object: Atoms(symbols='CH4', pbc=False)
Atoms object (not the same): Atoms(symbols='CH4', pbc=False)
Equivalent: True ; not the same object: False
</code>
Alternatively, all property names are pre-defined as class-variable for convenient access:_____no_output_____
<code>
print('Total energy at 0K:', props[QM9.U0])
print('HOMO:', props[QM9.homo])Total energy at 0K: tensor([-1101.4878])
HOMO: tensor([-10.5499])
</code>
## Preparing your own data
In the following we will create an ASE database from our own data.
For this tutorial, we will use a dataset containing a molecular dynamics (MD) trajectory of ethanol, which can be downloaded [here](http://quantum-machine.org/gdml/data/xyz/ethanol_dft.zip)._____no_output_____
<code>
import os
if not os.path.exists('./ethanol_dft.zip'):
!wget http://quantum-machine.org/gdml/data/xyz/ethanol_dft.zip
if not os.path.exists('./ethanol.xyz'):
!unzip ./ethanol_dft.zip_____no_output_____
</code>
The data set is in xyz format with the total energy given in the comment row. For this kind of data, we supply a script that converts it into the SchNetPack ASE DB format.
```
schnetpack_parse.py ./ethanol.xyz ./ethanol.db
```
In the following, we show how this can be done in general, so that you apply this to any other data format.
First, we need to parse our data. For this we use the IO functionality supplied by ASE.
In order to create a SchNetPack DB, we require a **list of ASE `Atoms` objects** as well as a corresponding **list of dictionaries** `[{property_name1: property1_molecule1}, {property_name1: property1_molecule2}, ...]` containing the mapping from property names to values._____no_output_____
<code>
from ase.io import read
import numpy as np
# load atoms from xyz file. Here, we only parse the first 10 molecules
atoms = read('./ethanol.xyz', index=':10')
# ASE stores the comment line as a key in the info dictionary; here it corresponds to the energy
print('Energy:', atoms[0].info)
print()
# parse properties as list of dictionaries
property_list = []
for at in atoms:
# All properties need to be stored as numpy arrays.
# Note: The shape for scalars should be (1,), not ()
# Note: GPUs work best with float32 data
energy = np.array([float(list(at.info.keys())[0])], dtype=np.float32)
property_list.append(
{'energy': energy}
)
print('Properties:', property_list)Energy: {'-97208.40600498248': True}
Properties: [{'energy': array([-97208.41], dtype=float32)}, {'energy': array([-97208.375], dtype=float32)}, {'energy': array([-97208.04], dtype=float32)}, {'energy': array([-97207.5], dtype=float32)}, {'energy': array([-97206.84], dtype=float32)}, {'energy': array([-97206.1], dtype=float32)}, {'energy': array([-97205.266], dtype=float32)}, {'energy': array([-97204.29], dtype=float32)}, {'energy': array([-97203.16], dtype=float32)}, {'energy': array([-97201.875], dtype=float32)}]
</code>
Once we have our data in this format, it is straightforward to create a new SchNetPack DB and store it._____no_output_____
<code>
%rm './new_dataset.db'
new_dataset = AtomsData('./new_dataset.db', available_properties=['energy'])
new_dataset.add_systems(atoms, property_list)_____no_output_____
</code>
Now we can have a look at the data in the same way we did before for QM9:_____no_output_____
<code>
print('Number of reference calculations:', len(new_dataset))
print('Available properties:')
for p in new_dataset.available_properties:
print('-', p)
print()
example = new_dataset[0]
print('Properties of molecule with id 0:')
for k, v in example.items():
print('-', k, ':', v.shape)Number of reference calculations: 10
Available properties:
- energy
Properties of molecule with id 0:
- energy : torch.Size([1])
- _atomic_numbers : torch.Size([9])
- _positions : torch.Size([9, 3])
- _cell : torch.Size([3, 3])
- _neighbors : torch.Size([9, 8])
- _cell_offset : torch.Size([9, 8, 3])
- _idx : torch.Size([1])
</code>
The same way, we can store multiple properties, including atomic properties such as forces, or tensorial properties such as polarizability tensors.
In the following tutorials, we will describe how these datasets can be used to train neural networks._____no_output_____
| {
"repository": "BioFreak95/schnetpack",
"path": "docs/tutorials/tutorial_01_preparing_data.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": null,
"size": 12763,
"hexsha": "cb20c06cc1c4a9831c06c53bef6a52ddf434776d",
"max_line_length": 493,
"avg_line_length": 31.9075,
"alphanum_fraction": 0.5826999922
} |
# Notebook from Skharwa1/Applied-Data-Science-with-Python-Specialization
Path: Course - 4: Applied Text Mining in Python/Module+2+(Python+3).ipynb
# Module 2 (Python 3)_____no_output_____## Basic NLP Tasks with NLTK_____no_output_____
<code>
import nltk
nltk.download()NLTK Downloader
---------------------------------------------------------------------------
d) Download l) List u) Update c) Config h) Help q) Quit
---------------------------------------------------------------------------
Downloader> d
Download which package (l=list; x=cancel)?
Identifier> l
Packages:
[ ] abc................. Australian Broadcasting Commission 2006
[ ] alpino.............. Alpino Dutch Treebank
[ ] averaged_perceptron_tagger Averaged Perceptron Tagger
[ ] averaged_perceptron_tagger_ru Averaged Perceptron Tagger (Russian)
[ ] basque_grammars..... Grammars for Basque
[ ] biocreative_ppi..... BioCreAtIvE (Critical Assessment of Information
Extraction Systems in Biology)
[ ] bllip_wsj_no_aux.... BLLIP Parser: WSJ Model
[ ] book_grammars....... Grammars from NLTK Book
[ ] brown............... Brown Corpus
[ ] brown_tei........... Brown Corpus (TEI XML Version)
[ ] cess_cat............ CESS-CAT Treebank
[ ] cess_esp............ CESS-ESP Treebank
[ ] chat80.............. Chat-80 Data Files
[ ] city_database....... City Database
[ ] cmudict............. The Carnegie Mellon Pronouncing Dictionary (0.6)
[ ] comparative_sentences Comparative Sentence Dataset
[ ] comtrans............ ComTrans Corpus Sample
[ ] conll2000........... CONLL 2000 Chunking Corpus
[ ] conll2002........... CONLL 2002 Named Entity Recognition Corpus
Hit Enter to continue:
[ ] conll2007........... Dependency Treebanks from CoNLL 2007 (Catalan
and Basque Subset)
[ ] crubadan............ Crubadan Corpus
[ ] dependency_treebank. Dependency Parsed Treebank
[ ] dolch............... Dolch Word List
[ ] europarl_raw........ Sample European Parliament Proceedings Parallel
Corpus
[ ] floresta............ Portuguese Treebank
[ ] framenet_v15........ FrameNet 1.5
[ ] framenet_v17........ FrameNet 1.7
[ ] gazetteers.......... Gazeteer Lists
[ ] ieer................ NIST IE-ER DATA SAMPLE
[ ] indian.............. Indian Language POS-Tagged Corpus
[ ] jeita............... JEITA Public Morphologically Tagged Corpus (in
ChaSen format)
[ ] kimmo............... PC-KIMMO Data Files
[ ] knbc................ KNB Corpus (Annotated blog corpus)
[ ] large_grammars...... Large context-free and feature-based grammars
for parser comparison
[ ] lin_thesaurus....... Lin's Dependency Thesaurus
[ ] mac_morpho.......... MAC-MORPHO: Brazilian Portuguese news text with
part-of-speech tags
Hit Enter to continue:
[ ] machado............. Machado de Assis -- Obra Completa
[ ] masc_tagged......... MASC Tagged Corpus
[ ] maxent_ne_chunker... ACE Named Entity Chunker (Maximum entropy)
[ ] maxent_treebank_pos_tagger Treebank Part of Speech Tagger (Maximum entropy)
[ ] moses_sample........ Moses Sample Models
[ ] movie_reviews....... Sentiment Polarity Dataset Version 2.0
[ ] mte_teip5........... MULTEXT-East 1984 annotated corpus 4.0
[ ] mwa_ppdb............ The monolingual word aligner (Sultan et al.
2015) subset of the Paraphrase Database.
[ ] names............... Names Corpus, Version 1.3 (1994-03-29)
[ ] nombank.1.0......... NomBank Corpus 1.0
[ ] nonbreaking_prefixes Non-Breaking Prefixes (Moses Decoder)
[ ] omw................. Open Multilingual Wordnet
[ ] opinion_lexicon..... Opinion Lexicon
[ ] panlex_swadesh...... PanLex Swadesh Corpora
[ ] paradigms........... Paradigm Corpus
[ ] pe08................ Cross-Framework and Cross-Domain Parser
Evaluation Shared Task
[ ] perluniprops........ perluniprops: Index of Unicode Version 7.0.0
character properties in Perl
[ ] pil................. The Patient Information Leaflet (PIL) Corpus
Hit Enter to continue:
[ ] pl196x.............. Polish language of the XX century sixties
[ ] porter_test......... Porter Stemmer Test Files
[ ] ppattach............ Prepositional Phrase Attachment Corpus
[ ] problem_reports..... Problem Report Corpus
[ ] product_reviews_1... Product Reviews (5 Products)
[ ] product_reviews_2... Product Reviews (9 Products)
[ ] propbank............ Proposition Bank Corpus 1.0
[ ] pros_cons........... Pros and Cons
[ ] ptb................. Penn Treebank
[ ] punkt............... Punkt Tokenizer Models
[ ] qc.................. Experimental Data for Question Classification
[ ] reuters............. The Reuters-21578 benchmark corpus, ApteMod
version
[ ] rslp................ RSLP Stemmer (Removedor de Sufixos da Lingua
Portuguesa)
[ ] rte................. PASCAL RTE Challenges 1, 2, and 3
[ ] sample_grammars..... Sample Grammars
[ ] semcor.............. SemCor 3.0
[ ] senseval............ SENSEVAL 2 Corpus: Sense Tagged Text
[ ] sentence_polarity... Sentence Polarity Dataset v1.0
[ ] sentiwordnet........ SentiWordNet
Hit Enter to continue:
[ ] shakespeare......... Shakespeare XML Corpus Sample
[ ] sinica_treebank..... Sinica Treebank Corpus Sample
[ ] smultron............ SMULTRON Corpus Sample
[ ] snowball_data....... Snowball Data
[ ] spanish_grammars.... Grammars for Spanish
[ ] state_union......... C-Span State of the Union Address Corpus
[ ] stopwords........... Stopwords Corpus
[ ] subjectivity........ Subjectivity Dataset v1.0
[ ] swadesh............. Swadesh Wordlists
[ ] switchboard......... Switchboard Corpus Sample
[ ] tagsets............. Help on Tagsets
[ ] timit............... TIMIT Corpus Sample
[ ] toolbox............. Toolbox Sample Files
[ ] treebank............ Penn Treebank Sample
[ ] twitter_samples..... Twitter Samples
[ ] udhr2............... Universal Declaration of Human Rights Corpus
(Unicode Version)
[ ] udhr................ Universal Declaration of Human Rights Corpus
[ ] unicode_samples..... Unicode Samples
[ ] universal_tagset.... Mappings to the Universal Part-of-Speech Tagset
[ ] universal_treebanks_v20 Universal Treebanks Version 2.0
Hit Enter to continue:
[ ] vader_lexicon....... VADER Sentiment Lexicon
[ ] verbnet3............ VerbNet Lexicon, Version 3.3
[ ] verbnet............. VerbNet Lexicon, Version 2.1
[ ] webtext............. Web Text Corpus
[ ] wmt15_eval.......... Evaluation data from WMT15
[ ] word2vec_sample..... Word2Vec Sample
[ ] wordnet............. WordNet
[ ] wordnet_ic.......... WordNet-InfoContent
[ ] words............... Word Lists
[ ] ycoe................ York-Toronto-Helsinki Parsed Corpus of Old
English Prose
Collections:
[P] all-corpora......... All the corpora
[P] all-nltk............ All packages available on nltk_data gh-pages
branch
[P] all................. All packages
[P] book................ Everything used in the NLTK Book
[P] popular............. Popular packages
[ ] tests............... Packages for running tests
[ ] third-party......... Third-party data packages
([*] marks installed packages; [P] marks partially installed collections)
import nltk
from nltk.book import **** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
</code>
### Counting vocabulary of words_____no_output_____
<code>
text7_____no_output_____sent7_____no_output_____len(sent7)_____no_output_____len(text7)_____no_output_____len(set(text7))_____no_output_____list(set(text7))[:10]_____no_output_____
</code>
### Frequency of words_____no_output_____
<code>
dist = FreqDist(text7)
len(dist)_____no_output_____vocab1 = dist.keys()
#vocab1[:10]
# In Python 3 dict.keys() returns an iterable view instead of a list
list(vocab1)[:10]_____no_output_____dist['four']_____no_output_____freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100]
freqwords_____no_output_____
</code>
### Normalization and stemming_____no_output_____
<code>
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1_____no_output_____porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]_____no_output_____
</code>
### Lemmatization_____no_output_____
<code>
udhr = nltk.corpus.udhr.words('English-Latin1')
udhr[:20]_____no_output_____[porter.stem(t) for t in udhr[:20]] # Stemming shown for comparison (not true lemmatization)_____no_output_____WNlemma = nltk.WordNetLemmatizer()
[WNlemma.lemmatize(t) for t in udhr[:20]]_____no_output_____
</code>
### Tokenization_____no_output_____
<code>
text11 = "Children shouldn't drink a sugary drink before bed."
text11.split(' ')_____no_output_____nltk.word_tokenize(text11)_____no_output_____text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text12)
len(sentences)_____no_output_____sentences_____no_output_____
</code>
## Advanced NLP Tasks with NLTK_____no_output_____### POS tagging_____no_output_____
<code>
nltk.help.upenn_tagset('MD')MD: modal auxiliary
can cannot could couldn't dare may might must need ought shall should
shouldn't will would
text13 = nltk.word_tokenize(text11)
nltk.pos_tag(text13)_____no_output_____text14 = nltk.word_tokenize("Visiting aunts can be a nuisance")
nltk.pos_tag(text14)_____no_output_____# Parsing sentence structure
text15 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text15)
for tree in trees:
print(tree)(S (NP Alice) (VP (V loves) (NP Bob)))
text16 = nltk.word_tokenize("I saw the man with a telescope")
grammar1 = nltk.data.load('mygrammar.cfg')
grammar1_____no_output_____parser = nltk.ChartParser(grammar1)
trees = parser.parse_all(text16)
for tree in trees:
print(tree)(S
(NP I)
(VP
(VP (V saw) (NP (Det the) (N man)))
(PP (P with) (NP (Det a) (N telescope)))))
(S
(NP I)
(VP
(V saw)
(NP (Det the) (N man) (PP (P with) (NP (Det a) (N telescope))))))
from nltk.corpus import treebank
text17 = treebank.parsed_sents('wsj_0001.mrg')[0]
print(text17)(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(, ,)
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))
(VP
(MD will)
(VP
(VB join)
(NP (DT the) (NN board))
(PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director)))
(NP-TMP (NNP Nov.) (CD 29))))
(. .))
</code>
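The contents of `mygrammar.cfg` are not shown in this notebook, but a grammar consistent with the two parse trees printed above would look like the following (a hypothetical reconstruction, not the original file):

```python
grammar1 = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP | VP PP
PP -> P NP
NP -> Det N | Det N PP | 'I'
Det -> 'a' | 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")
```

The two parses arise because the `PP` can attach either to the `VP` (the seeing was done with the telescope) or to the `NP` (the man has the telescope).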
### POS tagging and parsing ambiguity_____no_output_____
<code>
text18 = nltk.word_tokenize("The old man the boat")
nltk.pos_tag(text18)_____no_output_____text19 = nltk.word_tokenize("Colorless green ideas sleep furiously")
nltk.pos_tag(text19)_____no_output_____
</code>
| {
"repository": "Skharwa1/Applied-Data-Science-with-Python-Specialization",
"path": "Course - 4: Applied Text Mining in Python/Module+2+(Python+3).ipynb",
"matched_keywords": [
"biology"
],
"stars": 9,
"size": 42139,
"hexsha": "cb217edae9c8e19639f10ce22151b4d41188d2cf",
"max_line_length": 1695,
"avg_line_length": 37.3572695035,
"alphanum_fraction": 0.510690809
} |
# Notebook from vasudev-sharma/course-content
Path: tutorials/W2D4_DynamicNetworks/W2D4_Tutorial1.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D4_DynamicNetworks/W2D4_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 1: Neural Rate Models
**Week 2, Day 4: Dynamic Networks**
**By Neuromatch Academy**
__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom
_____no_output_____---
# Tutorial Objectives
The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks.
The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.).
How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.
In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.
In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons.
**Steps:**
- Write the equation for the firing rate dynamics of a 1D excitatory population.
- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.
- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system.
- Investigate the stability of the fixed points by linearizing the dynamics around them.
_____no_output_____---
# Setup_____no_output_____
<code>
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm_____no_output_____# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()_____no_output_____
</code>
---
# Section 1: Neuronal network dynamics_____no_output_____
<code>
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
## Section 1.1: Dynamics of a single excitatory population
Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamic as:
\begin{align}
\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)
\end{align}
$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.
To start building the model, please execute the cell below to initialize the simulation parameters._____no_output_____
<code>
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars_____no_output_____
</code>
You can now use:
- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters.
- `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step
- To update an existing parameter dictionary, use `pars['New_para'] = value`
Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax._____no_output_____## Section 1.2: F-I curves
In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.
The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values.
A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.
$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$
The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.
Many other transfer functions (generally monotonic) can be also used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$._____no_output_____### Exercise 1: Implement F-I curve
Let's first investigate the activation functions before simulating the dynamics of the entire population.
In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters._____no_output_____
<code>
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
raise NotImplementedError("Student excercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)_____no_output_____# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)_____no_output_____
</code>
### Interactive Demo: Parameter exploration of F-I curve
Here's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?_____no_output_____
<code>
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
Args:
a : the gain of the function
theta : the threshold of the function
Returns:
plot the F-I curve with give parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))_____no_output_____# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For our neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";_____no_output_____
</code>
## Section 1.3: Simulation scheme of E dynamics
Because $F(\cdot)$ is a nonlinear function, the exact solution of Equation (1) cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation (1) can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:
\begin{align}
&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t}
\end{align}
where $r[k] = r(k\Delta t)$.
Thus,
$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$
Hence, Equation (1) is updated at each time step by:
$$r[k+1] = r[k] + \Delta r[k]$$
_____no_output_____
<code>
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)_____no_output_____
</code>
### Interactive Demo: Parameter Exploration of single population dynamics
Note that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.
How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$.
Note that $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how it is computed in the next section._____no_output_____
<code>
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))_____no_output_____# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau does not affect the steady-state response, but it determines
the time the neurons take to reach their fixed point.
""";_____no_output_____
</code>
## Think!
Above, we have numerically solved a system driven by a positive input. Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.
- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite?
- Which parameter would you change in order to increase the maximum value of the response? _____no_output_____
<code>
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";_____no_output_____
</code>
---
# Section 2: Fixed points of the single population system
_____no_output_____
<code>
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$.
We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:
$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$
When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.
From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value.
In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point:
$$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\
We can now numerically calculate the fixed point with a root finding algorithm._____no_output_____## Exercise 2: Visualization of the fixed points
When it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points.
Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain
$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$
Then, plot the $dr/dt$ as a function of $r$, and check for the presence of fixed points. _____no_output_____
<code>
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)_____no_output_____# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)_____no_output_____
</code>
## Exercise 3: Fixed point calculation
We will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the curve $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the curve crosses zero on the y axis (the true fixed points).
The next cell defines three helper functions that we will use:
- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value
- `check_fp_single(x_fp, **pars)`, verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points
- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions_____no_output_____
<code>
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
correct_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)_____no_output_____r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)_____no_output_____# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)_____no_output_____
</code>
## Interactive Demo: fixed points as a function of recurrent and external inputs.
You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?_____no_output_____
<code>
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))_____no_output_____# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
the recurrent connections w and the external input I_ext. In a previous interactive demo
we saw how the system showed two different steady states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";_____no_output_____
</code>
---
# Summary
In this tutorial, we have investigated the dynamics of a rate-based single population of neurons.
We learned about:
- The effect of the input parameters and the time constant of the network on the dynamics of the population.
- How to find the fixed point(s) of the system.
Next, we have two bonus, but important, concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:
- How to determine the stability of a fixed point by linearizing the system.
- How to add realistic inputs to our model._____no_output_____---
# Bonus 1: Stability of a fixed point_____no_output_____
<code>
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
#### Initial values and trajectories
Here, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$._____no_output_____
<code>
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()_____no_output_____
</code>
## Interactive Demo: dynamics as a function of the initial value
Let's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?_____no_output_____
<code>
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))_____no_output_____# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set the w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose the initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed points
it moves towards the left most fixed point. When r_init is in the vicinity of the
rightmost fixed points it moves towards the rightmost fixed point.
""";_____no_output_____
</code>
### Stability analysis via linearization of the dynamics
Just like Equation $1$ in the case ($w=0$) discussed above, a generic linear system
$$\frac{dx}{dt} = \lambda (x - b),$$
has a fixed point for $x=b$. The analytical solution of such a system can be found to be:
$$x(t) = b + \big( x(0) - b \big) \text{e}^{\lambda t}.$$
Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as:
$$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$
- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".
- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**"._____no_output_____### Compute the stability of Equation $1$
Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$, i.e. $r = r^{*} + \epsilon$. Plugging this into Equation (1) and linearizing, we obtain the equation determining the time evolution of the perturbation $\epsilon(t)$:
\begin{align}
\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon
\end{align}
where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:
\begin{align}
\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)]
\end{align}
That is, as in the linear system above, the value of
$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$
determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system._____no_output_____## Exercise 4: Compute $dF$
The derivative of the sigmoid transfer function is:
\begin{align}
\frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\
& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)
\end{align}
Let's now implement the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it._____no_output_____
<code>
def dF(x, a, theta):
"""
Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : derivative of the population activation function F evaluated at input x
"""
###########################################################################
# TODO for students: compute dFdx ##
raise NotImplementedError("Student exercise: compute the derivative of F")
###########################################################################
# Calculate the population activation
dFdx = ...
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)_____no_output_____# to_remove solution
def dF(x, a, theta):
"""
Derivative of the population activation function.
Args:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
dFdx : derivative of the population activation function F evaluated at input x
"""
# Calculate the population activation
dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)_____no_output_____
</code>
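As a quick sanity check (a minimal sketch; it assumes `F` and `default_pars_single` from earlier in this tutorial are in scope), we can compare the analytical derivative against a centered finite-difference estimate:_____no_output_____
<code>
# sanity check: analytical dF vs. a centered finite-difference estimate
pars = default_pars_single()
x = np.arange(0, 10, .1)
h = 1e-5
df_numeric = (F(x + h, pars['a'], pars['theta']) - F(x - h, pars['a'], pars['theta'])) / (2 * h)
# the maximum absolute difference should be tiny
print(np.max(np.abs(dF(x, pars['a'], pars['theta']) - df_numeric)))_____no_output_____
</code>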
## Exercise 5: Compute eigenvalues
As discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?
Note that the eigenvalue at a fixed point $r^*$ is given by
$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$_____no_output_____
<code>
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
eig : eigenvalue of the linearized system
"""
#####################################################################
## TODO for students: compute eigenvalue and disable the error
raise NotImplementedError("Student exercise: compute the eigenvalue")
######################################################################
# Compute the eigenvalue
eig = ...
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for i, fp in enumerate(x_fp):
# eig_fp = eig_single(fp, **pars)
# print(f'Fixed point{i+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')_____no_output_____
</code>
**SAMPLE OUTPUT**
```
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point2 at 0.447 with Eigenvalue=0.498
Fixed point3 at 0.900 with Eigenvalue=-0.626
```_____no_output_____
<code>
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
"""
Args:
fp : fixed point r_fp
tau, a, theta, w, I_ext : Simulation parameters
Returns:
eig : eigenvalue of the linearized system
"""
# Compute the eigenvalue
eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for i, fp in enumerate(x_fp):
eig_fp = eig_single(fp, **pars)
print(f'Fixed point{i+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')_____no_output_____
</code>
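To see this instability in simulation (a minimal sketch; it assumes `simulate_single` from earlier in this tutorial), start the dynamics just next to the middle fixed point and watch the activity run away from it:_____no_output_____
<code>
# start just above the unstable fixed point near r* ~ 0.447
pars = default_pars_single(w=5, I_ext=.5)
pars['r_init'] = 0.45
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b')
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
# r(t) should leave ~0.447 and settle near the stable fixed point at ~0.900_____no_output_____
</code>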
## Think!
Throughout the tutorial, we have assumed $w > 0$, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w > 0$ is replaced by $w < 0$?_____no_output_____
<code>
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting the
weight to w < 0. You will notice that the system has only one fixed point, located at
zero, so for these dynamics the system will eventually converge
to zero. But try it out!
""";_____no_output_____
</code>
---
# Bonus 2: Noisy input drives the transition between two stable states
_____no_output_____## Ornstein-Uhlenbeck (OU) process
As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows:
$$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$
Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process._____no_output_____
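In discrete time, the cell below integrates this with the Euler-Maruyama scheme
$$\eta_{t+\Delta t} = \eta_t + \frac{\Delta t}{\tau_\eta}\big(0 - \eta_t\big) + \sigma_\eta\sqrt{\frac{2\Delta t}{\tau_\eta}}\,\xi_t,$$
where $\xi_t$ is a standard normal random number drawn independently at each step._____no_output_____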
<code>
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
A function that generates an Ornstein-Uhlenbeck process
Args:
pars : parameter dictionary
sig : noise amplitude
myseed : random seed. int or boolean
Returns:
I : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()_____no_output_____
</code>
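As a quick check on the noise statistics (a minimal sketch; for this OU process the stationary standard deviation should be close to `sig`):_____no_output_____
<code>
# empirical check: the stationary std of the OU input should approach sig
pars = default_pars_single(T=1000)
pars['tau_ou'] = 1. # [ms]
I_ou = my_OU(pars, sig=0.1, myseed=2020)
print(f'empirical std = {I_ou.std():.3f} (target sig = 0.1)')_____no_output_____
</code>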
## Example: Up-Down transition
In the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms applying OU inputs._____no_output_____
<code>
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()_____no_output_____
</code>
| {
"repository": "vasudev-sharma/course-content",
"path": "tutorials/W2D4_DynamicNetworks/W2D4_Tutorial1.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 59784,
"hexsha": "cb21a535cbca3e9deaafa83a1f8e6943989c16e4",
"max_line_length": 535,
"avg_line_length": 36.1888619855,
"alphanum_fraction": 0.5664559079
} |
# Notebook from bitcoffe/bofin
Path: examples/example-1d-Spin-bath-model-ohmic-fitting.ipynb
# Example 1d: Spin-Bath model, fitting of spectrum and correlation functions
### Introduction_____no_output_____The HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.
In this example we show the evolution of a single two-level system in contact with a single Bosonic environment. The properties of the system are encoded in its Hamiltonian, and in a coupling operator which describes how it is coupled to the environment.
The Bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.
In the example below we show how to model an Ohmic environment with exponential cut-off in two ways. First, we fit the spectral density with a set of underdamped Brownian oscillator functions. Second, we evaluate the correlation functions directly and fit them with a chosen set of damped exponential functions.
_____no_output_____
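For reference, the spectral density used below is Ohmic with an exponential cut-off,
$$J(\omega) = \alpha\, \omega\, e^{-\omega/\omega_c},$$
and the spectral-density fit employs a sum of underdamped (Meier-Tannor) terms of the form
$$J_{\rm fit}(\omega) = \sum_{k} \frac{2\, \lambda_k\, \Gamma_k\, \omega}{\left[(\omega+\Omega_k)^2+\Gamma_k^2\right]\left[(\omega-\Omega_k)^2+\Gamma_k^2\right]},$$
while the correlation-function fit uses damped oscillating exponentials, $\sum_k a_k e^{b_k t}\cos(c_k t)$ for the real part and $\sum_k a_k e^{b_k t}\sin(c_k t)$ for the imaginary part._____no_output_____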
<code>
%pylab inline
Populating the interactive namespace from numpy and matplotlib
from qutip import *_____no_output_____%load_ext autoreload
%autoreload 2_____no_output_____
from bofin.heom import BosonicHEOMSolver_____no_output_____
def cot(x):
return 1./np.tan(x)
_____no_output_____# Defining the system Hamiltonian
eps = .0 # Energy of the 2-level system.
Del = .2 # Tunnelling term
Hsys = 0.5 * eps * sigmaz() + 0.5 * Del* sigmax()_____no_output_____# Initial state of the system.
rho0 = basis(2,0) * basis(2,0).dag() _____no_output_____#Import mpmath functions for evaluation of correlation functions
from mpmath import mp
from mpmath import zeta
from mpmath import gamma
mp.dps = 15; mp.pretty = True_____no_output_____
Q = sigmaz()
alpha = 3.25
T = 0.5
wc = 1
beta = 1/T
s = 1
tlist = np.linspace(0, 10, 5000)
tlist3 = linspace(0,15,50000)
#note: the arguments to zeta should be in as high precision as possible, might need some adjustment
# see http://mpmath.org/doc/current/basics.html#providing-correct-input
ct = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist]
#also check long timescales
ctlong = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist3]
corrRana = real(ctlong)
corrIana = imag(ctlong)
pref = 1.
_____no_output_____
# let's try fitting the spectrum
# use the underdamped case in Meier-Tannor form
wlist = np.linspace(0, 25, 20000)
from scipy.optimize import curve_fit
# separate functions for plotting later:
def fit_func_nocost(x, a, b, c, N):
tot = 0
for i in range(N):
tot+= 2 * a[i] * b[i] * (x)/(((x+c[i])**2 + (b[i]**2))*((x-c[i])**2 + (b[i]**2)))
cost = 0.
return tot
def wrapper_fit_func_nocost(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]),list(args[0][2*N:3*N])
# print("debug")
return fit_func_nocost(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker(tlist, vals, N):
y = []
for i in tlist:
# print(i)
y.append(wrapper_fit_func_nocost(i, N, vals))
return y
#######
#Real part
def wrapper_fit_func(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]),list(args[0][2*N:3*N])
# print("debug")
return fit_func(x, a, b, c, N)
def fit_func(x, a, b, c, N):
tot = 0
for i in range(N):
tot+= 2 * a[i] * b[i] * (x)/(((x+c[i])**2 + (b[i]**2))*((x-c[i])**2 + (b[i]**2)))
cost = 0.
#for i in range(N):
#print(i)
# cost += ((corrRana[0]-a[i]*np.cos(d[i])))
tot+=0.0*cost
return tot
def fitterR(ans, tlist, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = 100*abs(max(ans, key = abs))
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [abs(max(ans, key = abs))]*(i+1)
bguess = [1*wc]*(i+1)
cguess = [1*wc]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess)
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [0.1*wc]*(i+1)
clower = [0.1*wc]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
#bhigher = [np.inf]*(i+1)
bhigher = [100*wc]*(i+1)
chigher = [100*wc]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, \
params_0), tlist, ans, p0=guess, bounds = param_bounds,sigma=[0.0001 for w in wlist], maxfev = 1000000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
J = [w * alpha * e**(-w/wc) for w in wlist]
k = 4
popt1 = fitterR(J, wlist, k)
for i in range(k):
y = checker(wlist, popt1[i],i+1)
print(popt1[i])
plt.plot(wlist, J, wlist, y)
plt.show()
1
2
3
4
[6.14746382 1.77939431 0.1 ]
lam = list(popt1[k-1])[:k]
gamma = list(popt1[k-1])[k:2*k] #damping terms
w0 = list(popt1[k-1])[2*k:3*k] #w0 termss
print(lam)
print(gamma)
print(w0)[0.60083768747418, 7.9159271481798354, -4.407893509547376, 0.010585173501683682]
[1.0024683988395526, 2.296190188285376, 4.299081656029848, 0.30736352564766894]
[0.10000000000000002, 0.10000000000032039, 3.981685947699812, 0.10000000000000002]
lamT = []
print(lam)
print(gamma)
print(w0)
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(wlist, J, 'r--', linewidth=2, label="original")
for kk,ll in enumerate(lam):
#axes.plot(wlist, [lam[kk] * gamma[kk] * (w)/(((w**2-w0[kk]**2)**2 + (gamma[kk]**2*w**2))) for w in wlist],linewidth=2)
axes.plot(wlist, [2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2))) for w in wlist],linewidth=2, label="fit")
axes.set_xlabel(r'$w$', fontsize=28)
axes.set_ylabel(r'J', fontsize=28)
axes.legend()
fig.savefig('noisepower.eps')
wlist2 = np.linspace(-10,10 , 50000)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = [sum([(2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2)))) * ((1/(e**(w/T)-1))+1) for kk,lamkk in enumerate(lam)]) for w in wlist2]
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(wlist2, s1, 'r', linewidth=2,label="original")
axes.plot(wlist2, s2, 'b', linewidth=2,label="fit")
axes.set_xlabel(r'$w$', fontsize=28)
axes.set_ylabel(r'S(w)', fontsize=28)
#axes.axvline(x=Del)
print(min(s2))
axes.legend()
#fig.savefig('powerspectrum.eps')
#J(w>0) * (n(w>w)+1)
[0.60083768747418, 7.9159271481798354, -4.407893509547376, 0.010585173501683682]
[1.0024683988395526, 2.296190188285376, 4.299081656029848, 0.30736352564766894]
[0.10000000000000002, 0.10000000000032039, 3.981685947699812, 0.10000000000000002]
def cot(x):
return 1./np.tan(x)
def coth(x):
"""
Calculates the coth function.
Parameters
----------
x: np.ndarray
Any numpy array or list like input.
Returns
-------
cothx: ndarray
The coth function applied to the input.
"""
return 1/np.tanh(x)_____no_output_____
# underdamped Meier-Tannor version with terminator
TermMax = 1000
TermOps = 0.*spre(sigmaz())
Nk = 1 # number of exponentials in approximation of the Matsubara approximation
pref = 1
ckAR = []
vkAR = []
ckAI = []
vkAI = []
for kk, ll in enumerate(lam):
#print(kk)
lamt = lam[kk]
Om = w0[kk]
Gamma = gamma[kk]
print(T)
print(coth(beta*(Om+1.0j*Gamma)/2))
ckAR_temp = [(lamt/(4*Om))*coth(beta*(Om+1.0j*Gamma)/2),(lamt/(4*Om))*coth(beta*(Om-1.0j*Gamma)/2)]
for k in range(1,Nk+1):
#print(k)
ek = 2*pi*k/beta
ckAR_temp.append((-2*lamt*2*Gamma/beta)*ek/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2)))
term = 0
for k in range(Nk+1,TermMax):
#print(k)
ek = 2*pi*k/beta
ck = ((-2*lamt*2*Gamma/beta)*ek/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2)))
term += ck/ek
ckAR.extend(ckAR_temp)
vkAR_temp = [-1.0j*Om+Gamma,1.0j*Om+Gamma]
vkAR_temp.extend([2 * np.pi * k * T + 0.j for k in range(1,Nk+1)])
vkAR.extend(vkAR_temp)
factor=1./4.
ckAI.extend([-factor*lamt*1.0j/(Om),factor*lamt*1.0j/(Om)])
vkAI.extend( [-(-1.0j*(Om) - Gamma),-(1.0j*(Om) - Gamma)])
TermOps += term * (2*spre(Q)*spost(Q.dag()) - spre(Q.dag()*Q) - spost(Q.dag()*Q))
print(ckAR)
print(vkAR)
Q2 = []
NR = len(ckAR)
NI = len(ckAI)
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)0.5
(0.13974897556002197-0.6297171398194138j)
0.5
(0.17664345475450763+0.8710462271395351j)
0.5
(0.999528560660008-0.0005117331721781953j)
0.5
(0.9911396415614544-2.839161761980097j)
[(0.20991612825592318-0.9458944751298783j), (0.20991612825592318+0.9458944751298783j), (-0.04802667599727985+0j), (3.4957417975875082+17.237846191778434j), (3.4957417975875082-17.237846191778434j), (-5.3276721492789765+0j), (-0.2766300201103056+0.00014162752649838175j), (-0.2766300201103056-0.00014162752649838175j), (0.09723738681206452+0j), (0.026228462675811418-0.07513254962476318j), (0.026228462675811418+0.07513254962476318j), (-0.0002134910414080296+0j)]
[(1.0024683988395526-0.10000000000000002j), (1.0024683988395526+0.10000000000000002j), (3.141592653589793+0j), (2.296190188285376-0.10000000000032039j), (2.296190188285376+0.10000000000032039j), (3.141592653589793+0j), (4.299081656029848-3.981685947699812j), (4.299081656029848+3.981685947699812j), (3.141592653589793+0j), (0.30736352564766894-0.10000000000000002j), (0.30736352564766894+0.10000000000000002j), (3.141592653589793+0j)]
#corrRana = real(ct)
#corrIana = imag(ct)
corrRana = real(ctlong)
corrIana = imag(ctlong)
def checker2(tlisttemp):
y = []
for i in tlisttemp:
# print(i)
temp = []
for kkk,ck in enumerate(ckAR):
temp.append(ck*exp(-vkAR[kkk]*i))
y.append(sum(temp))
return y
yR = checker2(tlist3)
# function that evaluates values with fitted params at
# given inputs
def checker2(tlisttemp):
y = []
for i in tlisttemp:
# print(i)
temp = []
for kkk,ck in enumerate(ckAI):
if i==0:
print(vkAI[kkk])
temp.append(ck*exp(-vkAI[kkk]*i))
y.append(sum(temp))
return y
yI = checker2(tlist3)
(1.0024683988395526+0.10000000000000002j)
(1.0024683988395526-0.10000000000000002j)
(2.296190188285376+0.10000000000032039j)
(2.296190188285376-0.10000000000032039j)
(4.299081656029848+3.981685947699812j)
(4.299081656029848-3.981685947699812j)
(0.30736352564766894+0.10000000000000002j)
(0.30736352564766894-0.10000000000000002j)
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 20
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False_____no_output_____tlist2 = tlist3
from cycler import cycler
wlist2 = np.linspace(-2*pi*4,2 * pi *4 , 50000)
wlist2 = np.linspace(-7,7 , 50000)
fig = plt.figure(figsize=(12,10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
default_cycler = (cycler(color=['r', 'g', 'b', 'y','c','m','k']) +
cycler(linestyle=['-', '--', ':', '-.',(0, (1, 10)), (0, (5, 10)),(0, (3, 10, 1, 10))]))
plt.rc('axes',prop_cycle=default_cycler )
axes1 = fig.add_subplot(grid[0,0])
axes1.set_yticks([0.,1.])
axes1.set_yticklabels([0,1])
axes1.plot(tlist2, corrRana,"r",linewidth=3,label="Original")
axes1.plot(tlist2, yR,"g",dashes=[3,3],linewidth=2,label="Reconstructed")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$',fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes1.locator_params(axis='y', nbins=4)
axes1.locator_params(axis='x', nbins=4)
axes1.text(2.,1.5,"(a)",fontsize=28)
axes2 = fig.add_subplot(grid[0,1])
axes2.set_yticks([0.,-0.4])
axes2.set_yticklabels([0,-0.4])
axes2.plot(tlist2, corrIana,"r",linewidth=3,label="Original")
axes2.plot(tlist2, yI,"g",dashes=[3,3], linewidth=2,label="Reconstructed")
axes2.legend(loc=0)
axes2.set_ylabel(r'$C_I(t)$',fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes2.locator_params(axis='y', nbins=4)
axes2.locator_params(axis='x', nbins=4)
axes2.text(12.5,-0.2,"(b)",fontsize=28)
axes3 = fig.add_subplot(grid[1,0])
axes3.set_yticks([0.,.5,1])
axes3.set_yticklabels([0,0.5,1])
axes3.plot(wlist, J, "r",linewidth=3,label="$J(\omega)$ original")
y = checker(wlist, popt1[3],4)
axes3.plot(wlist, y, "g", dashes=[3,3], linewidth=2, label="$J(\omega)$ Fit $k_J = 4$")
axes3.set_ylabel(r'$J(\omega)$',fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$',fontsize=28)
axes3.locator_params(axis='y', nbins=4)
axes3.locator_params(axis='x', nbins=4)
axes3.legend(loc=0)
axes3.text(3,1.1,"(c)",fontsize=28)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = [sum([(2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2)))) * ((1/(e**(w/T)-1))+1) for kk,lamkk in enumerate(lam)]) for w in wlist2]
axes4 = fig.add_subplot(grid[1,1])
axes4.set_yticks([0.,1])
axes4.set_yticklabels([0,1])
axes4.plot(wlist2, s1,"r",linewidth=3,label="Original")
axes4.plot(wlist2, s2, "g", dashes=[3,3], linewidth=2,label="Reconstructed")
axes4.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes4.set_ylabel(r'$S(\omega)$', fontsize=28)
axes4.locator_params(axis='y', nbins=4)
axes4.locator_params(axis='x', nbins=4)
axes4.legend()
axes4.text(4.,1.2,"(d)",fontsize=28)
fig.savefig("figures/figFiJspec.pdf")
/home/neill/anaconda3/lib/python3.7/site-packages/numpy/core/numeric.py:501: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
NC = 11
NR = len(ckAR)
NI = len(ckAI)
print(NR)
print(NI)
Q2 = []
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
12
8
#Q2 = [Q for kk in range(NR+NI)]
#print(Q2)
options = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
import time
start = time.time()
print("start")
Ltot = liouvillian(Hsys) + TermOps
HEOMFit = BosonicHEOMSolver(Ltot, Q2, ckAR, ckAI, vkAR, vkAI, NC, options=options)
print("end")
end = time.time()
print(end - start)start
#tlist4 = np.linspace(0, 50, 1000)
tlist4 = np.linspace(0, 4*pi/Del, 600)
tlist4 = np.linspace(0, 30*pi/Del, 600)
rho0 = basis(2,0) * basis(2,0).dag()
import time
start = time.time()
resultFit = HEOMFit.run(rho0, tlist4)
end = time.time()
print(end - start)_____no_output_____# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresonding to groundstate
P11p=basis(2,0) * basis(2,0).dag()
P22p=basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresonding to coherence
P12p=basis(2,0) * basis(2,1).dag()
# Calculate expectation values in the bases
P11exp11K4NK1TL = expect(resultFit.states, P11p)
P22exp11K4NK1TL = expect(resultFit.states, P22p)
P12exp11K4NK1TL = expect(resultFit.states, P12p)_____no_output_____
tlist3 = linspace(0,15,50000)
#also check long timescales
ctlong = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist3]
corrRana = real(ctlong)
corrIana = imag(ctlong)_____no_output_____
tlist2 = tlist3
from scipy.optimize import curve_fit
# separate functions for plotting later:
def fit_func_nocost(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.cos(c[i]*x)
cost = 0.
return tot
def wrapper_fit_func_nocost(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func_nocost(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker(tlist_local, vals, N):
y = []
for i in tlist_local:
# print(i)
y.append(wrapper_fit_func_nocost(i, N, vals))
return y
#######
#Real part
def wrapper_fit_func(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func(x, a, b, c, N)
def fit_func(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.cos(c[i]*x )
cost = 0.
for i in range(N):
#print(i)
cost += ((corrRana[0]-a[i]))
tot+=0.0*cost
return tot
def fitterR(ans, tlist_local, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = 20*abs(max(ans, key = abs))
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [abs(max(ans, key = abs))]*(i+1)
bguess = [-wc]*(i+1)
cguess = [wc]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess) #c
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [-np.inf]*(i+1)
clower = [0]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
#bhigher = [np.inf]*(i+1)
bhigher = [0.1]*(i+1)
chigher = [np.inf]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, \
params_0), tlist_local, ans, p0=guess, sigma=[0.1 for t in tlist_local], bounds = param_bounds, maxfev = 100000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
k = 3
popt1 = fitterR(corrRana, tlist2, k)
for i in range(k):
y = checker(tlist2, popt1[i],i+1)
plt.plot(tlist2, corrRana, tlist2, y)
plt.show()
#y = checker(tlist3, popt1[k-1],k)
#plt.plot(tlist3, real(ctlong), tlist3, y)
#plt.show()
#######
#Imag part
def fit_func2(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.sin(c[i]*x)
cost = 0.
for i in range(N):
# print(i)
cost += (corrIana[0]-a[i])
tot+=0*cost
return tot
# actual fitting function
def wrapper_fit_func2(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func2(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker2(tlist_local, vals, N):
y = []
for i in tlist_local:
# print(i)
y.append(wrapper_fit_func2(i, N, vals))
return y
def fitterI(ans, tlist_local, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = abs(max(ans, key = abs))*5
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [-abs(max(ans, key = abs))]*(i+1)
bguess = [-2]*(i+1)
cguess = [1]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess) #c
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [-100]*(i+1)
clower = [0]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
bhigher = [0.01]*(i+1)
chigher = [100]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func2(x, i+1, \
params_0), tlist_local, ans, p0=guess, sigma=[0.0001 for t in tlist_local], bounds = param_bounds, maxfev = 100000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
k1 = 3
popt2 = fitterI(corrIana, tlist2, k1)
for i in range(k1):
y = checker2(tlist2, popt2[i], i+1)
plt.plot(tlist2, corrIana, tlist2, y)
plt.show()
#tlist3 = linspace(0,1,1000)
#y = checker(tlist3, popt2[k-1],k)
#plt.plot(tlist3, imag(ctlong), tlist3, y)
#plt.show()1
2
3
#ckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]
ckAR1 = list(popt1[k-1])[:k]
#0.5 from cosine
ckAR = [0.5*x+0j for x in ckAR1]
#dress with exp(id)
#for kk in range(k):
# ckAR[kk] = ckAR[kk]*exp(1.0j*list(popt1[k-1])[3*k+kk])
ckAR.extend(conjugate(ckAR)) #just directly double
# vkAR, vkAI
vkAR1 = list(popt1[k-1])[k:2*k] #damping terms
wkAR1 = list(popt1[k-1])[2*k:3*k] #oscillating term
vkAR = [-x-1.0j*wkAR1[kk] for kk, x in enumerate(vkAR1)] #combine
vkAR.extend([-x+1.0j*wkAR1[kk] for kk, x in enumerate(vkAR1)]) #double
print(ckAR)
print(vkAR)
[(0.11144410593349063+0j), (1.1220870984713296+0j), (-0.46905439824964296+0j), (0.11144410593349063-0j), (1.1220870984713296-0j), (-0.46905439824964296-0j)]
[(0.34246550478406607-1.3300052634275397e-20j), (2.217530091540586-2.464176361865918e-14j), (4.925330410074036-3.8813583110031855j), (0.34246550478406607+1.3300052634275397e-20j), (2.217530091540586+2.464176361865918e-14j), (4.925330410074036+3.8813583110031855j)]
#ckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]
ckAI1 = list(popt2[k1-1])[:k1]
#0.5 from cosine
ckAI = [-1.0j*0.5*x for x in ckAI1]
#dress with exp(id)
#for kk in range(k1):
# ckAI[kk] = ckAI[kk]*exp(1.0j*list(popt2[k1-1])[3*k1+kk])
ckAI.extend(conjugate(ckAI)) #just directly double
# vkAR, vkAI
vkAI1 = list(popt2[k1-1])[k1:2*k1] #damping terms
wkAI1 = list(popt2[k1-1])[2*k1:3*k1] #oscillating term
vkAI = [-x-1.0j*wkAI1[kk] for kk, x in enumerate(vkAI1)] #combine
vkAI.extend([-x+1.0j*wkAI1[kk] for kk, x in enumerate(vkAI1)]) #double
print(ckAI)
print(vkAI)[0.42951890967132456j, 1.6563481683820898j, 0.15377742510150993j, -0.42951890967132456j, -1.6563481683820898j, -0.15377742510150993j]
[(1.0930279223636246-1.3691370731372476j), (0.9959034288214084-0.16655390928549837j), (1.186330164110597-2.696687419426089j), (1.0930279223636246+1.3691370731372476j), (0.9959034288214084+0.16655390928549837j), (1.186330164110597+2.696687419426089j)]
#check the spectrum of the fit
def spectrum_matsubara_approx(w, ck, vk):
"""
Calculates the approximate Matsubara correlation spectrum
from ck and vk.
Parameters
==========
w: np.ndarray
A 1D numpy array of frequencies.
ck: float
The coefficient of the exponential function.
vk: float
The frequency of the exponential function.
"""
return ck*2*(vk)/(w**2 + vk**2)
def spectrum_approx(w, ck, vk):
"""
Calculates the approximate non-Matsubara correlation spectrum
from the fitted exponential expansion of the correlation functions.
Parameters
==========
w: np.ndarray
A 1D numpy array of frequencies.
ck: list
The coefficients of the exponential terms in the correlation function.
vk: list
The (complex) frequencies of the exponential terms.
"""
sw = []
for kk,ckk in enumerate(ck):
#sw.append((ckk*(real(vk[kk]))/((w-imag(vk[kk]))**2+(real(vk[kk])**2))))
sw.append((ckk*(real(vk[kk]))/((w-imag(vk[kk]))**2+(real(vk[kk])**2))))
return sw
_____no_output_____
from cycler import cycler
wlist2 = np.linspace(-7,7 , 50000)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = spectrum_approx(wlist2,ckAR,vkAR)
s2.extend(spectrum_approx(wlist2,[1.0j*ckk for ckk in ckAI],vkAI))
#s2 = spectrum_approx(wlist2,ckAI,vkAI)
print(len(s2))
s2sum = [0. for w in wlist2]
for s22 in s2:
for kk,ww in enumerate(wlist2):
s2sum[kk] += s22[kk]
fig = plt.figure(figsize=(12,10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
default_cycler = (cycler(color=['r', 'g', 'b', 'y','c','m','k']) +
cycler(linestyle=['-', '--', ':', '-.',(0, (1, 10)), (0, (5, 10)),(0, (3, 10, 1, 10))]))
plt.rc('axes',prop_cycle=default_cycler )
axes1 = fig.add_subplot(grid[0,0])
axes1.set_yticks([0.,1.])
axes1.set_yticklabels([0,1])
y = checker(tlist2, popt1[2], 3)
axes1.plot(tlist2, corrRana,'r',linewidth=3,label="Original")
axes1.plot(tlist2, y,'g',dashes=[3,3],linewidth=3,label="Fit $k_R = 3$")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$',fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes1.locator_params(axis='y', nbins=3)
axes1.locator_params(axis='x', nbins=3)
axes1.text(2.5,0.5,"(a)",fontsize=28)
axes2 = fig.add_subplot(grid[0,1])
y = checker2(tlist2, popt2[2], 3)
axes2.plot(tlist2, corrIana,'r',linewidth=3,label="Original")
axes2.plot(tlist2, y,'g',dashes=[3,3],linewidth=3,label="Fit $k_I = 3$")
axes2.legend(loc=0)
axes2.set_yticks([0.,-0.4])
axes2.set_yticklabels([0,-0.4])
axes2.set_ylabel(r'$C_I(t)$',fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes2.locator_params(axis='y', nbins=3)
axes2.locator_params(axis='x', nbins=3)
axes2.text(12.5,-0.1,"(b)",fontsize=28)
axes3 = fig.add_subplot(grid[1,0:])
axes3.plot(wlist2, s1, 'r',linewidth=3,label="$S(\omega)$ original")
axes3.plot(wlist2, real(s2sum), 'g',dashes=[3,3],linewidth=3, label="$S(\omega)$ reconstruction")
axes3.set_yticks([0.,1.])
axes3.set_yticklabels([0,1])
axes3.set_xlim(-5,5)
axes3.set_ylabel(r'$S(\omega)$',fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$',fontsize=28)
axes3.locator_params(axis='y', nbins=3)
axes3.locator_params(axis='x', nbins=3)
axes3.legend(loc=1)
axes3.text(-4,1.5,"(c)",fontsize=28)
fig.savefig("figures/figFitCspec.pdf")
12
Q2 = []
NR = len(ckAR)
NI = len(ckAI)
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)_____no_output_____
NC = 11
#Q2 = [Q for kk in range(NR+NI)]
#print(Q2)
options = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
import time
start = time.time()
#HEOMFit = BosonicHEOMSolver(Hsys, Q2, ckAR2, ckAI2, vkAR2, vkAI2, NC, options=options)
HEOMFitC = BosonicHEOMSolver(Hsys, Q2, ckAR, ckAI, vkAR, vkAI, NC, options=options)
print("hello")
end = time.time()
print(end - start)hello
470.1895263195038
tlist4 = np.linspace(0, 30*pi/Del, 600)
rho0 = basis(2,0) * basis(2,0).dag()
import time
start = time.time()
resultFit = HEOMFitC.run(rho0, tlist4)
print("hello")
end = time.time()
print(end - start)_____no_output_____# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresonding to groundstate
P11p=basis(2,0) * basis(2,0).dag()
P22p=basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresonding to coherence
P12p=basis(2,0) * basis(2,1).dag()
# Calculate expectation values in the bases
P11expC11k33L = expect(resultFit.states, P11p)
P22expC11k33L = expect(resultFit.states, P22p)
P12expC11k33L = expect(resultFit.states, P12p)
_____no_output_____qsave(P11expC11k33L,'P11expC12k33L')
qsave(P11exp11K4NK1TL,'P11exp11K4NK1TL')
qsave(P11exp11K3NK1TL,'P11exp11K3NK1TL')
qsave(P11exp11K3NK2TL,'P11exp11K3NK2TL')_____no_output_____P11expC11k33L=qload('data/P11expC12k33L')
P11exp11K4NK1TL=qload('data/P11exp11K4NK1TL')
P11exp11K3NK1TL=qload('data/P11exp11K3NK1TL')
P11exp11K3NK2TL=qload('data/P11exp11K3NK2TL')Loaded ndarray object.
Loaded ndarray object.
Loaded ndarray object.
Loaded ndarray object.
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False_____no_output_____tlist4 = np.linspace(0, 4*pi/Del, 600)
# Plot the results
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(12,15))
axes[0].set_yticks([0.6,0.8,1])
axes[0].set_yticklabels([0.6,0.8,1])
axes[0].plot(tlist4, np.real(P11expC11k33L), 'y', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes[0].plot(tlist4, np.real(P11exp11K3NK1TL), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $N_k=1$ & Terminator")
axes[0].plot(tlist4, np.real(P11exp11K3NK2TL), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $N_k=2$ & Terminator")
axes[0].plot(tlist4, np.real(P11exp11K4NK1TL), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $N_k=1$ & Terminator")
axes[0].set_ylabel(r'$\rho_{11}$',fontsize=30)
axes[0].set_xlabel(r'$t\;\omega_c$',fontsize=30)
axes[0].locator_params(axis='y', nbins=3)
axes[0].locator_params(axis='x', nbins=3)
axes[0].legend(loc=0, fontsize=25)
axes[1].set_yticks([0,0.01])
axes[1].set_yticklabels([0,0.01])
#axes[0].plot(tlist4, np.real(P11exp11K3NK1TL)-np.real(P11expC11k33L), 'b-.', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes[1].plot(tlist4, np.real(P11exp11K3NK1TL)-np.real(P11expC11k33L), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=1$ & Terminator")
axes[1].plot(tlist4, np.real(P11exp11K3NK2TL)-np.real(P11expC11k33L), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=2$ & Terminator")
axes[1].plot(tlist4, np.real(P11exp11K4NK1TL)-np.real(P11expC11k33L), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $K=1$ & Terminator")
axes[1].set_ylabel(r'$\rho_{11}$ difference',fontsize=30)
axes[1].set_xlabel(r'$t\;\omega_c$',fontsize=30)
axes[1].locator_params(axis='y', nbins=3)
axes[1].locator_params(axis='x', nbins=3)
#axes[1].legend(loc=0, fontsize=25)
fig.savefig("figures/figFit.pdf")_____no_output_____tlist4 = np.linspace(0, 4*pi/Del, 600)
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12,5))
axes.plot(tlist4, np.real(P12expC11k33L), 'y', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes.plot(tlist4, np.real(P12exp11K3NK1TL), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=1$ & Terminator")
axes.plot(tlist4, np.real(P12exp11K3NK2TL), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=2$ & Terminator")
axes.plot(tlist4, np.real(P12exp11K4NK1TL), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $K=1$ & Terminator")
axes.set_ylabel(r'$\rho_{12}$',fontsize=28)
axes.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes.locator_params(axis='y', nbins=6)
axes.locator_params(axis='x', nbins=6)
axes.legend(loc=0)
_____no_output_____from qutip.ipynbtools import version_table
version_table()_____no_output_____
</code>
| {
"repository": "bitcoffe/bofin",
"path": "examples/example-1d-Spin-bath-model-ohmic-fitting.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 7,
"size": 577193,
"hexsha": "cb2204cfc23fecace2ff127350a118c47e492f84",
"max_line_length": 109764,
"avg_line_length": 305.2316234796,
"alphanum_fraction": 0.9151306409
} |
# Notebook from annacuomo/TenK10K_analyses_HPC
Path: notebooks/estimate_betas_CellRegMap_Bcells_noplasma.ipynb
<code>
import scanpy as sc
import pandas as pd
import xarray as xr
from numpy import ones
from pandas_plink import read_plink1_bin
from numpy.linalg import cholesky
import matplotlib.pyplot as plt
import time
from limix.qc import quantile_gaussianize
import cellregmap
cellregmap _____no_output_____from cellregmap import estimate_betas_____no_output_____mydir = "/share/ScratchGeneral/anncuo/OneK1K/"
input_files_dir = "/share/ScratchGeneral/anncuo/OneK1K/input_files_CellRegMap/"_____no_output_____chrom = 8_____no_output_____## sample mapping file
## this file will map cells to donors
## here, B cells only
sample_mapping_file = input_files_dir+"smf_Bcells_noplasma.csv"
sample_mapping = pd.read_csv(sample_mapping_file, dtype={"individual_long": str, "genotype_individual_id": str, "phenotype_sample_id": str}, index_col=0)_____no_output_____sample_mapping.shape_____no_output_____sample_mapping.head()_____no_output_____## extract unique individuals
donors0 = sample_mapping["genotype_individual_id"].unique()
donors0.sort()
print("Number of unique donors: {}".format(len(donors0)))Number of unique donors: 981
#### kinship file_____no_output_____## read in GRM (genotype relationship matrix; kinship matrix)
kinship_file="/share/ScratchGeneral/anncuo/OneK1K/input_files_CellRegMap/grm_wide.csv"
K = pd.read_csv(kinship_file, index_col=0)
K.index = K.index.astype('str')
assert all(K.columns == K.index) #symmetric matrix, donors x donors/share/ScratchGeneral/anncuo/jupyter/conda_notebooks/envs/cellregmap_notebook/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3457: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False.
exec(code_obj, self.user_global_ns, self.user_ns)
K = xr.DataArray(K.values, dims=["sample_0", "sample_1"], coords={"sample_0": K.columns, "sample_1": K.index})
K = K.sortby("sample_0").sortby("sample_1")
donors = sorted(set(list(K.sample_0.values)).intersection(donors0))
print("Number of donors after kinship intersection: {}".format(len(donors)))Number of donors after kinship intersection: 981
## subset to relevant donors
K = K.sel(sample_0=donors, sample_1=donors)
assert all(K.sample_0 == donors)
assert all(K.sample_1 == donors)_____no_output_____plt.matshow(K)_____no_output_____## and decompose such as K = hK @ hK.T (using Cholesky decomposition)
hK = cholesky(K.values)
hK = xr.DataArray(hK, dims=["sample", "col"], coords={"sample": K.sample_0.values})
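# optional sanity check (run it before `del K` below): the Cholesky factor should
# reproduce the kinship matrix up to numerical error, e.g.
# from numpy import allclose
# assert allclose(hK.values @ hK.values.T, K.values)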
assert all(hK.sample.values == K.sample_0.values)_____no_output_____del K
print("Sample mapping number of rows BEFORE intersection: {}".format(sample_mapping.shape[0]))
## subsample sample mapping file to donors in the kinship matrix
sample_mapping = sample_mapping[sample_mapping["genotype_individual_id"].isin(donors)]
print("Sample mapping number of rows AFTER intersection: {}".format(sample_mapping.shape[0]))Sample mapping number of rows BEFORE intersection: 130091
Sample mapping number of rows AFTER intersection: 130091
## use sel from xarray to expand hK (using the sample mapping file)
hK_expanded = hK.sel(sample=sample_mapping["genotype_individual_id"].values)
assert all(hK_expanded.sample.values == sample_mapping["genotype_individual_id"].values)_____no_output_____hK_expanded.shape_____no_output_____#### phenotype file_____no_output_____# open anndata
my_file = "/share/ScratchGeneral/anncuo/OneK1K/expression_objects/sce"+str(chrom)+".h5ad"
adata = sc.read(my_file)
# sparse to dense
mat = adata.raw.X.todense()
# make pandas dataframe
mat_df = pd.DataFrame(data=mat.T, index=adata.raw.var.index, columns=adata.obs.index)
# turn into xr array
phenotype = xr.DataArray(mat_df.values, dims=["trait", "cell"], coords={"trait": mat_df.index.values, "cell": mat_df.columns.values})
phenotype = phenotype.sel(cell=sample_mapping["phenotype_sample_id"].values)_____no_output_____del mat
del mat_df_____no_output_____phenotype.shape_____no_output_____phenotype.head()_____no_output_____#### genotype file_____no_output_____## read in genotype file (plink format)
plink_folder = "/share/ScratchGeneral/anncuo/OneK1K/plink_files/"
plink_file = plink_folder+"plink_chr"+str(chrom)+".bed"
G = read_plink1_bin(plink_file)Mapping files: 100%|██████████| 3/3 [00:01<00:00, 3.00it/s]
G_____no_output_____G.shape_____no_output_____# change this to select known eQTLs instead_____no_output_____# Filter on specific gene-SNP pairs
# eQTL from B cells (B IN + B Mem)
Bcell_eqtl_file = input_files_dir+"fvf_Bcell_eqtls.csv"
Bcell_eqtl = pd.read_csv(Bcell_eqtl_file, index_col = 0)
Bcell_eqtl.head()_____no_output_____## SELL (chr1, index=30)
## REL (chr2, index=6)
## BLK (chr8, index=4)
## ORMDL3 (chr17, index=2)_____no_output_____genes = Bcell_eqtl[Bcell_eqtl['chrom']==int(chrom)]['feature'].unique()
genes_____no_output_____# (1) gene name (feature_id)
gene_name = genes[4]
gene_name_____no_output_____# select SNPs for a given gene
leads = Bcell_eqtl[Bcell_eqtl['feature']==gene_name]['snp_id'].unique()
leads_____no_output_____#breakpoint()
G_sel = G[:,G['snp'].isin(leads)]_____no_output_____G_sel_____no_output_____# expand out genotypes from cells to donors (and select relevant donors in the same step)
G_expanded = G_sel.sel(sample=sample_mapping["individual_long"].values)
# assert all(hK_expanded.sample.values == G_expanded.sample.values)/share/ScratchGeneral/anncuo/jupyter/conda_notebooks/envs/cellregmap_notebook/lib/python3.7/site-packages/xarray/core/indexing.py:1227: PerformanceWarning: Slicing with an out-of-order index is generating 281 times more chunks
return self.array[key]
G_expanded.shape_____no_output_____del G_____no_output_____#### context file_____no_output_____# cells (B cells only) by PCs
C_file = input_files_dir+"PCs_Bcells_noplasma.csv"
C = pd.read_csv(C_file, index_col = 0)
# C_file = input_files_dir+"PCs_Bcells.csv.pkl"
# C = pd.read_pickle(C_file)
C = xr.DataArray(C.values, dims=["cell", "pc"], coords={"cell": C.index.values, "pc": C.columns.values})
C = C.sel(cell=sample_mapping["phenotype_sample_id"].values)
assert all(C.cell.values == sample_mapping["phenotype_sample_id"].values)_____no_output_____C.shape_____no_output_____# C_gauss = quantile_gaussianize(C)_____no_output_____# select gene
y = phenotype.sel(trait=gene_name)_____no_output_____[(y == 0).astype(int).sum()/len(y)]_____no_output_____plt.hist(y)
plt.show()_____no_output_____y = quantile_gaussianize(y)_____no_output_____plt.hist(y)
plt.show()_____no_output_____n_cells = phenotype.shape[1]
W = ones((n_cells, 1))_____no_output_____del phenotype_____no_output_____start_time = time.time()
GG = G_expanded.values
print("--- %s seconds ---" % (time.time() - start_time))--- 0.16577982902526855 seconds ---
# del G_expanded
del G_sel_____no_output_____snps = G_expanded["snp"].values
snps_____no_output_____# get MAF
MAF_dir = "/share/ScratchGeneral/anncuo/OneK1K/snps_with_maf_greaterthan0.05/"
myfile = MAF_dir+"chr"+str(chrom)+".SNPs.txt"
df_maf = pd.read_csv(myfile, sep="\t")
df_maf.head()_____no_output_____import numpy as np_____no_output_____mafs = np.array([])
for snp in snps:
mafs = np.append(mafs, df_maf[df_maf["SNP"] == snp]["MAF"].values)
mafs_____no_output_____start_time = time.time()
betas = estimate_betas(y=y, W=W, E=C.values[:,0:10], G=GG, hK=hK_expanded, maf=mafs)
print("--- %s seconds ---" % (time.time() - start_time))_____no_output_____beta_G = betas[0]
beta_GxC = betas[1][0]_____no_output_____beta_G_df = pd.DataFrame({"chrom":G_expanded.chrom.values,
"betaG":beta_G,
"variant":G_expanded.snp.values})
beta_G_df.head()_____no_output_____cells = phenotype["cell"].values
snps = G_expanded["variant"].values
beta_GxC_df = pd.DataFrame(data = beta_GxC, columns=snps, index=cells)
beta_GxC_df.head()_____no_output_____## took over an hour to run for one SNP!_____no_output_____gene_name_____no_output_____folder = mydir + "CRM_interaction/Bcells_noplasma_Bcell_eQTLs/betas/"
outfilename = f"{folder}{gene_name}"
print(outfilename)_____no_output_____beta_G_df.to_csv(outfilename+"_betaG.csv")_____no_output_____beta_GxC_df.to_csv(outfilename+"_betaGxC.csv")_____no_output_____
</code>
| {
"repository": "annacuomo/TenK10K_analyses_HPC",
"path": "notebooks/estimate_betas_CellRegMap_Bcells_noplasma.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": null,
"size": 190573,
"hexsha": "cb2265400b63110153d3d1b22922d06dd3b56233",
"max_line_length": 51600,
"avg_line_length": 70.8449814126,
"alphanum_fraction": 0.6711496382
} |
# Notebook from liaison/python
Path: tensorflow/tensorflow_simple_linear_regression.ipynb
### Abstract
This is an example showing how to use the basic API of TensorFlow to construct a linear regression model.
This notebook is an exercise adapted from [the Medium.com blog](https://medium.com/@saxenarohan97/intro-to-tensorflow-solving-a-simple-regression-problem-e87b42fd4845).
Note that recent versions of TensorFlow also offer higher-level APIs, such as LinearClassifier, that provide a scikit-learn alike machine learning interface._____no_output_____
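For comparison, here is a minimal sketch of that higher-level route (for regression, the counterpart of LinearClassifier is `tf.estimator.LinearRegressor`; this sketch uses TF 1.x APIs to match the rest of this notebook and assumes the `train_features`/`train_prices` arrays prepared below):_____no_output_____
<code>
# hedged sketch of the scikit-learn alike estimator API (TF 1.x)
feature_columns = [tf.feature_column.numeric_column('x', shape=[12])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
input_fn = tf.estimator.inputs.numpy_input_fn(
{'x': train_features}, train_prices, batch_size=32, num_epochs=None, shuffle=True)
estimator.train(input_fn=input_fn, steps=1000)_____no_output_____
</code>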
<code>
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_boston
from sklearn.preprocessing import scale
from matplotlib import pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6_____no_output_____
</code>
Split the data into training, validation and test sets._____no_output_____
<code>
# Retrieve the data
bunch = load_boston()
print('total data shape:', bunch.data.shape)
total_features = bunch.data[:, range(12)]
total_prices = bunch.data[:, [12]]
print('features shape:', total_features.shape, 'target shape:', total_prices.shape)
# new in 0.18 version
# total_features, total_prices = load_boston(True)
# Keep 300 samples for training
train_features = scale(total_features[:300])
train_prices = total_prices[:300]
print('training dataset:', len(train_features))
print('feature example:', train_features[0:1])
print('mean of feature 0:', np.asarray(train_features[:, 0]).mean())
# Keep 100 samples for validation
valid_features = scale(total_features[300:400])
valid_prices = total_prices[300:400]
print('validation dataset:', len(valid_features))
# Keep remaining samples as test set
test_features = scale(total_features[400:])
test_prices = total_prices[400:]
print('test dataset:', len(test_features))total data shape: (506, 13)
features shape: (506, 12) target shape: (506, 1)
training dataset: 300
feature example: [[-0.64113113 0.10080399 -1.03067021 -0.31448545 0.217757 0.21942717
0.08260981 -0.09559716 -2.15826599 -0.23254428 -1.00268807 0.42054571]]
mean of feature 0: 2.36847578587e-17
validation dataset: 100
test dataset: 106
</code>
#### Linear Regression Model _____no_output_____
<code>
w = tf.Variable(tf.truncated_normal([12, 1], mean=0.0, stddev=1.0, dtype=tf.float64))
b = tf.Variable(tf.zeros(1, dtype = tf.float64))_____no_output_____def calc(x, y):
'''
linear regression model that returns (predictions, L2_error)
'''
# Returns predictions and error
predictions = tf.add(b, tf.matmul(x, w))
error = tf.reduce_mean(tf.square(y - predictions))
return [ predictions, error ]_____no_output_____y, cost = calc(train_features, train_prices)
# augment the model with the regularisation
L1_regu_cost = tf.add(cost, tf.reduce_mean(tf.abs(w)))
L2_regu_cost = tf.add(cost, tf.reduce_mean(tf.square(w)))_____no_output_____def train(cost, learning_rate=0.025, epochs=300):
'''
run the cost computation graph with the gradient descent optimizer.
'''
errors = [[], []]
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
with sess:
sess.run(init)
for i in range(epochs):
sess.run(optimizer)
errors[0].append(i+1)
errors[1].append(sess.run(cost))
# Get the parameters of the linear regression model.
print('weights:\n', sess.run(w))
print('bias:', sess.run(b))
valid_cost = calc(valid_features, valid_prices)[1]
print('Validation error =', sess.run(valid_cost), '\n')
test_cost = calc(test_features, test_prices)[1]
print('Test error =', sess.run(test_cost), '\n')
return errors_____no_output_____# with L1 regularisation, the testing error is slightly improved, i.e. 75 vs. 76
# similarly with L1 regularisation, the L2 regularisation improves the testing error to 75 as well.
epochs = 500
errors_lr_005 = train(cost, learning_rate=0.005, epochs=epochs)
errors_lr_025 = train(cost, learning_rate=0.025, epochs=epochs)weights:
[[ 0.01976278]
[ 0.2216641 ]
[ 0.33897141]
[ 0.14712622]
[ 1.49651714]
[-3.30264682]
[ 2.19215736]
[ 0.75213107]
[-0.2088241 ]
[-0.35450866]
[ 0.65472403]
[ 0.07399816]]
bias: [ 10.63383511]
Validation error = 39.140050724
Test error = 76.3269062042
weights:
[[ 0.40265458]
[ 0.38716099]
[ 0.40915654]
[ 0.11570143]
[ 0.7819646 ]
[-3.46135321]
[ 2.58540755]
[ 0.64041114]
[-0.13593196]
[-0.43936893]
[ 0.54024542]
[ 0.12436986]]
bias: [ 10.70416667]
Validation error = 37.305764913
Test error = 75.9908063646
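# Sketch, not executed in the recorded output above: the L1/L2 results quoted
# in the earlier comments would be obtained by training on the regularised
# costs defined before, e.g.:
errors_l1 = train(L1_regu_cost, learning_rate=0.025, epochs=epochs)
errors_l2 = train(L2_regu_cost, learning_rate=0.025, epochs=epochs)_____no_output_____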
ax = plt.subplot(111)
plt.plot(errors_lr_005[1], color='green', label='learning rate 0.005')
plt.plot(errors_lr_025[1], color='red', label='learning rate 0.025')
#ax = plt.plot(errors[0], errors[1], 'r--')
plt.axis([0, epochs, 0, 200])
plt.title('Evolution of L2 errors along each epoch')
plt.xlabel('epoch')
plt.ylabel('L2 error')
_ = plt.legend(loc='best')
plt.show()_____no_output_____
</code>
The **higher** the learning rate, the **faster** the model converges. But if the learning rate is too large, it can also prevent the model from converging._____no_output_____
| {
"repository": "liaison/python",
"path": "tensorflow/tensorflow_simple_linear_regression.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 3,
"size": 53891,
"hexsha": "cb239e4d7a8f027b1f2ba78869fe89a979b3e537",
"max_line_length": 45024,
"avg_line_length": 168.9373040752,
"alphanum_fraction": 0.8877920246
} |
# Notebook from edgarriba/lightning-flash
Path: flash_notebooks/image_classification.ipynb
<a href="https://colab.research.google.com/github/PyTorchLightning/lightning-flash/blob/master/flash_notebooks/image_classification.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>_____no_output_____In this notebook, we'll go over the basics of Lightning Flash by finetuning and predicting with an ImageClassifier on the [Hymenoptera Dataset](https://www.kaggle.com/ajayrana/hymenoptera-data) containing ants and bees images.
# Finetuning
Finetuning consists of four steps:
- 1. Train a source neural network model on a source dataset. For computer vision, this is traditionally the [ImageNet dataset](http://www.image-net.org/search?q=cat). As training is costly, libraries such as [Torchvision](https://pytorch.org/docs/stable/torchvision/index.html) provide popular pre-trained model architectures. In this notebook, we will be using their [resnet-18](https://pytorch.org/hub/pytorch_vision_resnet/).
- 2. Create a new neural network called the target model. Its architecture replicates the source model's architecture and parameters, except for the last layer, which is removed. This model without its last layer is traditionally called a backbone.
- 3. Add new layers after the backbone, where the final output size is the number of target dataset categories. These new layers, traditionally called the head, will be randomly initialized, while the backbone will keep its pre-trained weights from ImageNet.
- 4. Train the target model on a target dataset, such as the Hymenoptera Dataset with ants and bees. However, freezing some layers at the start of training, such as the backbone, tends to be more stable. In Flash, this can easily be done with `trainer.finetune(..., strategy="freeze")`. It is also common to `freeze/unfreeze` the backbone. In `Flash`, it can be done with `trainer.finetune(..., strategy="freeze_unfreeze")`. If one wants more control over the unfreeze flow, Flash supports `trainer.finetune(..., strategy=MyFinetuningStrategy())`, where `MyFinetuningStrategy` subclasses `pytorch_lightning.callbacks.BaseFinetuning` (a minimal sketch of such a strategy follows below).
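A minimal sketch of such a custom strategy (hedged: the `milestone` logic and the `pl_module.backbone` attribute are illustrative assumptions, not taken from this notebook):_____no_output_____
<code>
from pytorch_lightning.callbacks import BaseFinetuning

class MyFinetuningStrategy(BaseFinetuning):
    def __init__(self, milestone: int = 5):
        super().__init__()
        self.milestone = milestone

    def freeze_before_training(self, pl_module):
        # Keep the pre-trained backbone frozen when training starts.
        self.freeze(pl_module.backbone)

    def finetune_function(self, pl_module, current_epoch, optimizer, opt_idx):
        # After `milestone` epochs, unfreeze the backbone so it trains too.
        if current_epoch == self.milestone:
            self.unfreeze_and_add_param_group(pl_module.backbone, optimizer)_____no_output_____
</code>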
---
- Give us a ⭐ [on Github](https://www.github.com/PytorchLightning/pytorch-lightning/)
- Check out [Flash documentation](https://lightning-flash.readthedocs.io/en/latest/)
- Check out [Lightning documentation](https://pytorch-lightning.readthedocs.io/en/latest/)
- Join us [on Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)_____no_output_____
<code>
%%capture
! pip install git+https://github.com/PyTorchLightning/pytorch-flash.git_____no_output_____
</code>
### The notebook runtime has to be re-started once Flash is installed._____no_output_____
<code>
# https://github.com/streamlit/demo-self-driving/issues/17
if 'google.colab' in str(get_ipython()):
import os
os.kill(os.getpid(), 9)_____no_output_____import flash
from flash.data.utils import download_data
from flash.vision import ImageClassificationData, ImageClassifier_____no_output_____
</code>
## 1. Download data
The data are downloaded from a URL and saved in a 'data' directory._____no_output_____
<code>
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", 'data/')_____no_output_____
</code>
## 2. Load the data
Flash Tasks have built-in DataModules that you can use to organize your data. Pass in train, validation and test folders and Flash will take care of the rest.
Create an ImageClassificationData object from folders of images arranged in this way:
train/dog/xxx.png
train/dog/xxy.png
train/dog/xxz.png
train/cat/123.png
train/cat/nsdf3.png
train/cat/asd932.png
Note: The content of each sub-folder will be considered as a new class._____no_output_____
<code>
datamodule = ImageClassificationData.from_folders(
train_folder="data/hymenoptera_data/train/",
val_folder="data/hymenoptera_data/val/",
test_folder="data/hymenoptera_data/test/",
)_____no_output_____
</code>
### 3. Build the model
Create the ImageClassifier task. By default, the ImageClassifier task uses a [resnet-18](https://pytorch.org/hub/pytorch_vision_resnet/) backbone to train or finetune your model.
For the [Hymenoptera Dataset](https://www.kaggle.com/ajayrana/hymenoptera-data) containing ants and bees images, ``datamodule.num_classes`` will be 2.
The backbone can easily be changed with `ImageClassifier(backbone="resnet50")`, or you could provide your own with `ImageClassifier(backbone=my_backbone)`; see the sketch below._____no_output_____
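For example (a sketch; `resnet50` is one of the built-in backbones, and this model is not used in the rest of the notebook):_____no_output_____
<code>
# Same task, different backbone (sketch only).
model_resnet50 = ImageClassifier(backbone="resnet50", num_classes=datamodule.num_classes)_____no_output_____
</code>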
<code>
model = ImageClassifier(num_classes=datamodule.num_classes)_____no_output_____
</code>
### 4. Create the trainer. Run once on data
The trainer object can be used for training or fine-tuning tasks on new sets of data.
You can pass in parameters to control the training routine: limit the number of epochs, run on GPUs or TPUs, etc.
For more details, read the [Trainer Documentation](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html).
In this demo, we will limit the fine-tuning to run for just a few epochs using max_epochs=3._____no_output_____
<code>
trainer = flash.Trainer(max_epochs=3)_____no_output_____
</code>
### 5. Finetune the model_____no_output_____
<code>
trainer.finetune(model, datamodule=datamodule, strategy="freeze_unfreeze")_____no_output_____
</code>
### 6. Test the model_____no_output_____
<code>
trainer.test()_____no_output_____
</code>
### 7. Save it!_____no_output_____
<code>
trainer.save_checkpoint("image_classification_model.pt")_____no_output_____
</code>
# Predicting_____no_output_____### 1. Load the model from a checkpoint_____no_output_____
<code>
model = ImageClassifier.load_from_checkpoint("https://flash-weights.s3.amazonaws.com/image_classification_model.pt")_____no_output_____
</code>
### 2a. Predict what's on a few images! ants or bees?_____no_output_____
<code>
predictions = model.predict([
"data/hymenoptera_data/val/bees/65038344_52a45d090d.jpg",
"data/hymenoptera_data/val/bees/590318879_68cf112861.jpg",
"data/hymenoptera_data/val/ants/540543309_ddbb193ee5.jpg",
])
print(predictions)_____no_output_____
</code>
### 2b. Or generate predictions with a whole folder!_____no_output_____
<code>
datamodule = ImageClassificationData.from_folders(predict_folder="data/hymenoptera_data/predict/")
predictions = flash.Trainer().predict(model, datamodule=datamodule)
print(predictions)_____no_output_____
</code>
<code style="color:#792ee5;">
<h1> <strong> Congratulations - Time to Join the Community! </strong> </h1>
</code>
Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the Lightning movement, you can do so in the following ways!
### Help us build Flash by adding support for new data-types and new tasks.
Flash aims to become the first task hub, so anyone can get started building amazing applications using deep learning.
If you are interested, please open a PR with your contributions!!!
### Star [Lightning](https://github.com/PyTorchLightning/pytorch-lightning) on GitHub
The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building.
* Please, star [Lightning](https://github.com/PyTorchLightning/pytorch-lightning)
### Join our [Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)!
The best way to keep up to date on the latest advancements is to join our community! Make sure to introduce yourself and share your interests in the `#general` channel
### Interested in SOTA AI models? Check out [Bolt](https://github.com/PyTorchLightning/lightning-bolts)
Bolts has a collection of state-of-the-art models, all implemented in [Lightning](https://github.com/PyTorchLightning/pytorch-lightning), that can be easily integrated within your own projects.
* Please, star [Bolt](https://github.com/PyTorchLightning/lightning-bolts)
### Contributions!
The best way to contribute to our community is to become a code contributor! At any time you can go to [Lightning](https://github.com/PyTorchLightning/pytorch-lightning) or [Bolt](https://github.com/PyTorchLightning/lightning-bolts) GitHub Issues page and filter for "good first issue".
* [Lightning good first issue](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
* [Bolt good first issue](https://github.com/PyTorchLightning/lightning-bolts/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
* You can also contribute your own notebooks with useful examples!
### Great thanks from the entire PyTorch Lightning Team for your interest!
<img src="https://raw.githubusercontent.com/PyTorchLightning/lightning-flash/18c591747e40a0ad862d4f82943d209b8cc25358/docs/source/_static/images/logo.svg" width="800" height="200" />_____no_output_____
| {
"repository": "edgarriba/lightning-flash",
"path": "flash_notebooks/image_classification.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 13172,
"hexsha": "cb25a201ca2750923ad92f7a726d93d49511af9d",
"max_line_length": 633,
"avg_line_length": 35.0319148936,
"alphanum_fraction": 0.6305040996
} |
# Notebook from patrick-kidger/diffrax
Path: examples/symbolic_regression.ipynb
# Symbolic Regression_____no_output_____This example combines neural differential equations with regularised evolution to discover the equations
$\frac{\mathrm{d} x}{\mathrm{d} t}(t) = \frac{y(t)}{1 + y(t)}$
$\frac{\mathrm{d} y}{\mathrm{d} t}(t) = \frac{-x(t)}{1 + x(t)}$
directly from data.
**References:**
This example appears as an example in:
```bibtex
@phdthesis{kidger2021on,
title={{O}n {N}eural {D}ifferential {E}quations},
author={Patrick Kidger},
year={2021},
school={University of Oxford},
}
```
Whilst drawing heavy inspiration from:
```bibtex
@inproceedings{cranmer2020discovering,
title={{D}iscovering {S}ymbolic {M}odels from {D}eep {L}earning with {I}nductive
{B}iases},
author={Cranmer, Miles and Sanchez Gonzalez, Alvaro and Battaglia, Peter and
Xu, Rui and Cranmer, Kyle and Spergel, David and Ho, Shirley},
booktitle={Advances in Neural Information Processing Systems},
publisher={Curran Associates, Inc.},
year={2020},
}
@software{cranmer2020pysr,
title={PySR: Fast \& Parallelized Symbolic Regression in Python/Julia},
author={Miles Cranmer},
publisher={Zenodo},
url={http://doi.org/10.5281/zenodo.4041459},
year={2020},
}
```
This example is available as a Jupyter notebook [here](https://github.com/patrick-kidger/diffrax/blob/main/examples/symbolic_regression.ipynb)._____no_output_____
<code>
import tempfile
from typing import List
import equinox as eqx # https://github.com/patrick-kidger/equinox
import jax
import jax.numpy as jnp
import optax # https://github.com/deepmind/optax
import pysr # https://github.com/MilesCranmer/PySR
import sympy
# Note that PySR, which we use for symbolic regression, uses Julia as a backend.
# You'll need to install a recent version of Julia if you don't have one.
# (And can get funny errors if you have a too-old version of Julia already.)
# You may also need to restart Python after running `pysr.install()` the first time.
pysr.silence_julia_warning()
pysr.install(quiet=True)_____no_output_____
</code>
Now for a bunch of helpers. We'll use these in a moment; skip over them for now._____no_output_____
<code>
def quantise(expr, quantise_to):
if isinstance(expr, sympy.Float):
return expr.func(round(float(expr) / quantise_to) * quantise_to)
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])
class SymbolicFn(eqx.Module):
fn: callable
parameters: jnp.ndarray
def __call__(self, x):
# Dummy batch/unbatching. PySR assumes its JAX'd symbolic functions act on
# tensors with a single batch dimension.
return jnp.squeeze(self.fn(x[None], self.parameters))
class Stack(eqx.Module):
modules: List[eqx.Module]
def __call__(self, x):
return jnp.stack([module(x) for module in self.modules], axis=-1)
def expr_size(expr):
return sum(expr_size(v) for v in expr.args) + 1
def _replace_parameters(expr, parameters, i_ref):
if isinstance(expr, sympy.Float):
i_ref[0] += 1
return expr.func(parameters[i_ref[0]])
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(
*[_replace_parameters(arg, parameters, i_ref) for arg in expr.args]
)
def replace_parameters(expr, parameters):
i_ref = [-1] # Distinctly sketchy approach to making this conversion.
return _replace_parameters(expr, parameters, i_ref)_____no_output_____
</code>
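For instance, here is an illustrative check of two of these helpers (not part of the original example):_____no_output_____
<code>
x = sympy.Symbol("x")
expr = sympy.Float(0.4987) * x
print(quantise(expr, 0.01))              # constants snapped to the 0.01 grid: ~0.5*x
print(replace_parameters(expr, [0.75]))  # constants swapped for new values: 0.75*x_____no_output_____
</code>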
Okay, let's get started.
We start by running the [Neural ODE example](./neural_ode.ipynb).
Then we extract the learnt neural vector field, and symbolically regress across this.
Finally we fine-tune the resulting symbolic expression.
_____no_output_____
<code>
def main(
symbolic_dataset_size=2000,
symbolic_num_populations=100,
symbolic_population_size=20,
symbolic_migration_steps=4,
symbolic_mutation_steps=30,
symbolic_descent_steps=50,
pareto_coefficient=2,
fine_tuning_steps=500,
fine_tuning_lr=3e-3,
quantise_to=0.01,
):
#
# First obtain a neural approximation to the dynamics.
# We begin by running the previous example.
#
# Runs the Neural ODE example.
# This defines the variables `ts`, `ys`, `model`.
print("Training neural differential equation.")
%run neural_ode.ipynb
#
# Now symbolically regress across the learnt vector field, to obtain a Pareto
# frontier of symbolic equations, that trades loss against complexity of the
# equation. Select the "best" from this frontier.
#
print("Symbolically regressing across the vector field.")
vector_field = model.func.mlp # noqa: F821
dataset_size, length_size, data_size = ys.shape # noqa: F821
in_ = ys.reshape(dataset_size * length_size, data_size) # noqa: F821
in_ = in_[:symbolic_dataset_size]
out = jax.vmap(vector_field)(in_)
with tempfile.TemporaryDirectory() as tempdir:
symbolic_regressor = pysr.PySRRegressor(
niterations=symbolic_migration_steps,
ncyclesperiteration=symbolic_mutation_steps,
populations=symbolic_num_populations,
npop=symbolic_population_size,
optimizer_iterations=symbolic_descent_steps,
optimizer_nrestarts=1,
procs=1,
verbosity=0,
tempdir=tempdir,
temp_equation_file=True,
output_jax_format=True,
)
symbolic_regressor.fit(in_, out)
best_equations = symbolic_regressor.get_best()
expressions = [b.sympy_format for b in best_equations]
symbolic_fns = [
SymbolicFn(b.jax_format["callable"], b.jax_format["parameters"])
for b in best_equations
]
#
# Now the constants in this expression have been optimised for regressing across
# the neural vector field. This was good enough to obtain the symbolic expression,
# but won't quite be perfect -- some of the constants will be slightly off.
#
# To fix this we now plug our symbolic function back into the original dataset
# and apply gradient descent.
#
print("Optimising symbolic expression.")
symbolic_fn = Stack(symbolic_fns)
flat, treedef = jax.tree_flatten(
model, is_leaf=lambda x: x is model.func.mlp # noqa: F821
)
flat = [symbolic_fn if f is model.func.mlp else f for f in flat] # noqa: F821
symbolic_model = jax.tree_unflatten(treedef, flat)
@eqx.filter_grad
def grad_loss(symbolic_model):
vmap_model = jax.vmap(symbolic_model, in_axes=(None, 0))
pred_ys = vmap_model(ts, ys[:, 0]) # noqa: F821
return jnp.mean((ys - pred_ys) ** 2) # noqa: F821
optim = optax.adam(fine_tuning_lr)
opt_state = optim.init(eqx.filter(symbolic_model, eqx.is_inexact_array))
@eqx.filter_jit
def make_step(symbolic_model, opt_state):
grads = grad_loss(symbolic_model)
updates, opt_state = optim.update(grads, opt_state)
symbolic_model = eqx.apply_updates(symbolic_model, updates)
return symbolic_model, opt_state
for _ in range(fine_tuning_steps):
symbolic_model, opt_state = make_step(symbolic_model, opt_state)
#
# Finally we round each constant to the nearest multiple of `quantise_to`.
#
trained_expressions = []
for module, expression in zip(symbolic_model.func.mlp.modules, expressions):
expression = replace_parameters(expression, module.parameters.tolist())
expression = quantise(expression, quantise_to)
trained_expressions.append(expression)
print(f"Expressions found: {trained_expressions}")_____no_output_____main()Training neural differential equation.
Step: 0, Loss: 0.1665748506784439, Computation time: 24.18653130531311
Step: 100, Loss: 0.011155527085065842, Computation time: 0.09058809280395508
Step: 200, Loss: 0.006481727119535208, Computation time: 0.0928184986114502
Step: 300, Loss: 0.001382559770718217, Computation time: 0.09850335121154785
Step: 400, Loss: 0.001073717838153243, Computation time: 0.09830045700073242
Step: 499, Loss: 0.0007992316968739033, Computation time: 0.09975647926330566
Step: 0, Loss: 0.02832634374499321, Computation time: 24.61294913291931
Step: 100, Loss: 0.005440382286906242, Computation time: 0.40324854850769043
Step: 200, Loss: 0.004360489547252655, Computation time: 0.43680524826049805
Step: 300, Loss: 0.001799552352167666, Computation time: 0.4346010684967041
Step: 400, Loss: 0.0017023109830915928, Computation time: 0.437793493270874
Step: 499, Loss: 0.0011540694395080209, Computation time: 0.42920470237731934
</code>
| {
"repository": "patrick-kidger/diffrax",
"path": "examples/symbolic_regression.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 377,
"size": 59208,
"hexsha": "cb25ab7422736d1fb205489118f8290cb58badf8",
"max_line_length": 46340,
"avg_line_length": 166.7830985915,
"alphanum_fraction": 0.8792899608
} |
# Notebook from ShepherdCode/Soars2021
Path: Notebooks/ORF_MLP_117.ipynb
# ORF MLP
Trying to fix bugs.
NEURONS=128 and K={1,2,3}.
_____no_output_____
<code>
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()2021-07-25 20:40:39 UTC
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=8000
NC_TESTS=8000
RNA_LEN=1000
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=128
DROP_RATE=0.01
EPOCHS=100 # 1000 # 200
SPLITS=5
FOLDS=5 # make this 5 for serious testing_____no_output_____import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model_____no_output_____import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import Collection_Generator, Transcript_Oracle
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.RNA_gen import Collection_Generator, Transcript_Oracle
from SimTools.KmerTools import KmerTools
BESTMODELPATH=DATAPATH+"BestModel" # saved on cloud instance and lost after logout
LASTMODELPATH=DATAPATH+"LastModel" # saved on Google Drive but requires loginOn Google CoLab, mount cloud-local file, get our code from GitHub.
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
</code>
## Data Load
_____no_output_____
<code>
show_time()
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
pcgen.set_seq_oracle(Transcript_Oracle())
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(RNA_LEN)
pc_all = pc_sim.get_sequences(PC_TRAINS+PC_TESTS)
nc_all = nc_sim.get_sequences(NC_TRAINS+NC_TESTS)
print("Generated",len(pc_all),"PC seqs")
print("Generated",len(nc_all),"NC seqs")2021-07-25 20:40:40 UTC
Generated 16000 PC seqs
Generated 16000 NC seqs
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
show_time()Simulated sequences prior to adjustment:
PC seqs
Average RNA length: 1000.0
Average ORF length: 675.05625
NC seqs
Average RNA length: 1000.0
Average ORF length: 180.3455625
2021-07-25 20:41:00 UTC
</code>
## Data Prep_____no_output_____
<code>
# Any portion of a shuffled list is a random selection
pc_train=pc_all[:PC_TRAINS]
nc_train=nc_all[:NC_TRAINS]
pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
nc_test=nc_all[NC_TRAINS:NC_TRAINS+PC_TESTS]
print("PC train, NC train:",len(pc_train),len(nc_train))
print("PC test, NC test:",len(pc_test),len(nc_test))
# Garbage collection
pc_all=None
nc_all=None
print("First PC train",pc_train[0])
print("First PC test",pc_test[0])PC train, NC train: 8000 8000
PC test, NC test: 8000 8000
First PC train GCCGAAGCGTCTGTTTTCGCACGCTCAGCCATGTCTAACCCTCCCCACTGCGTGGGAATTCCTTCCCCCGTATAGCCTTGCCTGCCAAGGTCTCGACGCTCACGCCGAAGCCGCCGTAGATGCCATTTCATGAAGATCCTGACCTAGACTTGACAATTTTAGATATGTACCGGTATTGCGCGTGTGCATCCCCCCACCGCCTTCTGCGTAGGCATTACGCTATGATTCTTAACAATGGGGTAACAAGAGAAGTTTGCAAAGACACAATGTGCGATCCTCCACCTAAATGTATCCTGGCAGCACCATCACCGCGCATGGTAGTCTTGTCGGGTACGTTTCGGCTTGGCACGATTGTAATCGTGCGCTCCATCCACCTCCCCTCTACAGTTAGATCTGCCGTGAAAGAAGTCAATTTCCAATACCGGTATCGAATAGACATCGGCGATTGCAACAAAAACACTCGAAGCTATGCCAGTGCAAGGACCGTGACTAAGCCAAGCATCGTTCGGTGGTCAGGAAGACATGCCCAAGTCCCCAGCTTTAATGGTACTTCGCCCGGGAATTTCGATACCACCTGGGCTCTCTCGCCTAGATCCAGACCTCCAGCGGCGAATCTTCACAACGACTTGTGTATGCTGGTGTGCGACCGGTTTAATAAAATCTCGAGCCAGTTTGTGATCCCTAGGTCGGTGTCGACGCGGATGAATTATGAGGCAGTGGTTCAACGGGCACTGACACAAGCAGTTAGTAGGTCCACCACGCCGGTTATTACTGGAAGGTGGAGGGTCAAACTGTGTCAAGTTTACATGCCAACCACCCTGAAGTATCTTCATGTTAATTCAACGCGGTATTTCAAAGCTCCCGACTCAGACCCAGGGTGAAAGATTCCAGGTGTAACGAGTCACTGACGAAGAATCTCAGGCCGGGTATTCCGTCCTTAGGGCAGCTACCTGTCCTAACACAAATCCTTTGGAGACTTTAATGCCTGATGGGAAGCGTA
First PC test GGGTCCTGGTTTTTCAGACAAGGTTACAACTACCCTTCGCAATCGAAGCCGTATTAGGCGTAGTTCATTTTGACGCCGAGGCTACTTATGCGCAGCGGCAATCTAGCGAATATGTTTCCGCATCCTACGGTCTGGCTAAACGAAAGATGCGGTCTTTACGCGAATGAGCAGAATTAGACAGTGCGTGTGATTACTGGCGGACTGTAAGTATCTGATTGAATGCCCTAGATGGACTGTTTCGATGGTGTCTTTAGTTTGCTTGATCACCATCCATCGGGTCTAGGCCTCTGGGTGGTAAAGGTGAGGGTCCCCAAAGCTTTGAATACCTGGTACGGTTATAGCGTTTGCCTTGCCTCAACGGATGGTAGGCTCAGCGAAGAGACTATGGCACCGCGAACAAAAACAATGCCTCAAATGCTGGCCCTTGCGCATATACAGGCTGTGGAGGGACAGAGACAAGAAGCACGCCACAGGTGTCCCGGCGCCGAGGGGCCTTTTAGACGAACGATAGGTGTCTCCGATACCCGAAAACTAGGCTGGTTCTTGACGGACTGCGTCTTCGTTAAAGAAACGCTACCGGAGTCGGTCTATAGAGAGTGTAGTACTTCCCCCGAGGAGATTTCGTCTAACATTCATATAGTGAGTCATCCGGGACCCGAAACCACCCCGACACCCGAGACGAGAAGGCACGCGCTGATCTTACTTTGCGATACTCAGCACACCGGCGACGCGGAACCTTGCTTGGGGTGCTATTGTGCGCAATGGCACTAGTCTCTCAGTCGCGTCGCTTAAACCAGAAATGCTCCCCGGCGGGCAGTCGGGTACTCACACACCGTACAGTGATCTAAACTGGCCCGATGCTGTTAAGTCACACTGGCCCTCTACGGATTATGAGCTGACTTCCGTAATTGCCCATTGACTAGGGCACGTCCATTCACCTCTATCACCTCTCCCCTATAAACCACGCCCGAATTCAGCTGCGGTACCTTACTGGCCTCTG
def prepare_x_and_y(seqs1,seqs0):
len1=len(seqs1)
len0=len(seqs0)
total=len1+len0
L1=np.ones(len1,dtype=np.int8)
L0=np.zeros(len0,dtype=np.int8)
S1 = np.asarray(seqs1)
S0 = np.asarray(seqs0)
all_labels = np.concatenate((L1,L0))
all_seqs = np.concatenate((S1,S0))
for i in range(0,len0):
all_labels[i*2] = L0[i]
all_seqs[i*2] = S0[i]
all_labels[i*2+1] = L1[i]
all_seqs[i*2+1] = S1[i]
return all_seqs,all_labels # use this to test unshuffled
# bug in next line? (note: the early return above makes this code unreachable)
X,y = shuffle(all_seqs,all_labels) # sklearn.utils.shuffle
#Doesn't fix it
#X = shuffle(all_seqs,random_state=3) # sklearn.utils.shuffle
#y = shuffle(all_labels,random_state=3) # sklearn.utils.shuffle
return X,y
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print(Xseq[:3])
print(y[:3])
# Tests:
show_time()['TGTCCAGTATATGCGCCGAGGGGTCGCGTGCCGTCCAAATCACCTGATGCATAAAGAACGCGTTAGGAAAAATTCGGTTGGGCAGTGCGATACACTTTTAAGTCTAGGTGCACGACTCCGATTCGATGGTTGCCAACGAGGGCTACTTATGAACTATTTTGGGCTGCCCGCTAGATCTGCAAGCGTACCTTAGAATATACGCCACCAACGATCAAGCGTGTCTCCCGGGTCTGTCTGTTCATCCCCGAAGTAAGTATGCGACCGGACTTTGTCTTTGCATAAAAGGGGTGCGCAGACCCCCACGCAAAAGGGCCTGGTGGAAAAAAGGCTTCGGATTGTAAACTAAATGACGGCTGCTTTTGATAGCAGATTGAACCTGTTGGGTCCAAAATCTCCAGAGTTGGCGCGGACGGTGCGTTGTAATGTTGTTACAACCTAGTTTCACTTATACATCGGACTTAGAGAGAAATCACGTGAATTTTGCGTGAACCATGGCGTAGCTGTATTCCACGAGTGAGGTTCTGGGACTTCACGTTCGACCATCAATCTGTCGCATTCTACCGATAGGTCTCGGCTATTGTAACGTAGCATTATTATCTTAGTCACGGAACCTTTATGAGGCGCCAAAAGAGCTACATCGCCGTGAGGTCGCGTTTTCCTCCGTCTGTACTATAGTATCCTCACTTGCAGATCCACGGGAAATCTAGTGAAGACGTACGTCCCTTAACCGTGATATCGTTGATGAGAATTCCTGGGTATCACGTCTGCCCAGAGGTCTCATGTGTGCGTGCACAGAGTCGTGACCCGTTAGTATAATTTCTTCATGTATAGAGAGGTTTCTTTTGCTGCACTAGATCAGAGGATCGTAGGATTTTTGCAGCTGCTGCATCAATAAGTGCAATTGGCGGAAGCTTAACCGATCGTTAGGCAAGACTCCACTGGAACTTGCCGGGTCGACAGATACGCTGGAAATGCTCCTGGGTAAGCGTGCACACAAAAC'
'GCCGAAGCGTCTGTTTTCGCACGCTCAGCCATGTCTAACCCTCCCCACTGCGTGGGAATTCCTTCCCCCGTATAGCCTTGCCTGCCAAGGTCTCGACGCTCACGCCGAAGCCGCCGTAGATGCCATTTCATGAAGATCCTGACCTAGACTTGACAATTTTAGATATGTACCGGTATTGCGCGTGTGCATCCCCCCACCGCCTTCTGCGTAGGCATTACGCTATGATTCTTAACAATGGGGTAACAAGAGAAGTTTGCAAAGACACAATGTGCGATCCTCCACCTAAATGTATCCTGGCAGCACCATCACCGCGCATGGTAGTCTTGTCGGGTACGTTTCGGCTTGGCACGATTGTAATCGTGCGCTCCATCCACCTCCCCTCTACAGTTAGATCTGCCGTGAAAGAAGTCAATTTCCAATACCGGTATCGAATAGACATCGGCGATTGCAACAAAAACACTCGAAGCTATGCCAGTGCAAGGACCGTGACTAAGCCAAGCATCGTTCGGTGGTCAGGAAGACATGCCCAAGTCCCCAGCTTTAATGGTACTTCGCCCGGGAATTTCGATACCACCTGGGCTCTCTCGCCTAGATCCAGACCTCCAGCGGCGAATCTTCACAACGACTTGTGTATGCTGGTGTGCGACCGGTTTAATAAAATCTCGAGCCAGTTTGTGATCCCTAGGTCGGTGTCGACGCGGATGAATTATGAGGCAGTGGTTCAACGGGCACTGACACAAGCAGTTAGTAGGTCCACCACGCCGGTTATTACTGGAAGGTGGAGGGTCAAACTGTGTCAAGTTTACATGCCAACCACCCTGAAGTATCTTCATGTTAATTCAACGCGGTATTTCAAAGCTCCCGACTCAGACCCAGGGTGAAAGATTCCAGGTGTAACGAGTCACTGACGAAGAATCTCAGGCCGGGTATTCCGTCCTTAGGGCAGCTACCTGTCCTAACACAAATCCTTTGGAGACTTTAATGCCTGATGGGAAGCGTA'
'AGTGTATATGTCTCCTTGTGTAACCAACCTCGAAATTTCAGAGTGCGTCGGGTGAATCTTAGGGCTTATTTCTTTTACCATATCCAAGATATTTTGGGCACGTACATGCAACGGCTGCGGTAGATTTCTCTACGACGTACAGGCCACCGCTATAAGTTGTGTCCCGAATGGACAATGCGGGTTTGTAGCCAATTATGTACGGGTAACAAACGGCCGGCGACAGAACAGGTATCTGTCTAATTATAGGTTTCTGACCAGAAGGCACAGTTGTTCGGGCAACGAGCTCGTGCCGTCGTTGGGCATTGCGATTTGACAACGCCAGGTTGGGCATATGAGTATGGCTTGGAGGACAGTTGACGCTTATCTTGGGGTTAAAGTTGAACGACGAGGCCGCACGATTATCTAGTTGAGAAGACGATTGCCTCATGGTGAGTAGCCGCAGTGGAGCGCCGCGCCTCCCTTATACGTGGGAGTCATATCGATAGAAGGGCGGTTAGCTAGATTCGCTGTGAAAGTTAAATGGTTCCTGCCGCCACATCGGTTTAAGCAGTGGTCCCATCGGAAATCAGTTAGCCGGCCCGCGGTACGGACTTTTGCCTTATTTTCTGCTGTTCGTTAAGGCGATGCGGTCGCCCTATTCGAATTCGCGAGTTGACCTGGTACTAAACACGATGACCTCGATTCTATGTTAGAATCGCCCGAAACCCTACCCCCCGTGCGTTTGTGAGCACTCAGAAAAAGGACGTGCTAGCTGCCTGATGAACCTGTACTAGGCGTGTGAAATCAACCTGGATTACAAGCGCGCCTGCAGGACCGTCTCAAATGTGCTATACCCGAGGGCGCAATGCGACCGCCGGGCCTCAAGGTGCTTTGCATCAAATTCATCAAGGTCGCCATCCCTGGTTAGGCGTCGTGATCGAGTAGAATTACACTTCATGCTCAAATCGTTTTCATAGAATCTTACACAGTACTCCGTGATCCGCCAAGACATCGTTGAAAC']
[0 1 0]
2021-07-25 20:41:00 UTC
def seqs_to_kmer_freqs(seqs,max_K):
tool = KmerTools() # from SimTools
collection = []
for seq in seqs:
counts = tool.make_dict_upto_K(max_K)
# Last param should be True when using Harvester.
counts = tool.update_count_one_K(counts,max_K,seq,True)
# Given counts for K=3, Harvester fills in counts for K=1,2.
counts = tool.harvest_counts_from_K(counts,max_K)
fdict = tool.count_to_frequency(counts,max_K)
freqs = list(fdict.values())
collection.append(freqs)
return np.asarray(collection)
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print(Xfrq[:3])
show_time()[[0.247 0.23 0.256 0.267 0.06806807 0.05605606
0.06006006 0.06306306 0.05505506 0.05005005 0.06506507 0.05905906
0.06406406 0.06106106 0.05805806 0.07307307 0.06006006 0.06306306
0.07307307 0.07107107 0.0240481 0.01402806 0.01503006 0.01503006
0.00801603 0.01402806 0.01803607 0.01503006 0.02104208 0.00901804
0.01603206 0.01402806 0.01503006 0.02004008 0.01302605 0.01503006
0.01402806 0.01803607 0.01102204 0.01202405 0.01503006 0.01002004
0.01402806 0.01102204 0.01402806 0.01302605 0.01302605 0.0250501
0.01202405 0.00901804 0.02004008 0.01803607 0.01703407 0.01302605
0.01603206 0.01803607 0.01803607 0.01002004 0.01803607 0.01503006
0.01402806 0.01202405 0.01503006 0.01703407 0.01803607 0.01903808
0.01903808 0.01703407 0.01302605 0.01102204 0.01803607 0.01803607
0.01402806 0.01603206 0.01503006 0.01803607 0.01503006 0.02705411
0.01402806 0.01703407 0.01503006 0.01503006 0.02004008 0.02104208]
[0.244 0.274 0.239 0.243 0.06606607 0.06006006
0.05805806 0.05905906 0.07107107 0.08308308 0.06106106 0.05905906
0.05705706 0.06206206 0.05405405 0.06606607 0.05005005 0.06906907
0.06506507 0.05905906 0.01202405 0.01202405 0.02304609 0.01903808
0.01603206 0.01903808 0.01302605 0.01202405 0.01703407 0.01503006
0.01302605 0.01302605 0.00601202 0.01803607 0.01903808 0.01603206
0.02204409 0.02004008 0.01603206 0.01302605 0.0240481 0.02004008
0.01703407 0.02204409 0.01703407 0.01603206 0.01503006 0.01302605
0.01102204 0.01703407 0.01603206 0.01503006 0.01903808 0.01803607
0.00601202 0.01402806 0.01503006 0.02104208 0.01503006 0.01102204
0.00901804 0.01102204 0.01202405 0.02204409 0.01903808 0.01603206
0.01903808 0.01202405 0.01302605 0.01002004 0.01302605 0.01302605
0.01603206 0.02304609 0.01603206 0.01402806 0.01402806 0.01903808
0.01402806 0.01803607 0.01402806 0.01803607 0.01102204 0.01603206]
[0.237 0.239 0.264 0.26 0.06106106 0.05405405
0.05805806 0.06406406 0.05405405 0.05905906 0.07407407 0.05105105
0.06206206 0.06906907 0.06206206 0.07107107 0.05905906 0.05705706
0.07007007 0.07407407 0.01603206 0.01402806 0.01402806 0.01703407
0.01603206 0.01402806 0.01703407 0.00601202 0.01402806 0.01002004
0.01703407 0.01703407 0.01102204 0.01903808 0.01603206 0.01803607
0.01703407 0.00901804 0.01402806 0.01402806 0.01102204 0.01302605
0.01803607 0.01703407 0.02004008 0.02004008 0.01603206 0.01803607
0.01202405 0.01202405 0.01402806 0.01302605 0.01903808 0.01603206
0.01302605 0.01402806 0.01302605 0.02304609 0.01903808 0.01402806
0.01002004 0.01903808 0.01402806 0.01903808 0.01603206 0.01102204
0.02004008 0.0240481 0.00901804 0.01503006 0.01603206 0.01903808
0.01402806 0.00901804 0.02004008 0.01402806 0.01803607 0.02004008
0.01503006 0.01703407 0.02004008 0.01503006 0.02004008 0.01903808]]
2021-07-25 20:41:10 UTC
</code>
## Neural network_____no_output_____
<code>
def make_DNN():
dt=np.float32
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt)) # relu doesn't work as well
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=dt))
dnn.compile(optimizer='adam', # adadelta doesn't work as well
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
return dnn
model = make_DNN()
print(model.summary())make_DNN
input shape: (None, 84)
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 128) 10880
_________________________________________________________________
dropout_4 (Dropout) (None, 128) 0
_________________________________________________________________
dense_7 (Dense) (None, 128) 16512
_________________________________________________________________
dropout_5 (Dropout) (None, 128) 0
_________________________________________________________________
dense_8 (Dense) (None, 1) 129
=================================================================
Total params: 27,521
Trainable params: 27,521
Non-trainable params: 0
_________________________________________________________________
None
def do_cross_validation(X,y):
cv_scores = []
fold=0
#mycallbacks = [ModelCheckpoint(
# filepath=MODELPATH, save_best_only=True,
# monitor='val_accuracy', mode='max')]
# When shuffle=True, the valid indices are a random subset.
splitter = KFold(n_splits=SPLITS,shuffle=True)
model = None
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
# callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
return model # parameters at end of training_____no_output_____show_time()
last_model = do_cross_validation(Xfrq,y)2021-07-25 20:41:10 UTC
MODEL
make_DNN
input shape: (None, 84)
FIT
Epoch 1/100
400/400 [==============================] - 2s 4ms/step - loss: 0.7090 - accuracy: 0.5057 - val_loss: 0.6931 - val_accuracy: 0.4953
Epoch 2/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6962 - accuracy: 0.5065 - val_loss: 0.6985 - val_accuracy: 0.5047
Epoch 3/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6944 - accuracy: 0.5142 - val_loss: 0.7140 - val_accuracy: 0.4953
Epoch 4/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6942 - accuracy: 0.5208 - val_loss: 0.6871 - val_accuracy: 0.5738
Epoch 5/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6905 - accuracy: 0.5278 - val_loss: 0.6842 - val_accuracy: 0.4991
Epoch 6/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6831 - accuracy: 0.5569 - val_loss: 0.6789 - val_accuracy: 0.5097
Epoch 7/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6725 - accuracy: 0.5972 - val_loss: 0.6594 - val_accuracy: 0.6191
Epoch 8/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6548 - accuracy: 0.6315 - val_loss: 0.6332 - val_accuracy: 0.6597
Epoch 9/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6374 - accuracy: 0.6467 - val_loss: 0.6083 - val_accuracy: 0.6909
Epoch 10/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6177 - accuracy: 0.6625 - val_loss: 0.5976 - val_accuracy: 0.6847
Epoch 11/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6092 - accuracy: 0.6627 - val_loss: 0.5830 - val_accuracy: 0.6984
Epoch 12/100
400/400 [==============================] - 1s 3ms/step - loss: 0.6056 - accuracy: 0.6692 - val_loss: 0.5859 - val_accuracy: 0.6812
Epoch 13/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5977 - accuracy: 0.6799 - val_loss: 0.5871 - val_accuracy: 0.6797
Epoch 14/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5881 - accuracy: 0.6803 - val_loss: 0.5759 - val_accuracy: 0.6947
Epoch 15/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5788 - accuracy: 0.7004 - val_loss: 0.5642 - val_accuracy: 0.7047
Epoch 16/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5785 - accuracy: 0.6947 - val_loss: 0.5779 - val_accuracy: 0.6872
Epoch 17/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5874 - accuracy: 0.6841 - val_loss: 0.5681 - val_accuracy: 0.6994
Epoch 18/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5837 - accuracy: 0.6896 - val_loss: 0.5578 - val_accuracy: 0.7103
Epoch 19/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5715 - accuracy: 0.7017 - val_loss: 0.5600 - val_accuracy: 0.7084
Epoch 20/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5714 - accuracy: 0.6999 - val_loss: 0.5648 - val_accuracy: 0.7022
Epoch 21/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5741 - accuracy: 0.6985 - val_loss: 0.5534 - val_accuracy: 0.7141
Epoch 22/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5743 - accuracy: 0.6900 - val_loss: 0.5578 - val_accuracy: 0.7094
Epoch 23/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5685 - accuracy: 0.6995 - val_loss: 0.5580 - val_accuracy: 0.7072
Epoch 24/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5663 - accuracy: 0.7050 - val_loss: 0.5563 - val_accuracy: 0.7094
Epoch 25/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5610 - accuracy: 0.7075 - val_loss: 0.5467 - val_accuracy: 0.7178
Epoch 26/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5640 - accuracy: 0.7046 - val_loss: 0.5664 - val_accuracy: 0.6997
Epoch 27/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5699 - accuracy: 0.6962 - val_loss: 0.5446 - val_accuracy: 0.7175
Epoch 28/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5617 - accuracy: 0.7093 - val_loss: 0.5437 - val_accuracy: 0.7209
Epoch 29/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5518 - accuracy: 0.7202 - val_loss: 0.5551 - val_accuracy: 0.7088
Epoch 30/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5628 - accuracy: 0.7013 - val_loss: 0.5427 - val_accuracy: 0.7228
Epoch 31/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5573 - accuracy: 0.7130 - val_loss: 0.5446 - val_accuracy: 0.7188
Epoch 32/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5597 - accuracy: 0.7096 - val_loss: 0.5658 - val_accuracy: 0.7034
Epoch 33/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5573 - accuracy: 0.7122 - val_loss: 0.5482 - val_accuracy: 0.7159
Epoch 34/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5541 - accuracy: 0.7097 - val_loss: 0.5428 - val_accuracy: 0.7191
Epoch 35/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5521 - accuracy: 0.7162 - val_loss: 0.5391 - val_accuracy: 0.7262
Epoch 36/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5544 - accuracy: 0.7132 - val_loss: 0.5498 - val_accuracy: 0.7153
Epoch 37/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5552 - accuracy: 0.7144 - val_loss: 0.5392 - val_accuracy: 0.7262
Epoch 38/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5486 - accuracy: 0.7141 - val_loss: 0.5377 - val_accuracy: 0.7294
Epoch 39/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5510 - accuracy: 0.7138 - val_loss: 0.5713 - val_accuracy: 0.6997
Epoch 40/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5511 - accuracy: 0.7139 - val_loss: 0.5414 - val_accuracy: 0.7188
Epoch 41/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5515 - accuracy: 0.7139 - val_loss: 0.5411 - val_accuracy: 0.7194
Epoch 42/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5518 - accuracy: 0.7085 - val_loss: 0.5687 - val_accuracy: 0.6975
Epoch 43/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5633 - accuracy: 0.7030 - val_loss: 0.5411 - val_accuracy: 0.7197
Epoch 44/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5500 - accuracy: 0.7157 - val_loss: 0.5382 - val_accuracy: 0.7244
Epoch 45/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5461 - accuracy: 0.7214 - val_loss: 0.5366 - val_accuracy: 0.7291
Epoch 46/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5440 - accuracy: 0.7185 - val_loss: 0.5374 - val_accuracy: 0.7241
Epoch 47/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5461 - accuracy: 0.7199 - val_loss: 0.5351 - val_accuracy: 0.7341
Epoch 48/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5470 - accuracy: 0.7168 - val_loss: 0.5461 - val_accuracy: 0.7191
Epoch 49/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5496 - accuracy: 0.7147 - val_loss: 0.5454 - val_accuracy: 0.7184
Epoch 50/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5473 - accuracy: 0.7123 - val_loss: 0.5577 - val_accuracy: 0.7059
Epoch 51/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5483 - accuracy: 0.7170 - val_loss: 0.5350 - val_accuracy: 0.7303
Epoch 52/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5421 - accuracy: 0.7201 - val_loss: 0.5404 - val_accuracy: 0.7231
Epoch 53/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5504 - accuracy: 0.7165 - val_loss: 0.5345 - val_accuracy: 0.7316
Epoch 54/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5448 - accuracy: 0.7240 - val_loss: 0.5406 - val_accuracy: 0.7225
Epoch 55/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5424 - accuracy: 0.7258 - val_loss: 0.5386 - val_accuracy: 0.7197
Epoch 56/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5380 - accuracy: 0.7269 - val_loss: 0.5538 - val_accuracy: 0.7084
Epoch 57/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5461 - accuracy: 0.7165 - val_loss: 0.5351 - val_accuracy: 0.7309
Epoch 58/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5421 - accuracy: 0.7266 - val_loss: 0.5481 - val_accuracy: 0.7138
Epoch 59/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5362 - accuracy: 0.7255 - val_loss: 0.5376 - val_accuracy: 0.7228
Epoch 60/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5436 - accuracy: 0.7217 - val_loss: 0.5464 - val_accuracy: 0.7122
Epoch 61/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5402 - accuracy: 0.7268 - val_loss: 0.5374 - val_accuracy: 0.7219
Epoch 62/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5422 - accuracy: 0.7246 - val_loss: 0.5346 - val_accuracy: 0.7319
Epoch 63/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5422 - accuracy: 0.7183 - val_loss: 0.5350 - val_accuracy: 0.7312
Epoch 64/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5467 - accuracy: 0.7203 - val_loss: 0.5325 - val_accuracy: 0.7331
Epoch 65/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5479 - accuracy: 0.7197 - val_loss: 0.5616 - val_accuracy: 0.7053
Epoch 66/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5481 - accuracy: 0.7246 - val_loss: 0.5456 - val_accuracy: 0.7131
Epoch 67/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5462 - accuracy: 0.7223 - val_loss: 0.5341 - val_accuracy: 0.7337
Epoch 68/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5438 - accuracy: 0.7231 - val_loss: 0.5494 - val_accuracy: 0.7122
Epoch 69/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5353 - accuracy: 0.7283 - val_loss: 0.5336 - val_accuracy: 0.7328
Epoch 70/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5425 - accuracy: 0.7248 - val_loss: 0.5519 - val_accuracy: 0.7113
Epoch 71/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5409 - accuracy: 0.7229 - val_loss: 0.5399 - val_accuracy: 0.7175
Epoch 72/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5540 - accuracy: 0.7089 - val_loss: 0.5327 - val_accuracy: 0.7337
Epoch 73/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5416 - accuracy: 0.7285 - val_loss: 0.5401 - val_accuracy: 0.7244
Epoch 74/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5404 - accuracy: 0.7269 - val_loss: 0.5449 - val_accuracy: 0.7172
Epoch 75/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5424 - accuracy: 0.7290 - val_loss: 0.5322 - val_accuracy: 0.7331
Epoch 76/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5467 - accuracy: 0.7201 - val_loss: 0.5335 - val_accuracy: 0.7341
Epoch 77/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5358 - accuracy: 0.7293 - val_loss: 0.5328 - val_accuracy: 0.7344
Epoch 78/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5425 - accuracy: 0.7231 - val_loss: 0.5327 - val_accuracy: 0.7350
Epoch 79/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5355 - accuracy: 0.7279 - val_loss: 0.5410 - val_accuracy: 0.7159
Epoch 80/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5425 - accuracy: 0.7228 - val_loss: 0.5387 - val_accuracy: 0.7188
Epoch 81/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5400 - accuracy: 0.7266 - val_loss: 0.5334 - val_accuracy: 0.7353
Epoch 82/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5429 - accuracy: 0.7237 - val_loss: 0.5325 - val_accuracy: 0.7334
Epoch 83/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5438 - accuracy: 0.7192 - val_loss: 0.5330 - val_accuracy: 0.7350
Epoch 84/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5422 - accuracy: 0.7267 - val_loss: 0.5423 - val_accuracy: 0.7163
Epoch 85/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5379 - accuracy: 0.7265 - val_loss: 0.5329 - val_accuracy: 0.7344
Epoch 86/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5460 - accuracy: 0.7214 - val_loss: 0.5381 - val_accuracy: 0.7269
Epoch 87/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5382 - accuracy: 0.7266 - val_loss: 0.5379 - val_accuracy: 0.7269
Epoch 88/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5344 - accuracy: 0.7370 - val_loss: 0.5337 - val_accuracy: 0.7359
Epoch 89/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5432 - accuracy: 0.7232 - val_loss: 0.5334 - val_accuracy: 0.7356
Epoch 90/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5351 - accuracy: 0.7290 - val_loss: 0.5335 - val_accuracy: 0.7328
Epoch 91/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5400 - accuracy: 0.7250 - val_loss: 0.5435 - val_accuracy: 0.7150
Epoch 92/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5382 - accuracy: 0.7255 - val_loss: 0.5323 - val_accuracy: 0.7328
Epoch 93/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5344 - accuracy: 0.7237 - val_loss: 0.5339 - val_accuracy: 0.7319
Epoch 94/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5305 - accuracy: 0.7371 - val_loss: 0.5334 - val_accuracy: 0.7322
Epoch 95/100
400/400 [==============================] - 1s 4ms/step - loss: 0.5419 - accuracy: 0.7281 - val_loss: 0.5327 - val_accuracy: 0.7319
Epoch 96/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5403 - accuracy: 0.7275 - val_loss: 0.5347 - val_accuracy: 0.7294
Epoch 97/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5359 - accuracy: 0.7288 - val_loss: 0.5342 - val_accuracy: 0.7297
Epoch 98/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5326 - accuracy: 0.7276 - val_loss: 0.5325 - val_accuracy: 0.7319
Epoch 99/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5361 - accuracy: 0.7324 - val_loss: 0.5380 - val_accuracy: 0.7275
Epoch 100/100
400/400 [==============================] - 1s 3ms/step - loss: 0.5346 - accuracy: 0.7264 - val_loss: 0.5330 - val_accuracy: 0.7337
Fold 1, 100 epochs, 142 sec
def show_test_AUC(model,X,y):
ns_probs = [0 for _ in range(len(y))]
bm_probs = model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
def show_test_accuracy(model,X,y):
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
_____no_output_____print("Accuracy on training data.")
print("Prepare...")
show_time()
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print("Extract K-mer features...")
show_time()
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print("Plot...")
show_time()
show_test_AUC(last_model,Xfrq,y)
show_test_accuracy(last_model,Xfrq,y)
show_time()Accuracy on training data.
Prepare...
2021-07-25 20:52:57 UTC
Extract K-mer features...
2021-07-25 20:52:58 UTC
Plot...
2021-07-25 20:53:07 UTC
print("Accuracy on test data.")
print("Prepare...")
show_time()
Xseq,y=prepare_x_and_y(pc_test,nc_test)
print("Extract K-mer features...")
show_time()
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print("Plot...")
show_time()
show_test_AUC(last_model,Xfrq,y)
show_test_accuracy(last_model,Xfrq,y)
show_time()Accuracy on test data.
Prepare...
2021-07-25 20:53:09 UTC
Extract K-mer features...
2021-07-25 20:53:09 UTC
Plot...
2021-07-25 20:53:19 UTC
</code>
| {
"repository": "ShepherdCode/Soars2021",
"path": "Notebooks/ORF_MLP_117.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 1,
"size": 318676,
"hexsha": "cb277b0c46f2f0def4903c6ad691d666cf9b917f",
"max_line_length": 31258,
"avg_line_length": 171.7004310345,
"alphanum_fraction": 0.774203266
} |
# Notebook from tech4germany/bam-inclusify
Path: data/vienna_catalog/main.ipynb
This notebook downloads and processes the gender data from the Vienna catalog. The data comes from a [gendering add-in for Microsoft Word 2010](https://web.archive.org/web/20210629142645/https:/archive.codeplex.com/?p=gendering) that has been developed by Microsoft. The data includes two styles (double notation and inner I).
Some more manual normalization would be necessary to make this data useful for our project. For example, the inner I and double notation forms can both be derived from just the female form (in addition to the male form, which is already given by the replaced word), and entries for the same word but with different cases could be reduced to a single entry.
The data is highly relevant for this project, because it has been created in a government context as well, and includes many government-specific words._____no_output_____
<code>
import io
import pandas as pd
import re
import requests
from typing import *
import sys
sys.path.insert(0, "..")
from helpers import add_to_dict, log
from helpers_csv import csvs_to_list, dict_to_csvs_____no_output_____csv = requests.get(
"https://www.data.gv.at/katalog/dataset/15d6ede8-f128-4fcd-aa3a-4479e828f477/resource/804f6db1-add7-4480-b4d0-e52e61c48534/download/worttabelle.csv"
).content
text = re.sub(";;\r\n", "\n", csv.decode("utf-8"))
df = pd.read_csv(io.StringIO(text))
df.head()_____no_output_____df.to_csv(
"vienna_catalog_raw.csv",
index=False,
)_____no_output_____
</code>
We change Binnen-I to gender star to have one simple style, and we try to attribute singular and plural as well as possible:_____no_output_____
<code>
dic: Dict[str, Dict[str, List[str]]] = {"sg": {}, "pl": {}}
for (_, _, ungendered, gendered, binnenI) in df.to_records():
if binnenI == "Y":
gendered = re.sub(r"([a-zäöüß])I", r"\1*i", gendered)
if type(gendered) == str: # skip ill-formatted rows
if (
re.findall(r"[iI]n( .*)?$", gendered) != []
or re.findall(r" bzw\.? ", gendered) != []
):
add_to_dict(ungendered, [gendered], dic["sg"])
elif (
re.findall(r"[iI]nnen( .*)?$", gendered) != []
or re.findall(r" und ", gendered) != []
):
add_to_dict(ungendered, [gendered], dic["pl"])
else:
add_to_dict(ungendered, [gendered], dic["sg"])
add_to_dict(ungendered, [gendered], dic["pl"])_____no_output_____dict_to_csvs(dic, "vienna_catalog")_____no_output_____
</code>
We can read this CSV back to a Python dictionary with the following method:_____no_output_____
<code>
list_ = csvs_to_list("vienna_catalog")
list_[:5]_____no_output_____
</code>
| {
"repository": "tech4germany/bam-inclusify",
"path": "data/vienna_catalog/main.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 11,
"size": 8118,
"hexsha": "cb27a7ebc9f9b90ccc6a21e9c36c6061b8f5f910",
"max_line_length": 366,
"avg_line_length": 33,
"alphanum_fraction": 0.4933481153
} |
# Notebook from docinfosci/canvasxpress-python
Path: tutorials/notebook/cx_site_chart_examples/bubble_4.ipynb
# Example: CanvasXpress bubble Chart No. 4
This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/bubble-4.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block._____no_output_____
<code>
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="bubble4",
data={
"y": {
"vars": [
"CO2"
],
"smps": [
"AFG",
"ALB",
"DZA",
"AND",
"AGO",
"AIA",
"ATG",
"ARG",
"ARM",
"ABW",
"AUS",
"AUT",
"AZE",
"BHS",
"BHR",
"BGD",
"BRB",
"BLR",
"BEL",
"BLZ",
"BEN",
"BMU",
"BTN",
"BOL",
"BIH",
"BWA",
"BRA",
"VGB",
"BRN",
"BGR",
"BFA",
"BDI",
"KHM",
"CMR",
"CAN",
"CPV",
"CAF",
"TCD",
"CHL",
"CHN",
"COL",
"COM",
"COG",
"COK",
"CRI",
"HRV",
"CUB",
"CYP",
"CZE",
"COD",
"DNK",
"DJI",
"DOM",
"ECU",
"EGY",
"SLV",
"GNQ",
"ERI",
"EST",
"ETH",
"FJI",
"FIN",
"FRA",
"PYF",
"GAB",
"GMB",
"GEO",
"DEU",
"GHA",
"GRC",
"GRL",
"GRD",
"GTM",
"GIN",
"GNB",
"GUY",
"HTI",
"HND",
"HKG",
"HUN",
"ISL",
"IND",
"IDN",
"IRN",
"IRQ",
"IRL",
"ISR",
"ITA",
"JAM",
"JPN",
"JOR",
"KAZ",
"KEN",
"KIR",
"KWT",
"KGZ",
"LAO",
"LVA",
"LBN",
"LSO",
"LBR",
"LBY",
"LIE",
"LTU",
"LUX",
"MAC",
"MDG",
"MWI",
"MYS",
"MDV",
"MLI",
"MLT",
"MHL",
"MRT",
"MUS",
"MEX",
"MDA",
"MNG",
"MNE",
"MAR",
"MOZ",
"MMR",
"NAM",
"NRU",
"NPL",
"NLD",
"NCL",
"NZL",
"NIC",
"NER",
"NGA",
"NIU",
"PRK",
"MKD",
"NOR",
"OMN",
"PAK",
"PAN",
"PNG",
"PRY",
"PER",
"PHL",
"POL",
"PRT",
"QAT",
"ROU",
"RUS",
"RWA",
"SHN",
"KNA",
"LCA",
"SPM",
"VCT",
"WSM",
"STP",
"SAU",
"SRB",
"SYC",
"SLE",
"SGP",
"SVK",
"SVN",
"SLB",
"SOM",
"KOR",
"SSD",
"ESP",
"LKA",
"SDN",
"SUR",
"SWE",
"CHE",
"SYR",
"TWN",
"TJK",
"TZA",
"THA",
"TLS",
"TGO",
"TON",
"TTO",
"TUN",
"TUR",
"TKM",
"TUV",
"UGA",
"UKR",
"ARE",
"GBR",
"USA",
"URY",
"UZB",
"VUT",
"VEN",
"VNM",
"YEM",
"ZMB",
"ZWE"
],
"data": [
[
10.452666,
5.402999,
164.309295,
0.46421,
37.678605,
0.147145,
0.505574,
185.029897,
6.296603,
0.943234,
415.953947,
66.719678,
37.488394,
2.03001,
31.594487,
85.718805,
1.207134,
61.871676,
100.207836,
0.612205,
7.759753,
0.648945,
1.662172,
22.345503,
22.086102,
6.815418,
466.649304,
0.173555,
9.560399,
43.551599,
4.140342,
0.568028,
15.479031,
7.566796,
586.504635,
0.609509,
0.300478,
1.008035,
85.829114,
9956.568523,
92.228209,
0.245927,
3.518309,
0.072706,
8.249118,
17.718646,
26.084446,
7.332762,
104.411211,
2.231343,
34.65143,
0.389975,
25.305221,
41.817989,
251.460913,
6.018265,
5.90578,
0.708769,
17.710953,
16.184949,
2.123769,
45.849349,
331.725446,
0.780633,
4.803117,
0.56324,
9.862173,
755.362342,
14.479998,
71.797869,
0.511728,
0.278597,
19.411335,
3.032114,
0.308612,
2.342628,
3.366964,
10.470701,
42.505723,
49.628491,
3.674529,
2591.323739,
576.58439,
755.402186,
211.270294,
38.803394,
62.212641,
348.085029,
8.009662,
1135.688,
24.923803,
319.647412,
17.136703,
0.068879,
104.217567,
10.16888,
32.26245,
7.859287,
27.565431,
2.425558,
1.27446,
45.205986,
0.14375,
13.669492,
9.56852,
2.216456,
4.187806,
1.470252,
249.144498,
1.565092,
3.273276,
1.531581,
0.153065,
3.934804,
4.901611,
451.080829,
5.877784,
64.508256,
2.123147,
65.367444,
8.383478,
26.095603,
4.154302,
0.049746,
13.410432,
160.170147,
8.20904,
35.080341,
5.377193,
2.093847,
136.078346,
0.007653,
38.162935,
6.980909,
43.817657,
71.029916,
247.425382,
12.096333,
6.786146,
8.103032,
54.210259,
138.924391,
337.705742,
51.482481,
109.24468,
76.951219,
1691.360426,
1.080098,
0.011319,
0.249014,
0.362202,
0.079232,
0.264106,
0.267864,
0.126126,
576.757836,
46.0531,
0.60536,
0.987559,
38.28806,
36.087837,
14.487844,
0.298477,
0.658329,
634.934068,
1.539884,
269.654254,
22.973233,
22.372399,
2.551817,
41.766183,
36.895485,
25.877689,
273.104667,
7.473265,
11.501889,
292.452995,
0.520422,
3.167303,
0.164545,
37.865571,
30.357093,
419.194747,
78.034724,
0.01148,
5.384767,
231.694165,
188.541366,
380.138559,
5424.881502,
6.251839,
113.93837,
0.145412,
129.596274,
211.774129,
9.945288,
6.930094,
11.340575
]
]
},
"x": {
"Country": [
"Afghanistan",
"Albania",
"Algeria",
"Andorra",
"Angola",
"Anguilla",
"Antigua and Barbuda",
"Argentina",
"Armenia",
"Aruba",
"Australia",
"Austria",
"Azerbaijan",
"Bahamas",
"Bahrain",
"Bangladesh",
"Barbados",
"Belarus",
"Belgium",
"Belize",
"Benin",
"Bermuda",
"Bhutan",
"Bolivia",
"Bosnia and Herzegovina",
"Botswana",
"Brazil",
"British Virgin Islands",
"Brunei",
"Bulgaria",
"Burkina Faso",
"Burundi",
"Cambodia",
"Cameroon",
"Canada",
"Cape Verde",
"Central African Republic",
"Chad",
"Chile",
"China",
"Colombia",
"Comoros",
"Congo",
"Cook Islands",
"Costa Rica",
"Croatia",
"Cuba",
"Cyprus",
"Czechia",
"Democratic Republic of Congo",
"Denmark",
"Djibouti",
"Dominican Republic",
"Ecuador",
"Egypt",
"El Salvador",
"Equatorial Guinea",
"Eritrea",
"Estonia",
"Ethiopia",
"Fiji",
"Finland",
"France",
"French Polynesia",
"Gabon",
"Gambia",
"Georgia",
"Germany",
"Ghana",
"Greece",
"Greenland",
"Grenada",
"Guatemala",
"Guinea",
"Guinea-Bissau",
"Guyana",
"Haiti",
"Honduras",
"Hong Kong",
"Hungary",
"Iceland",
"India",
"Indonesia",
"Iran",
"Iraq",
"Ireland",
"Israel",
"Italy",
"Jamaica",
"Japan",
"Jordan",
"Kazakhstan",
"Kenya",
"Kiribati",
"Kuwait",
"Kyrgyzstan",
"Laos",
"Latvia",
"Lebanon",
"Lesotho",
"Liberia",
"Libya",
"Liechtenstein",
"Lithuania",
"Luxembourg",
"Macao",
"Madagascar",
"Malawi",
"Malaysia",
"Maldives",
"Mali",
"Malta",
"Marshall Islands",
"Mauritania",
"Mauritius",
"Mexico",
"Moldova",
"Mongolia",
"Montenegro",
"Morocco",
"Mozambique",
"Myanmar",
"Namibia",
"Nauru",
"Nepal",
"Netherlands",
"New Caledonia",
"New Zealand",
"Nicaragua",
"Niger",
"Nigeria",
"Niue",
"North Korea",
"North Macedonia",
"Norway",
"Oman",
"Pakistan",
"Panama",
"Papua New Guinea",
"Paraguay",
"Peru",
"Philippines",
"Poland",
"Portugal",
"Qatar",
"Romania",
"Russia",
"Rwanda",
"Saint Helena",
"Saint Kitts and Nevis",
"Saint Lucia",
"Saint Pierre and Miquelon",
"Saint Vincent and the Grenadines",
"Samoa",
"Sao Tome and Principe",
"Saudi Arabia",
"Serbia",
"Seychelles",
"Sierra Leone",
"Singapore",
"Slovakia",
"Slovenia",
"Solomon Islands",
"Somalia",
"South Korea",
"South Sudan",
"Spain",
"Sri Lanka",
"Sudan",
"Suriname",
"Sweden",
"Switzerland",
"Syria",
"Taiwan",
"Tajikistan",
"Tanzania",
"Thailand",
"Timor",
"Togo",
"Tonga",
"Trinidad and Tobago",
"Tunisia",
"Turkey",
"Turkmenistan",
"Tuvalu",
"Uganda",
"Ukraine",
"United Arab Emirates",
"United Kingdom",
"United States",
"Uruguay",
"Uzbekistan",
"Vanuatu",
"Venezuela",
"Vietnam",
"Yemen",
"Zambia",
"Zimbabwe"
],
"Continent": [
"Asia",
"Europe",
"Africa",
"Europe",
"Africa",
"North America",
"North America",
"South America",
"Asia",
"North America",
"Oceania",
"Europe",
"Europe",
"North America",
"Asia",
"Asia",
"North America",
"Europe",
"Europe",
"North America",
"Africa",
"North America",
"Asia",
"South America",
"Europe",
"Africa",
"South America",
"North America",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Africa",
"North America",
"Africa",
"Africa",
"Africa",
"South America",
"Asia",
"South America",
"Africa",
"Africa",
"Oceania",
"Central America",
"Europe",
"North America",
"Europe",
"Europe",
"Africa",
"Europe",
"Africa",
"North America",
"South America",
"Africa",
"Central America",
"Africa",
"Africa",
"Europe",
"Africa",
"Oceania",
"Europe",
"Europe",
"Oceania",
"Africa",
"Africa",
"Asia",
"Europe",
"Africa",
"Europe",
"North America",
"North America",
"Central America",
"Africa",
"Africa",
"South America",
"North America",
"Central America",
"Asia",
"Europe",
"Europe",
"Asia",
"Asia",
"Asia",
"Asia",
"Europe",
"Asia",
"Europe",
"North America",
"Asia",
"Asia",
"Asia",
"Africa",
"Oceania",
"Asia",
"Asia",
"Asia",
"Europe",
"Asia",
"Africa",
"Africa",
"Africa",
"Europe",
"Europe",
"Europe",
"Asia",
"Africa",
"Africa",
"Asia",
"Asia",
"Africa",
"Europe",
"Oceania",
"Africa",
"Africa",
"North America",
"Europe",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Africa",
"Oceania",
"Asia",
"Europe",
"Oceania",
"Oceania",
"Central America",
"Africa",
"Africa",
"Oceania",
"Asia",
"Europe",
"Europe",
"Asia",
"Asia",
"Central America",
"Oceania",
"South America",
"South America",
"Asia",
"Europe",
"Europe",
"Africa",
"Europe",
"Asia",
"Africa",
"Africa",
"North America",
"North America",
"North America",
"North America",
"Oceania",
"Africa",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Europe",
"Europe",
"Oceania",
"Africa",
"Asia",
"Africa",
"Europe",
"Asia",
"Africa",
"South America",
"Europe",
"Europe",
"Asia",
"Asia",
"Asia",
"Africa",
"Asia",
"Asia",
"Africa",
"Oceania",
"North America",
"Africa",
"Asia",
"Asia",
"Oceania",
"Africa",
"Europe",
"Asia",
"Europe",
"North America",
"South America",
"Asia",
"Oceania",
"South America",
"Asia",
"Asia",
"Africa",
"Africa"
]
}
},
config={
"circularType": "bubble",
"colorBy": "Continent",
"graphType": "Circular",
"hierarchy": [
"Continent",
"Country"
],
"theme": "paulTol",
"title": "Annual CO2 Emmisions in 2018"
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="bubble_4.html")
_____no_output_____
</code>
| {
"repository": "docinfosci/canvasxpress-python",
"path": "tutorials/notebook/cx_site_chart_examples/bubble_4.ipynb",
"matched_keywords": [
"bwa"
],
"stars": 4,
"size": 29278,
"hexsha": "cb28fc30e5a20c165e7bdb093233a94d7ef2a2a5",
"max_line_length": 29278,
"avg_line_length": 29278,
"alphanum_fraction": 0.2460550584
} |
# Notebook from naxvm/ML4all
Path: C3.Classification_LogReg/RegresionLogistica_student.ipynb
# Logistic Regression
Notebook version: 2.0 (Nov 21, 2017)
2.1 (Oct 19, 2018)
Author: Jesús Cid Sueiro ([email protected])
Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
 v.2.0 - Prepared for Python 3.0 (backward compatible with 2.7)
Assumptions for regression model modified
v.2.1 - Minor changes regarding notation and assumptions_____no_output_____
<code>
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
_____no_output_____
</code>
# Logistic Regression
## 1. Introduction
### 1.1. Binary classification and decision theory. The MAP criterion
The goal of a classification problem is to assign a *class* or *category* to every *instance* or *observation* of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or *decision*. If $y=\hat{y}$, the decision is a *hit*, otherwise $y\neq \hat{y}$ and the decision is an *error*.
_____no_output_____
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select the predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum. Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x}
$$
the optimal decision is obtained if, for every sample ${\bf x}$, we make the decision that minimizes the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{\hat{y} \neq Y |{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{\hat{y} = Y |{\bf x}\} \\
\end{align}_____no_output_____
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually referred to as the MAP (*Maximum A Posteriori*) classifier. As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems._____no_output_____### 1.2. Parametric classification.
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal D = \{{\bf x}^{(k)}, y^{(k)}\}_{k=0}^{K-1}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,{K-1}\}$ of independent and identically distributed (i.i.d.) samples from an ***unknown*** distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
_____no_output_____
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probability satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists of comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated with a different decision maker.
_____no_output_____In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this lesson, we explore one of the most popular model-based parametric classification methods: **logistic regression**.
<img src="./figs/parametric_decision.png" width=400>
_____no_output_____## 2. Logistic regression.
### 2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation $X\in \mathbb{R}^N$ satisfies the expression.
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf,X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the *logistic* function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
_____no_output_____It is straightforward to see that the logistic function has the following properties:
- **P1**: Probabilistic output: $\quad 0 \le g(t) \le 1$
- **P2**: Symmetry: $\quad g(-t) = 1-g(t)$
- **P3**: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$
In the following, we define the logistic function in Python and use it to plot a graphical representation._____no_output_____**Exercise 1**: Verify properties P2 and P3.
**Exercise 2**: Implement a function that computes the logistic function, and use it to plot the function over the interval $[-6,6]$._____no_output_____
<code>
# Define the logistic function
def logistic(t):
#<SOL>
    return 1 / (1 + np.exp(-t))
#</SOL>
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$g(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()_____no_output_____
</code>
### 2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$. _____no_output_____
<code>
# Weight vector:
w = [4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0]*xx0 + w[1]*xx1)
# Plot the logistic map
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer Matplotlib versions
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()_____no_output_____
</code>
The next code fragment represents the output of the same classifier, representing the output of the logistic function in the $x_0$-$x_1$ plane, encoding the value of the logistic function in the representation color._____no_output_____
<code>
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
### 2.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation
$$
{\bf w}^\intercal{\bf z} = 0
$$_____no_output_____**Exercise 3**: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$_____no_output_____
<code>
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
# Z = <FILL IN>
Z = logistic(w[0] + w[1]*xx0 + w[2]*xx1 + w[3]*xx0**2 + w[4]*xx0*xx1 + w[5]*xx1**2)
# Plot the logistic map
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer Matplotlib versions
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()_____no_output_____CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
## 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times \{0,1\}, k=0,\ldots,{K-1}\}$ to set the parameter vector ${\bf w}$ according to certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png" width=400>
_____no_output_____We need still to choose a criterion to optimize with the selection of the parameter vector. In the notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
* Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
* Maximum *A Posteriori* (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$
_____no_output_____
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})]
= g[-{\bf w}^\intercal{\bf z}({\bf x})]$$
we can write
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$
where $\overline{y} = 2y-1$ is a *symmetrized label* ($\overline{y}\in\{-1, 1\}$). _____no_output_____### 3.1. Model assumptions
In the following, we will make the following assumptions:
- **A1**. (Logistic Regression): We assume a logistic model for the *a posteriori* probability of ${Y}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[{\bar y}{\bf w}^\intercal{\bf z}({\bf x})].$$
- **A2**. All samples in ${\mathcal D}$ have been generated from the same distribution, $p_{{\bf X}, Y}({\bf x}, y)$.
- **A3**. Input variables $\bf x$ do not depend on $\bf w$. This implies that
$$p({\bf x}|{\bf w}) = p({\bf x})$$
- **A4**. Targets $y^{(0)}, \cdots, y^{(K-1)}$ are statistically independent given $\bf w$ and the inputs ${\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}$, that is:
$$P(y^{(0)}, \cdots, y^{(K-1)} | {\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) = \prod_{k=0}^{K-1} P(y^{(k)} | {\bf x}^{(k)}, {\bf w})$$
_____no_output_____### 3.2. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$
Using assumptions A2 and A3 above, we have that
\begin{align}
P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y^{(0)}, \cdots, y^{(K-1)},{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)})\end{align}
Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the resolution of the following optimization problem
\begin{align}
\hat {\bf w}_\text{ML} & = \arg \max_{\bf w} P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \\
& = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w})
\end{align}
where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the **likelihood**, **log-likelihood** $\left[L(\bf w)\right]$, and **negative log-likelihood** $\left[\text{NLL}(\bf w)\right]$, respectively._____no_output_____
Now, using A1 (the logistic model)
\begin{align}
\text{NLL}({\bf w})
&= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\
&= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right]
\end{align}
where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$.
_____no_output_____
It can be shown that $\text{NLL}({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=0}^{K-1}
\frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}}
{1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}
\right)} = \\
&= - \sum_{k=0}^{K-1} \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^T {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0
\end{align}
Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be isolated from the above equation, so some iterative optimization algorithm must be used to search for the minimum._____no_output_____### 3.3. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n)
\end{align}
where $\rho_n >0$ is the *learning step*.
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=0}^{K-1} \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)}
\end{align}
_____no_output_____
Defining vectors
\begin{align}
{\bf y} &= [y^{(0)},\ldots,y^{(K-1)}]^\top \\
\hat{\bf p}_n &= [g({\bf w}_n^\top {\bf z}^{(0)}), \ldots, g({\bf w}_n^\top {\bf z}^{(K-1)})]^\top
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}^{(0)},\ldots,{\bf z}^{(K-1)}\right]^\top
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset._____no_output_____#### 3.3.1. Example: Iris Dataset.
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (*setosa*, *versicolor* or *virginica*). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
We will try to fit the logistic regression model to discriminate between two classes using only two attributes.
First, we load the dataset and split it into training and test subsets._____no_output_____
<code>
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)_____no_output_____
</code>
Now, we select two classes and two attributes._____no_output_____
<code>
# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)_____no_output_____
</code>
#### 3.3.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to obtain a dataset where all input coordinates have a similar scale. Learning algorithms usually show fewer instabilities and convergence problems when the data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance._____no_output_____
<code>
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx_____no_output_____
</code>
Now, we can normalize the training and test data. Observe in the code that the same transformation must be applied to both training and test data. This is why the test data are normalized using the means and variances computed from the training set._____no_output_____
<code>
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)_____no_output_____
</code>
The following code generates a plot of the normalized training data._____no_output_____
<code>
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
plt.show()_____no_output_____
</code>
In order to apply the gradient descent rule, we need to define two methods:
- A `fit` method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A `predict` method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions._____no_output_____
<code>
def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric.
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
# Compute negative log-likelihood
# (note that this is not required for the weight update, only for nll tracking)
nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w)).flatten()
# Class
D = [int(round(pn)) for pn in p]
return p, D_____no_output_____
</code>
We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\top)^\top$._____no_output_____
<code>
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])_____no_output_____
</code>
#### 3.3.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
- Number of iterations
- Initialization
- Learning step_____no_output_____**Exercise 4**: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code slide inside the loop, with some proper modifications. To plot a histogram of the values in array `p` with `n` bins, you can use `plt.hist(p, n)`_____no_output_____##### 3.3.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence.
_____no_output_____**Exercise 5**: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ stating a boundary between convergence and divergence?_____no_output_____**Exercise 6**: In this exercise we explore the influence of the learning step more sistematically. Use the code in the previouse exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ on a logarithmic scale. For instance, you can take $\rho = 1, \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$_____no_output_____In practice, the selection of $\rho$ may be a matter of trial and error. There is also some theoretical evidence that the learning step should decrease over time towards zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (decrease fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too fast)
For instance, we can take $\rho_n= \frac{1}{n}$. Another common choice is $\rho_n = \frac{\alpha}{1+\beta n}$, where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error or with some heuristic method._____no_output_____#### 3.3.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights._____no_output_____
<code>
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy /400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
#### 3.3.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the `PolynomialFeatures` method in `sklearn.preprocessing`._____no_output_____
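For intuition, a quick standalone check of what `PolynomialFeatures` produces for two input coordinates (degree 2 shown for brevity):_____no_output_____
<code>
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

demo = PolynomialFeatures(degree=2)
# For samples (x0, x1), the output columns are 1, x0, x1, x0^2, x0*x1, x1^2
print(demo.fit_transform(np.array([[1.0, 2.0], [3.0, 4.0]])))_____no_output_____
</code>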
<code>
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])_____no_output_____
</code>
Visualizing the posterior map, we can see that the polynomial transformation produces nonlinear decision boundaries._____no_output_____
<code>
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.legend(loc='best')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
## 4. Regularization and MAP estimation.
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} \left\{ P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \right\}\\
& = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\
& = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\}
\end{align}_____no_output_____
In the light of this expression, we can conclude that the MAP solution is affected by two terms:
- The likelihood, which takes large values for parameter vectors $\bf w$ that fit the training data well (smaller $\text{NLL}$ values)
- The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our *a priori* preference for some solutions. **Usually, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (associated with smooth classification boundaries).**
_____no_output_____We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
### 4.1 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ follows a zero-mean Gaussian random variable with variance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Noting that
$$\nabla_{\bf w}\left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
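Note that this update only adds the shrinkage factor $\left(1-\frac{2\rho_n}{C}\right)$ to the ML gradient descent rule. A minimal sketch of the corresponding fitting function, adapting `logregFit` from above (the name `logregFitL2` is ours, not from the original notebook):_____no_output_____
<code>
def logregFitL2(Z_tr, Y_tr, rho, n_it, C=1e4):
    # Gradient descent for MAP estimation with a Gaussian prior:
    # w <- (1 - 2*rho/C) * w + rho * Z.T (y - p)
    n_dim = Z_tr.shape[1]
    nll_tr = np.zeros(n_it)
    Y_tr2 = 2*Y_tr - 1   # symmetrized labels, used only for NLL tracking
    w = np.random.randn(n_dim, 1)
    for n in range(n_it):
        p1_tr = logistic(np.dot(Z_tr, w))
        nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
        w = (1 - 2*rho/C)*w + rho*np.dot(Z_tr.T, Y_tr - p1_tr)
    return w, nll_tr_____no_output_____
</code>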
_____no_output_____### 4.2 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
The additional term introduced by the prior in the optimization algorithm is usually named the *regularization term*. It is usually very effective to avoid overfitting when the dimension of the weight vectors is high. Parameter $C$ is named the *inverse regularization strength*._____no_output_____**Exercise 7**: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior._____no_output_____## 5. Other optimization algorithms
### 5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)
\end{align}
Once all samples in the training set have been applied, the algorithm can continue by iterating over the training set several times (i.e., several epochs).
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs many more iterations to converge._____no_output_____**Exercise 8**: Modify logregFit to implement an algorithm that applies the SGD rule._____no_output_____### 5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\top C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\top{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> *Hessian* matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$ and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second-order Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}({\bf w}_n)^{-1} \nabla_{{\bf w}}C({\bf w}_n)
$$
_____no_output_____
For instance, for the MAP estimate with Gaussian prior, the *Hessian* matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + \sum_{k=0}^{K-1} g({\bf w}^\top {\bf z}^{(k)}) \left[1-g({\bf w}^\top {\bf z}^{(k)})\right]{\bf z}^{(k)} ({\bf z}^{(k)})^\top
$$
Defining diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left[g({\bf w}^\top {\bf z}^{(k)}) \left(1-g({\bf w}^\top {\bf z}^{(k)})\right)\right]
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}) {\bf Z}
$$
Therefore, the Newton's algorithm for logistic regression becomes
\begin{align}
{\bf w}_{n+1} = {\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
_____no_output_____
<code>
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
    # Running Newton's method (second-order weight updates)
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr_____no_output_____# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running Newton's method (logregFit2)
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
print('The NLL after training is:', str(nll_tr[len(nll_tr)-1]))_____no_output_____
</code>
## 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm._____no_output_____
<code>
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
| {
"repository": "naxvm/ML4all",
"path": "C3.Classification_LogReg/RegresionLogistica_student.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 58493,
"hexsha": "cb2b58b1c952517ddb870c8d03626e762b0fc00e",
"max_line_length": 517,
"avg_line_length": 34.6727919384,
"alphanum_fraction": 0.5348845161
} |
# Notebook from VCMason/PyGenToolbox
Path: notebooks/Lyna/SplitFastqBySeqLength_WT_E_50Mill.ipynb
<code>
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import SplitFastqFileBySeqLength
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as pltThe autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
2020-07-31 09:54:15.220646
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F260.sort.IESOnly.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE.fastq'
SplitFastqFileBySeqLength.main(f)On sequence: 0
On sequence: 100000
On sequence: 200000
On sequence: 300000
On sequence: 400000
On sequence: 500000
On sequence: 600000
On sequence: 700000
On sequence: 800000
On sequence: 900000
On sequence: 1000000
On sequence: 1100000
On sequence: 1200000
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES.fastq
Example output filename: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES92bp.fastq
Number of lines in input file: 5076872
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import FindScanRNAInFastq
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as pltThe autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
2020-07-31 10:06:24.707129
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES25bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES150bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)On sequence: 0
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES25bp.fastq
Example output filename: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES25bp.scnRNA.fastq
Number of sequences in input file: 2509
Number of sequences in output file: 164
On sequence: 0
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES150bp.fastq
Example output filename: No scnRNA found
Number of sequences in input file: 5157
Number of sequences in output file: 0
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F260.sort.IESOnly26bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)On sequence: 0
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F260.sort.IESOnly26bp.fastq
Example output filename: No scnRNA found
Number of sequences in input file: 37
Number of sequences in output file: 0
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import SplitFastqFileBySeqLength
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as pltThe autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
2020-07-31 08:16:07.285598
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE.fastq'
SplitFastqFileBySeqLength.main(f)On sequence: 0
On sequence: 100000
On sequence: 200000
On sequence: 300000
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly.fastq
Example output filename: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly100bp.fastq
Number of lines in input file: 1257876
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import FindScanRNAInFastq
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as pltThe autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
2020-07-31 08:18:07.672019
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly25bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)On sequence: 0
Made one fastq file for each sequence length:
Input file: D:\LinuxShare\Projects\Lyna\CharSeqPipe\EV_E_50MilReads\hisat2\Pt_51_MacAndIES\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly25bp.fastq
Example output filename: No scnRNA found
Number of sequences in input file: 39
Number of sequences in output file: 0
</code>
| {
"repository": "VCMason/PyGenToolbox",
"path": "notebooks/Lyna/SplitFastqBySeqLength_WT_E_50Mill.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 11449,
"hexsha": "cb2b64148c825db5cc3ad2e6106bd6c1d593c070",
"max_line_length": 230,
"avg_line_length": 36.6955128205,
"alphanum_fraction": 0.6369115207
} |
# Notebook from MaayanLab/jupyter-template-catalog
Path: appyters/SigCom_LINCS_Consensus_Appyter/SigCom LINCS Consensus Appyter.ipynb
<code>
#%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())_____no_output_____%%appyter hide_code
{% do SectionField(
name='PRIMARY',
title='1. Upload your data',
subtitle='Upload up and down gene-sets to perform two-sided rank enrichment. '+
'Upload up- or down-only gene-sets to perform rank analysis for that direction.',
img='file-upload.png'
) %}
{% do SectionField(
name='ENRICHMENT',
title='2. Choose libraries for enrichment',
subtitle='Select the libraries that would be used for consensus analysis, as well as the Enrichr and '+
'Drugmonizome libraries to use for enriching the consensus perturbagens.',
img='find-replace.png'
) %}
{% do SectionField(
name='PARAMETER',
title='3. Tweak the parameters',
subtitle='Modify the parameters to suit the needs of your analysis.',
img='hammer-screwdriver.png'
) %}_____no_output_____%%appyter markdown
{% set title = StringField(
name='title',
label='Notebook Name',
default='SigCom LINCS Consensus Signatures',
section="PRIMARY",
) %}
# {{ title.raw_value }}_____no_output_____
</code>
SigCom LINCS hosts ranked L1000 [1] perturbation signatures from a variety of perturbation types, including drugs and other small molecules, CRISPR knockouts, shRNA knockdowns, and single-gene overexpression. SigCom LINCS' RESTful APIs enable querying the signatures programmatically to identify mimickers or reversers for input up and down gene sets. This appyter extends that functionality by analyzing a collection of input signatures to identify consistently recurring mimickers and reversers. The appyter takes as input a set of two-sided or one-sided gene sets and constructs a score matrix of mimicking and reversing signatures, from which it computes the consensus. The pipeline also includes (1) a Clustergrammer [2] interactive heatmap, and (2) enrichment analysis of the top gene perturbations [3-6] to elucidate the pathways targeted by the consensus perturbagens._____no_output_____
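Each uploaded gene-set file follows the GMT convention, where every line holds a signature label, a description, and then the genes; a minimal sketch of a parser under that assumption (the helper name `read_gmt` is ours, not part of the appyter):_____no_output_____
<code>
def read_gmt(path):
    # GMT: tab-separated columns per line -> name, description, gene, gene, ...
    gene_sets = {}
    with open(path) as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) > 2:
                gene_sets[parts[0]] = [gene for gene in parts[2:] if gene]
    return gene_sets_____no_output_____
</code>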
<code>
import re
import math
import time
import requests
import pandas as pd
import json
import scipy.stats as st
from IPython.display import display, IFrame, Markdown, HTML
import seaborn as sns
import matplotlib.pyplot as plt
from umap import UMAP
from sklearn.manifold import TSNE
from maayanlab_bioinformatics.normalization import quantile_normalize, zscore_normalize
from maayanlab_bioinformatics.harmonization import ncbi_genes_lookup
from tqdm import tqdm
import plotly.express as px
import numpy as np
from matplotlib.ticker import MaxNLocator_____no_output_____METADATA_API = "https://maayanlab.cloud/sigcom-lincs/metadata-api"
DATA_API = "https://maayanlab.cloud/sigcom-lincs/data-api/api/v1"
CLUSTERGRAMMER_URL = 'https://maayanlab.cloud/clustergrammer/matrix_upload/'
S3_PREFIX = "https://appyters.maayanlab.cloud/storage/LDP3Consensus/"
drugmonizome_meta_api = "https://maayanlab.cloud/drugmonizome/metadata-api"
drugmonizome_data_api = "https://maayanlab.cloud/drugmonizome/data-api/api/v1"
enrichr_api = 'https://maayanlab.cloud/Enrichr/'_____no_output_____table = 1
figure = 1_____no_output_____%%appyter code_exec
{% set up_gene_sets = FileField(
name='up_gene_sets',
label='Up Gene-sets',
default='covid19_up.gmt',
section="PRIMARY",
examples={
'covid19_up.gmt': 'https://appyters.maayanlab.cloud/storage/LDP3Consensus/covid19_up.gmt'
}
) %}
{% set down_gene_sets = FileField(
name='down_gene_sets',
label='Down Gene-sets',
default='covid19_down.gmt',
section="PRIMARY",
examples={
'covid19_down.gmt': 'https://appyters.maayanlab.cloud/storage/LDP3Consensus/covid19_down.gmt'
}
) %}
up_gene_sets = {{ up_gene_sets }}
down_gene_sets = {{ down_gene_sets }}_____no_output_____gene_set_direction = None
if up_gene_sets == '':
gene_set_direction = "down"
print("Up gene-sets was not uploaded. Gene-set direction is set to down.")
elif down_gene_sets == '':
gene_set_direction = "up"
print("Down gene-sets was not uploaded. Gene-set direction is set to up.")_____no_output_____%%appyter code_exec
datasets = {{ MultiChoiceField(name='datasets',
label='LINCS Datasets',
description='Select the LINCS datasets to use for the consensus analysis',
default=[
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 Chemical Perturbations (2021)",
],
section = 'ENRICHMENT',
choices=[
"LINCS L1000 Antibody Perturbations (2021)",
"LINCS L1000 Ligand Perturbations (2021)",
"LINCS L1000 Overexpression Perturbations (2021)",
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 shRNA Perturbations (2021)",
"LINCS L1000 Chemical Perturbations (2021)",
"LINCS L1000 siRNA Perturbations (2021)",
]
)
}}
drugmonizome_datasets = {{ MultiChoiceField(name='drugmonizome_datasets',
description='Select the Drugmonizome libraries to use for the enrichment analysis of the consensus drugs',
label='Drugmonizome Libraries',
default=["L1000FWD_GO_Biological_Processes_drugsetlibrary_up", "L1000FWD_GO_Biological_Processes_drugsetlibrary_down"],
section = 'ENRICHMENT',
choices=[
"L1000FWD_GO_Biological_Processes_drugsetlibrary_up",
"L1000FWD_GO_Biological_Processes_drugsetlibrary_down",
"L1000FWD_GO_Cellular_Component_drugsetlibrary_up",
"L1000FWD_GO_Cellular_Component_drugsetlibrary_down",
"L1000FWD_GO_Molecular_Function_drugsetlibrary_up",
"L1000FWD_GO_Molecular_Function_drugsetlibrary_down",
"L1000FWD_KEGG_Pathways_drugsetlibrary_up",
"L1000FWD_KEGG_Pathways_drugsetlibrary_down",
"L1000FWD_signature_drugsetlibrary_up",
"L1000FWD_signature_drugsetlibrary_down",
"L1000FWD_predicted_side_effects",
"KinomeScan_kinase_drugsetlibrary",
"Geneshot_associated_drugsetlibrary",
"Geneshot_predicted_generif_drugsetlibrary",
"Geneshot_predicted_coexpression_drugsetlibrary",
"Geneshot_predicted_tagger_drugsetlibrary",
"Geneshot_predicted_autorif_drugsetlibrary",
"Geneshot_predicted_enrichr_drugsetlibrary",
"SIDER_indications_drugsetlibrary",
"SIDER_side_effects_drugsetlibrary",
"DrugRepurposingHub_target_drugsetlibrary",
"ATC_drugsetlibrary",
"Drugbank_smallmolecule_target_drugsetlibrary",
"Drugbank_smallmolecule_enzyme_drugsetlibrary",
"Drugbank_smallmolecule_carrier_drugsetlibrary",
"Drugbank_smallmolecule_transporter_drugsetlibrary",
"STITCH_target_drugsetlibrary",
"PharmGKB_OFFSIDES_side_effects_drugsetlibrary",
"CREEDS_signature_drugsetlibrary_down",
"CREEDS_signature_drugsetlibrary_up",
"RDKIT_maccs_fingerprints_drugsetlibrary",
"DrugCentral_target_drugsetlibrary",
"PubChem_fingerprints_drugsetlibrary",
"DrugRepurposingHub_moa_drugsetlibrary",
"PharmGKB_snp_drugsetlibrary"
]
)
}}
transcription_libraries = {{ MultiChoiceField(name='transcription_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Transcription Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'ARCHS4_TFs_Coexp',
'ChEA_2016',
'ENCODE_and_ChEA_Consensus_TFs_from_ChIP-X',
'ENCODE_Histone_Modifications_2015',
'ENCODE_TF_ChIP-seq_2015',
'Epigenomics_Roadmap_HM_ChIP-seq',
'Enrichr_Submissions_TF-Gene_Coocurrence',
'Genome_Browser_PWMs',
'lncHUB_lncRNA_Co-Expression',
'miRTarBase_2017',
'TargetScan_microRNA_2017',
'TF-LOF_Expression_from_GEO',
'TF_Perturbations_Followed_by_Expression',
'Transcription_Factor_PPIs',
'TRANSFAC_and_JASPAR_PWMs',
'TRRUST_Transcription_Factors_2019'])
}}
pathways_libraries = {{ MultiChoiceField(name='pathways_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Pathway Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'ARCHS4_Kinases_Coexp',
'BioCarta_2016',
'BioPlanet_2019',
'BioPlex_2017',
'CORUM',
'Elsevier_Pathway_Collection',
'HMS_LINCS_KinomeScan',
'HumanCyc_2016',
'huMAP',
'KEA_2015',
'KEGG_2021_Human',
'KEGG_2019_Mouse',
'Kinase_Perturbations_from_GEO_down',
'Kinase_Perturbations_from_GEO_up',
'L1000_Kinase_and_GPCR_Perturbations_down',
'L1000_Kinase_and_GPCR_Perturbations_up',
'NCI-Nature_2016',
'NURSA_Human_Endogenous_Complexome',
'Panther_2016',
'Phosphatase_Substrates_from_DEPOD',
'PPI_Hub_Proteins',
'Reactome_2016',
'SILAC_Phosphoproteomics',
'SubCell_BarCode',
'Virus-Host_PPI_P-HIPSTer_2020',
'WikiPathway_2021_Human',
'WikiPathways_2019_Mouse'])
}}
ontologies_libraries = {{ MultiChoiceField(name='ontologies_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Ontology Libraries',
default=['GO_Biological_Process_2021'],
section = 'ENRICHMENT',
choices=[
'GO_Biological_Process_2021',
'GO_Cellular_Component_2021',
'GO_Molecular_Function_2021',
'Human_Phenotype_Ontology',
'Jensen_COMPARTMENTS',
'Jensen_DISEASES',
'Jensen_TISSUES',
'MGI_Mammalian_Phenotype_Level_4_2021'])
}}
diseases_drugs_libraries = {{ MultiChoiceField(name='diseases_drugs_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Disease/Drug Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'Achilles_fitness_decrease',
'Achilles_fitness_increase',
'ARCHS4_IDG_Coexp',
'ClinVar_2019',
'dbGaP',
'DepMap_WG_CRISPR_Screens_Broad_CellLines_2019',
'DepMap_WG_CRISPR_Screens_Sanger_CellLines_2019',
'DisGeNET',
'DrugMatrix',
'DSigDB',
'GeneSigDB',
'GWAS_Catalog_2019',
'LINCS_L1000_Chem_Pert_down',
'LINCS_L1000_Chem_Pert_up',
'LINCS_L1000_Ligand_Perturbations_down',
'LINCS_L1000_Ligand_Perturbations_up',
'MSigDB_Computational',
'MSigDB_Oncogenic_Signatures',
'Old_CMAP_down',
'Old_CMAP_up',
'OMIM_Disease',
'OMIM_Expanded',
'PheWeb_2019',
'Rare_Diseases_AutoRIF_ARCHS4_Predictions',
'Rare_Diseases_AutoRIF_Gene_Lists',
'Rare_Diseases_GeneRIF_ARCHS4_Predictions',
'Rare_Diseases_GeneRIF_Gene_Lists',
'UK_Biobank_GWAS_v1',
'Virus_Perturbations_from_GEO_down',
'Virus_Perturbations_from_GEO_up',
'VirusMINT'])
}}_____no_output_____%%appyter code_exec
alpha = {{FloatField(name='alpha', label='p-value cutoff', default=0.05, section='PARAMETER')}}
min_sigs = {{IntField(name='min_sigs',
label='min_sigs',
description='Minimum number of input gene sets that must share the same hit for it to be considered a consensus signature',
default=2, section='PARAMETER')}}
top_perts = {{IntField(name='top_perts', label='top signatures', default=100, section='PARAMETER')}}
consensus_method = {{ ChoiceField(
name='consensus_method',
label='consensus method',
description='Please select a method for getting the consensus',
default='z-score',
choices={
'z-score': "'z-score'",
'top count': "'count'",
},
section='PARAMETER') }}
_____no_output_____
</code>
## Gene Harmonization
To ensure that the gene names are consistent throughout the analysis, the input gene sets are harmonized to NCBI Gene symbols [7-8] using an [in-house gene harmonization module](https://github.com/MaayanLab/maayanlab-bioinformatics)._____no_output_____
<code>
ncbi_lookup = ncbi_genes_lookup('Mammalia/Homo_sapiens')
print('Loaded NCBI genes!')_____no_output_____signatures = {}
if not up_gene_sets == '':
with open(up_gene_sets) as upfile:
for line in upfile:
unpacked = line.strip().split("\t")
if len(unpacked) < 3:
raise ValueError("GMT is not formatted properly, please consult the README of the appyter for proper formatting")
sigid = unpacked[0]
geneset = unpacked[2:]
genes = []
for i in geneset:
gene = i.split(",")[0]
gene_name = ncbi_lookup(gene.upper())
if gene_name:
genes.append(gene_name)
signatures[sigid] = {
"up_genes": genes,
"down_genes": []
}
if not down_gene_sets == '':
with open(down_gene_sets) as downfile:
for line in downfile:
unpacked = line.strip().split("\t")
if len(unpacked) < 3:
raise ValueError("GMT is not formatted properly, please consult the README of the appyter for proper formatting")
sigid = unpacked[0]
geneset = unpacked[2:]
if sigid not in signatures and gene_set_direction == None:
raise ValueError("%s did not match any of the up signatures, make sure that the signature names are the same for both up and down genes"%sigid)
else:
genes = []
for i in geneset:
gene = i.split(",")[0]
gene_name = ncbi_lookup(gene.upper())
if gene_name:
genes.append(gene_name)
if sigid in signatures:
signatures[sigid]["down_genes"] = genes
else:
signatures[sigid] = {
"up_genes": [],
"down_genes": genes
}_____no_output_____
</code>
## Input Signatures Metadata_____no_output_____
<code>
enrichr_libraries = transcription_libraries + pathways_libraries + ontologies_libraries + diseases_drugs_libraries_____no_output_____dataset_map = {
"LINCS L1000 Antibody Perturbations (2021)": "l1000_aby",
"LINCS L1000 Ligand Perturbations (2021)": "l1000_lig",
"LINCS L1000 Overexpression Perturbations (2021)": "l1000_oe",
"LINCS L1000 CRISPR Perturbations (2021)": "l1000_xpr",
"LINCS L1000 shRNA Perturbations (2021)": "l1000_shRNA",
"LINCS L1000 Chemical Perturbations (2021)": "l1000_cp",
"LINCS L1000 siRNA Perturbations (2021)": "l1000_siRNA"
}
labeller = {
"LINCS L1000 Antibody Perturbations (2021)": "antibody",
"LINCS L1000 Ligand Perturbations (2021)": "ligand",
"LINCS L1000 Overexpression Perturbations (2021)": "overexpression",
"LINCS L1000 CRISPR Perturbations (2021)": "CRISPR",
"LINCS L1000 shRNA Perturbations (2021)": "shRNA",
"LINCS L1000 Chemical Perturbations (2021)": "chemical",
"LINCS L1000 siRNA Perturbations (2021)": "siRNA"
}
gene_page = {
"LINCS L1000 Ligand Perturbations (2021)",
"LINCS L1000 Overexpression Perturbations (2021)",
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 shRNA Perturbations (2021)",
"LINCS L1000 siRNA Perturbations (2021)"
}
drug_page = {
"LINCS L1000 Chemical Perturbations (2021)": "l1000_cp",
}_____no_output_____
</code>
## SigCom LINCS Signature Search
SigCom LINCS provides RESTful APIs to perform rank enrichment analysis on two-sided (up and down) gene-sets or one-sided (up-only, down-only) gene sets to get mimicking and reversing signatures that are ranked by z-score (one-sided gene-sets) or z-sum (absolute value sum of the z-scores of the up and down gene-sets for two-sided analysis)._____no_output_____
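The toy sketch below (made-up scores, not real query results) mirrors the logic of `resolve_ranktwosided`, defined next: the sign of the z-sum separates mimickers from reversers, and its absolute value ranks their strength:_____no_output_____
<code>
# Toy two-sided results: positive z-sum => mimicker, negative z-sum => reverser
toy_results = [
    {"id": "sigA", "z-sum": 4.2},
    {"id": "sigB", "z-sum": -3.1},
    {"id": "sigC", "z-sum": 1.0},
]
for r in toy_results:
    r["type"] = "mimicker" if r["z-sum"] > 0 else "reverser"
# Rank by |z-sum| so the strongest hits of either type come first
ranked = sorted(toy_results, key=lambda r: abs(r["z-sum"]), reverse=True)
print([(r["id"], r["type"]) for r in ranked])_____no_output_____
</code>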
<code>
# functions
def convert_genes(up_genes=[], down_genes=[]):
try:
payload = {
"filter": {
"where": {
"meta.symbol": {"inq": up_genes + down_genes}
}
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/entities/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
up = set(up_genes)
down = set(down_genes)
if len(up_genes) == 0 or len(down_genes) == 0:
converted = {
"entities": [],
}
else:
converted = {
"up_entities": [],
"down_entities": []
}
for i in results:
symbol = i["meta"]["symbol"]
if "entities" in converted:
converted["entities"].append(i["id"])
elif symbol in up:
converted["up_entities"].append(i["id"])
elif symbol in down:
converted["down_entities"].append(i["id"])
return converted
except Exception as e:
print(e)
def signature_search(genes, library):
try:
payload = {
**genes,
"database": library,
"limit": 500,
}
timeout = 0.5
for i in range(5):
endpoint = "/enrich/rank" if "entities" in payload else "/enrich/ranktwosided"
res = requests.post(DATA_API + endpoint, json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
return res.json()["results"]
except Exception as e:
print(e)
def resolve_rank(s, gene_set_direction):
try:
sigs = {}
for i in s:
if i["p-value"] < alpha:
uid = i["uuid"]
direction = "up" if i["zscore"] > 0 else "down"
if direction == gene_set_direction:
i["type"] = "mimicker"
sigs[uid] = i
else:
i["type"] = "reverser"
sigs[uid] = i
payload = {
"filter": {
"where": {
"id": {"inq": list(sigs.keys())}
},
"fields": [
"id",
"meta.pert_name",
"meta.pert_type",
"meta.pert_time",
"meta.pert_dose",
"meta.cell_line",
"meta.local_id"
]
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/signatures/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
signatures = {
"mimickers": {},
"reversers": {}
}
for sig in results:
uid = sig["id"]
scores = sigs[uid]
sig["scores"] = scores
if "pert_name" in sig["meta"]:
local_id = sig["meta"].get("local_id", None)
if scores["type"] == "mimicker":
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["mimickers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-score": abs(scores.get("zscore", 0)),
"p-value": scores.get("p-value", 0)
}
elif scores["type"] == "reverser":
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["reversers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-score": abs(scores.get("zscore", 0)),
"p-value": scores.get("p-value", 0)
}
return signatures
except Exception as e:
print(e)
def resolve_ranktwosided(s):
try:
sigs = {}
for i in s:
if i['p-down'] < alpha and i['p-up'] < alpha:
uid = i["uuid"]
i['z-sum (abs)'] = abs(i['z-sum'])
if i['z-sum'] > 0:
i["type"] = "mimicker"
sigs[uid] = i
elif i['z-sum'] < 0:
i["type"] = "reverser"
sigs[uid] = i
payload = {
"filter": {
"where": {
"id": {"inq": list(sigs.keys())}
},
"fields": [
"id",
"meta.pert_name",
"meta.pert_type",
"meta.pert_time",
"meta.pert_dose",
"meta.cell_line",
"meta.local_id"
]
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/signatures/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
signatures = {
"mimickers": {},
"reversers": {}
}
for sig in results:
uid = sig["id"]
scores = sigs[uid]
sig["scores"] = scores
if "pert_name" in sig["meta"]:
local_id = sig["meta"].get("local_id", None)
if scores["type"] == "mimicker" and len(signatures["mimickers"]) < 100:
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["mimickers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-sum": scores.get("z-sum (abs)", 0)
}
elif scores["type"] == "reverser" and len(signatures["reversers"]) < 100:
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["reversers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-sum": scores.get("z-sum (abs)", 0)
}
return signatures
except Exception as e:
print(e)_____no_output_____# enriched = {lib:{"mimickers": {}, "reversers": {}} for lib in datasets}
enriched = {"mimickers": {lib: {} for lib in datasets}, "reversers": {lib: {} for lib in datasets}}
metadata = {}
for k,sig in tqdm(signatures.items()):
try:
time.sleep(0.1)
genes = convert_genes(sig["up_genes"],sig["down_genes"])
if ("entities" in genes and len(genes["entities"]) > 5) or (len(genes["up_entities"]) > 5 and len(genes["down_entities"]) > 5):
for lib in datasets:
library = dataset_map[lib]
s = signature_search(genes, library)
if gene_set_direction == None:
sigs = resolve_ranktwosided(s)
else:
sigs = resolve_rank(s, gene_set_direction)
enriched["mimickers"][lib][k] = sigs["mimickers"]
enriched["reversers"][lib][k] = sigs["reversers"]
for direction, entries in sigs.items():
for label, meta in entries.items():
if label not in metadata:
metadata[label] = {
"pert_name": meta.get("pert_name", None),
"pert_time": meta.get("pert_time", None),
"pert_dose": meta.get("pert_dose", None),
"cell_line": meta.get("cell_line", None),
}
time.sleep(0.1)
except Exception as e:
print(e)_____no_output_____def clustergrammer(df, name, figure, label="Clustergrammer"):
clustergram_df = df.rename(columns={i:"Signature: %s"%i for i in df.columns}, index={i:"Drug: %s"%i for i in df.index})
clustergram_df.to_csv(name, sep="\t")
response = ''
timeout = 0.5
for i in range(5):
try:
res = requests.post(CLUSTERGRAMMER_URL, files={'file': open(name, 'rb')})
if not res.ok:
response = res.text
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
clustergrammer_url = res.text.replace("http:","https:")
break
except Exception as e:
response = e
time.sleep(2)
else:
if type(response) == Exception:
raise response
else:
raise Exception(response)
display(IFrame(clustergrammer_url, width="1000", height="1000"))
display(Markdown("**Figure %d** %s [Go to url](%s)"%(figure, label, clustergrammer_url)))
figure += 1
return figure
cmap = sns.cubehelix_palette(50, hue=0.05, rot=0, light=1, dark=0)
def heatmap(df, filename, figure, label, width=15, height=15):
fig = plt.figure(figsize=(width,height))
cg = sns.clustermap(df, cmap=cmap, figsize=(width, height))
cg.ax_row_dendrogram.set_visible(False)
cg.ax_col_dendrogram.set_visible(False)
display(cg)
plt.show()
cg.savefig(filename)
display(Markdown("**Figure %d** %s"%(figure, label)))
figure+=1
return figure
def make_clickable(link):
# target _blank to open new window
# extract clickable text to display for your link
text = link.split('=')[1]
return f'<a target="_blank" href="{link}">{text}</a>'
annot_dict = {}
def bar_chart(enrichment, title=''):
bar_color = 'mediumspringgreen'
bar_color_not_sig = 'lightgrey'
edgecolor=None
linewidth=0
if len(enrichment) > 10:
enrichment = enrichment[0:10]
enrichment_names = [i["name"] for i in enrichment]
enrichment_scores = [i["pval"] for i in enrichment]
plt.figure(figsize=(10,4))
bar_colors = [bar_color if (x < 0.05) else bar_color_not_sig for x in enrichment_scores]
fig = sns.barplot(x=np.log10(enrichment_scores)*-1, y=enrichment_names, palette=bar_colors, edgecolor=edgecolor, linewidth=linewidth)
fig.axes.get_yaxis().set_visible(False)
fig.set_title(title.replace('_',' '),fontsize=20)
fig.set_xlabel('-Log10(p-value)',fontsize=19)
fig.xaxis.set_major_locator(MaxNLocator(integer=True))
fig.tick_params(axis='x', which='major', labelsize=20)
if max(np.log10(enrichment_scores)*-1)<1:
fig.xaxis.set_ticks(np.arange(0, max(np.log10(enrichment_scores)*-1), 0.1))
for ii,annot in enumerate(enrichment_names):
if annot in annot_dict.keys():
annot = annot_dict[annot]
if enrichment_scores[ii] < 0.05:
annot = ' *'.join([annot, str(str(np.format_float_scientific(enrichment_scores[ii],precision=2)))])
else:
annot = ' '.join([annot, str(str(np.format_float_scientific(enrichment_scores[ii],precision=2)))])
title_start= max(fig.axes.get_xlim())/200
fig.text(title_start,ii,annot,ha='left',wrap = True, fontsize = 12)
fig.patch.set_edgecolor('black')
fig.patch.set_linewidth(2)
plt.show()
def get_drugmonizome_plot(consensus, label, figure, dataset):
payload = {
"filter":{
"where": {
"meta.Name": {
"inq": [i.lower() for i in set(consensus['pert name'])]
}
}
}
}
res = requests.post(drugmonizome_meta_api + "/entities/find", json=payload)
entities = {}
for i in res.json():
name = i["meta"]["Name"]
uid = i["id"]
if name not in entities:
entities[name] = uid
query = {
"entities": list(entities.values()),
"limit": 1000,
"database": dataset
}
res = requests.post(drugmonizome_data_api + "/enrich/overlap", json=query)
scores = res.json()["results"]
uids = {i["uuid"]: i for i in scores}
payload = {
"filter":{
"where": {
"id": {
"inq": list(uids.keys())
}
}
}
}
res = requests.post(drugmonizome_meta_api + "/signatures/find", json=payload)
sigs = res.json()
scores = []
for i in sigs:
score = uids[i["id"]]
scores.append({
"name": i["meta"]["Term"][0]["Name"],
"pval": score["p-value"]
})
scores.sort(key=lambda x: x['pval'])
if len(scores) > 0:
bar_chart(scores, dataset.replace("setlibrary", " set library"))
display(Markdown("**Figure %d** %s"%(figure, label)))
figure += 1
return figure
def get_enrichr_bar(userListId, enrichr_library, figure, label):
query_string = '?userListId=%s&backgroundType=%s'
res = requests.get(
enrichr_api + 'enrich' + query_string % (userListId, enrichr_library)
)
if not res.ok:
raise Exception('Error fetching enrichment results')
data = res.json()[enrichr_library]
scores = [{"name": i[1], "pval": i[2]} for i in data]
scores.sort(key=lambda x: x['pval'])
if len(scores) > 0:
bar_chart(scores, enrichr_library)
display(Markdown("**Figure %d** %s"%(figure, label)))
figure +=1
return figure
def enrichment(consensus, label, figure):
gene_names = [i.upper() for i in set(consensus['pert name'])]
genes_str = '\n'.join(gene_names)
description = label
payload = {
'list': (None, genes_str),
'description': (None, description)
}
res = requests.post(enrichr_api + 'addList', files=payload)
if not res.ok:
raise Exception('Error analyzing gene list')
data = res.json()
shortId = data["shortId"]
userListId = data["userListId"]
display(Markdown("Enrichr Link: https://maayanlab.cloud/Enrichr/enrich?dataset=%s"%shortId))
for d in enrichr_libraries:
l = "Enrichr %s top ranked terms for %s"%(d.replace("_", " "), label)
figure = get_enrichr_bar(userListId, d, figure, l)
return figure_____no_output_____
</code>
## Consensus Analysis
Mimicking and reversing perturbagen scores are organized into a matrix. Depending on the consensus method chosen by the user, the consensus signatures are computed either by ranking perturbagens by the sum of their z-scores (or z-sums) across the input gene sets, or by counting how many input gene sets return each perturbagen as a significant hit.
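The following minimal sketch (made-up numbers, independent of the analysis) illustrates the two ranking methods on a small score matrix:_____no_output_____
<code>
# Toy score matrix: rows are perturbagens, columns are input gene sets;
# a zero means the perturbagen was not a significant hit for that gene set
toy = pd.DataFrame(
    [[2.0, 0.0, 1.5],
     [0.5, 0.7, 0.9],
     [3.0, 0.0, 0.0]],
    index=["pertA", "pertB", "pertC"],
    columns=["sig1", "sig2", "sig3"])
# 'z-score' method: rank by the summed scores across the input gene sets
print(toy.sum(1).sort_values(ascending=False))
# 'count' method: rank by the number of input gene sets with a hit
print((toy > 0).sum(1).sort_values(ascending=False))_____no_output_____
</code>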
### Mimickers_____no_output_____
<code>
score_field = "z-sum" if gene_set_direction == None else "z-score"
top_n_signatures = 100_____no_output_____direction = "mimickers"
alternate = "mimicking"
for lib in datasets:
library = dataset_map[lib]
display(Markdown("#### Consensus %s %s signatures"%(alternate, labeller[lib])), display_id=alternate+lib)
index = set()
sig_dict = enriched[direction][lib]
for v in sig_dict.values():
index = index.union(v.keys())
df = pd.DataFrame(0, index=index, columns=sig_dict.keys())
for sig_name,v in sig_dict.items():
for local_id, meta in v.items():
df.at[local_id, sig_name] = meta[score_field]
filename = "sig_matrix_%s_%s.tsv"%(library.replace(" ","_"), direction)
df.to_csv(filename, sep="\t")
display(Markdown("Download score matrix for %s %s signatures ([download](./%s))"%
(alternate, labeller[lib], filename)))
if len(df.index) > 1 and len(df.columns) > 1:
top_index = df.index
if len(top_index) > top_n_signatures:
top_index = df[(df>0).sum(1) >= min_sigs].sum(1).sort_values(ascending=False).index
top_index = top_index if len(top_index) <= top_n_signatures else top_index[0:top_n_signatures]
if (df.loc[top_index].sum()>0).sum() < len(df.columns):
blank = df.loc[top_index].sum()==0
blank_indices = [i for i in blank.index if blank[i]]
top_index = list(top_index) + [df[i].idxmax() for i in blank_indices]
top_df = df.loc[top_index]
consensus_norm = quantile_normalize(top_df)
display(Markdown("##### Clustergrammer for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-clustergrammer-%s"%(alternate, lib))
label = "Clustergrammer of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "clustergrammer_%s_%s.tsv"%(library.replace(" ", "_"), direction)
figure = clustergrammer(consensus_norm, name, figure, label)
display(Markdown("#### Heatmap for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-heatmap-%s"%(alternate, lib))
label = "Heatmap of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "heatmap_%s_%s.png"%(library.replace(" ", "_"), direction)
figure = heatmap(consensus_norm, name, figure, label)
df = df[(df>0).sum(1) >= min_sigs]
if consensus_method == 'z-score':
df = df.loc[df.sum(1).sort_values(ascending=False).index[0:top_perts]]
else:
df = df.loc[(df > 0).sum(1).sort_values(ascending=False).index[0:top_perts]]
if lib in gene_page:
# "pert_name": sig["meta"].get("pert_name", None),
# "pert_time": sig["meta"].get("pert_time", None),
# "pert_dose": sig["meta"].get("pert_dose", None),
# "cell_line": sig["meta"].get("cell_line", None),
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert time", "cell line", "count", "z-sum", "Enrichr gene page"])
stat_df['count'] = (df > 0).sum(1)
# Sum the z-scores across the input gene sets
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df['Enrichr gene page'] = ["https://maayanlab.cloud/Enrichr/#find!gene=%s"%i for i in stat_df["pert name"]]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(lib.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
stat_df['Enrichr gene page'] = stat_df['Enrichr gene page'].apply(make_clickable)
stat_html = stat_df.head(25).to_html(escape=False)
display(HTML(stat_html))
else:
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert dose", "pert time", "cell line", "count", "z-sum"])
stat_df['count'] = (df > 0).sum(1)
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert dose"] = metadata[i]["pert_dose"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(library.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
display(stat_df.head(25))
display(Markdown("**Table %d** Top 25 consensus %s %s signatures([download](./%s))"%
(table, alternate, labeller[lib], filename)))
table+=1
# display(df.head())
# display(Markdown("**Table %d** Consensus %s %s signatures ([download](./%s))"%
# (table, alternate, labeller[lib], filename)))
# table+=1
if len(set(stat_df["pert name"])) > 5:
if lib in drug_page:
display(Markdown("#### Drugmonizome enrichment analysis for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
for d in drugmonizome_datasets:
label = "%s top ranked enriched terms for %s %s perturbagens"%(d.replace("_", " "), alternate, labeller[lib])
figure = get_drugmonizome_plot(stat_df, label, figure, d)
elif lib in gene_page:
display(Markdown("#### Enrichr link to analyze enriched terms for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
label = "%s L1000 %s perturbagens"%(alternate, labeller[lib])
figure = enrichment(stat_df, label, figure)_____no_output_____
</code>
### Reversers_____no_output_____
<code>
direction = "reversers"
alternate = "reversing"
for lib in datasets:
library = dataset_map[lib]
display(Markdown("#### Consensus %s %s signatures"%(alternate, labeller[lib])), display_id=alternate+lib)
index = set()
sig_dict = enriched[direction][lib]
for v in sig_dict.values():
index = index.union(v.keys())
df = pd.DataFrame(0, index=index, columns=sig_dict.keys())
for sig_name,v in sig_dict.items():
for local_id, meta in v.items():
df.at[local_id, sig_name] = meta[score_field]
filename = "sig_matrix_%s_%s.tsv"%(library.replace(" ","_"), direction)
df.to_csv(filename, sep="\t")
display(Markdown("Download score matrix for %s %s signatures ([download](./%s))"%
(alternate, labeller[lib], filename)))
if len(df.index) > 1 and len(df.columns) > 1:
top_index = df.index
if len(top_index) > top_n_signatures:
top_index = df[(df>0).sum(1) >= min_sigs].sum(1).sort_values(ascending=False).index
top_index = top_index if len(top_index) <= top_n_signatures else top_index[0:top_n_signatures]
if (df.loc[top_index].sum()>0).sum() < len(df.columns):
blank = df.loc[top_index].sum()==0
blank_indices = [i for i in blank.index if blank[i]]
top_index = list(top_index) + [df[i].idxmax() for i in blank_indices]
top_df = df.loc[top_index]
consensus_norm = quantile_normalize(top_df)
display(Markdown("##### Clustergrammer for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-clustergrammer-%s"%(alternate, lib))
label = "Clustergrammer of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "clustergrammer_%s_%s.tsv"%(library.replace(" ", "_"), direction)
figure = clustergrammer(consensus_norm, name, figure, label)
display(Markdown("#### Heatmap for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-heatmap-%s"%(alternate, lib))
label = "Heatmap of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "heatmap_%s_%s.png"%(library.replace(" ", "_"), direction)
figure = heatmap(consensus_norm, name, figure, label)
df = df[(df>0).sum(1) >= min_sigs]
if consensus_method == 'z-score':
df = df.loc[df.sum(1).sort_values(ascending=False).index[0:top_perts]]
else:
df = df.loc[(df > 0).sum(1).sort_values(ascending=False).index[0:top_perts]]
if lib in gene_page:
# "pert_name": sig["meta"].get("pert_name", None),
# "pert_time": sig["meta"].get("pert_time", None),
# "pert_dose": sig["meta"].get("pert_dose", None),
# "cell_line": sig["meta"].get("cell_line", None),
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert time", "cell line", "count", "z-sum", "Enrichr gene page"])
stat_df['count'] = (df > 0).sum(1)
# Sum the z-scores across the input gene sets
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df['Enrichr gene page'] = ["https://maayanlab.cloud/Enrichr/#find!gene=%s"%i for i in stat_df["pert name"]]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(lib.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
stat_df['Enrichr gene page'] = stat_df['Enrichr gene page'].apply(make_clickable)
stat_html = stat_df.head(25).to_html(escape=False)
display(HTML(stat_html))
else:
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert dose", "pert time", "cell line", "count", "z-sum"])
stat_df['count'] = (df > 0).sum(1)
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert dose"] = metadata[i]["pert_dose"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(library.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
display(stat_df.head(25))
display(Markdown("**Table %d** Top 25 consensus %s %s signatures([download](./%s))"%
(table, alternate, labeller[lib], filename)))
table+=1
# display(df.head())
# display(Markdown("**Table %d** Consensus %s %s signatures ([download](./%s))"%
# (table, alternate, labeller[lib], filename)))
# table+=1
if len(set(stat_df["pert name"])) > 5:
if lib in drug_page:
display(Markdown("#### Drugmonizome enrichment analysis for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
for d in drugmonizome_datasets:
label = "%s top ranked enriched terms for %s %s perturbagens"%(d.replace("_", " "), alternate, labeller[lib])
figure = get_drugmonizome_plot(stat_df, label, figure, d)
elif lib in gene_page:
display(Markdown("#### Enrichr link to analyze enriched terms for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
label = "%s L1000 %s perturbagens"%(alternate, labeller[lib])
figure = enrichment(stat_df, label, figure)_____no_output_____
</code>
## References
[1] Subramanian, A., Narayan, R., Corsello, S. M., Peck, D. D., Natoli, T. E., Lu, X., ... & Golub, T. R. (2017). A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell, 171(6), 1437-1452.
[2] Fernandez, N. F., Gundersen, G. W., Rahman, A., Grimes, M. L., Rikova, K., Hornbeck, P., & Ma’ayan, A. (2017). Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data. Scientific data, 4(1), 1-12.
[3] Chen, E. Y., Tan, C. M., Kou, Y., Duan, Q., Wang, Z., Meirelles, G. V., ... & Ma’ayan, A. (2013). Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC bioinformatics, 14(1), 1-14.
[4] Kuleshov, M. V., Jones, M. R., Rouillard, A. D., Fernandez, N. F., Duan, Q., Wang, Z., ... & Ma'ayan, A. (2016). Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic acids research, 44(W1), W90-W97.
[5] Xie, Z., Bailey, A., Kuleshov, M. V., Clarke, D. J., Evangelista, J. E., Jenkins, S. L., ... & Ma'ayan, A. (2021). Gene set knowledge discovery with Enrichr. Current protocols, 1(3), e90.
[6] Kropiwnicki, E., Evangelista, J. E., Stein, D. J., Clarke, D. J., Lachmann, A., Kuleshov, M. V., ... & Ma’ayan, A. (2021). Drugmonizome and Drugmonizome-ML: integration and abstraction of small molecule attributes for drug enrichment analysis and machine learning. Database, 2021.
[7] Maglott, D., Ostell, J., Pruitt, K. D., & Tatusova, T. (2005). Entrez Gene: gene-centered information at NCBI. Nucleic acids research, 33(suppl_1), D54-D58.
[8] Brown, G. R., Hem, V., Katz, K. S., Ovetsky, M., Wallin, C., Ermolaeva, O., ... & Murphy, T. D. (2015). Gene: a gene-centered information resource at NCBI. Nucleic acids research, 43(D1), D36-D42._____no_output_____
| {
"repository": "MaayanLab/jupyter-template-catalog",
"path": "appyters/SigCom_LINCS_Consensus_Appyter/SigCom LINCS Consensus Appyter.ipynb",
"matched_keywords": [
"CRISPR"
],
"stars": null,
"size": 64586,
"hexsha": "cb2b802957fc599bef9d9cbb9600c2c1196beacd",
"max_line_length": 927,
"avg_line_length": 51.0561264822,
"alphanum_fraction": 0.457157898
} |
# Notebook from LungCellAtlas/HLCA_reproducibility
Path: notebooks/1_building_and_annotating_the_atlas_core/03_hvg_selection_log_transf.ipynb
## Highly variable gene selection and log-transformation_____no_output_____In this notebook we select highly variable genes and perform log-transformation of normalized counts for downstream analysis._____no_output_____#### Import modules:_____no_output_____
<code>
import scanpy as sc
import matplotlib.pyplot as plt
import numpy as np_____no_output_____
</code>
#### Set paths:_____no_output_____
<code>
path_input_data = "../../data/HLCA_core_h5ads/HLCA_v1_intermediates/LCA_Bano_Barb_Jain_Kras_Lafy_Meye_Mish_MishBud_Nawi_Seib_Teic_SCRAN_normalized_filt.h5ad"
path_output_data = "../../data/HLCA_core_h5ads/HLCA_v1_intermediates/LCA_Bano_Barb_Jain_Kras_Lafy_Meye_Mish_MishBud_Nawi_Seib_Teic_log1p.h5ad"_____no_output_____
</code>
#### Perform hvg selection:_____no_output_____import data:_____no_output_____
<code>
adata = sc.read(path_input_data)_____no_output_____
</code>
select highly variable genes..._____no_output_____
<code>
# function to calculate variances on *sparse* matrix
def vars(a, axis=None):
""" Variance of sparse matrix a
var = mean(a**2) - mean(a)**2
"""
a_squared = a.copy()
a_squared.data **= 2
return a_squared.mean(axis) - np.square(a.mean(axis))_____no_output_____
</code>
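As a quick sanity check (a minimal sketch with made-up data; `scipy` is assumed to be available, since scanpy depends on it), the sparse computation can be compared against the dense NumPy variance:_____no_output_____
<code>
import scipy.sparse
# Small toy matrix to verify vars() against the dense variance
toy = scipy.sparse.csr_matrix(np.array([[0., 1., 2.], [3., 0., 5.]]))
print(np.allclose(vars(toy, axis=0), toy.toarray().var(axis=0)))  # True_____no_output_____
</code>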
calculate mean, variance, dispersion per gene:_____no_output_____
<code>
means = np.mean(adata.X, axis=0)_____no_output_____variances = vars(adata.X, axis=0)_____no_output_____dispersions = variances / means_____no_output_____
</code>
set min_mean cutoff (base this on the plot). We do not want to include the leftmost noisy genes that have high dispersions due to low means._____no_output_____
<code>
min_mean = 0.06_____no_output_____# plot mean versus dispersion plot:
# now plot
plt.scatter(
np.log1p(means).tolist()[0], np.log(dispersions).tolist()[0], s=2
)
plt.vlines(x=np.log1p(min_mean),ymin=-2,ymax=8,color='red')
plt.xlabel("log1p(mean)")
plt.ylabel("log(dispersion)")
plt.title("DISPERSION VERSUS MEAN")
plt.show()_____no_output_____
</code>
log-transform data:_____no_output_____
<code>
sc.pp.log1p(adata)_____no_output_____
</code>
now calculate highly variable genes:_____no_output_____
<code>
sc.pp.highly_variable_genes(adata, batch_key="dataset",min_mean=min_mean, flavor="cell_ranger",n_top_genes=2000)/home/icb/lisa.sikkema/miniconda3/envs/scRNAseq_analysis/lib/python3.7/site-packages/pandas/core/indexing.py:671: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._setitem_with_indexer(indexer, value)
</code>
check selection of genes:_____no_output_____
<code>
boolean_to_color = {
True: "crimson",
False: "steelblue",
} # make a dictionary that translates the boolean to colors
hvg_colors = adata.var.highly_variable.map(boolean_to_color) # 'convert' the boolean
# now plot
plt.scatter(
np.log1p(means).tolist()[0], np.log(dispersions).tolist()[0], s=1, c=hvg_colors
)
plt.xlabel("log1p(mean)")
plt.ylabel("log(dispersion)")
plt.title("DISPERSION VERSUS MEAN")
plt.show()_____no_output_____
</code>
store_____no_output_____
<code>
adata.write(path_output_data)_____no_output_____
</code>
| {
"repository": "LungCellAtlas/HLCA_reproducibility",
"path": "notebooks/1_building_and_annotating_the_atlas_core/03_hvg_selection_log_transf.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": null,
"size": 70182,
"hexsha": "cb2bee4291d3b8afa3bcc49dd56963a94bf8c2bf",
"max_line_length": 40556,
"avg_line_length": 231.6237623762,
"alphanum_fraction": 0.9251517483
} |
# Notebook from aggle/jwst_validation_notebooks
Path: jwst_validation_notebooks/ami3/run_ami_pipeline.ipynb
<a id="title_ID"></a>
# JWST Pipeline Validation Notebook: AMI3, AMI3 Pipeline
<span style="color:red"> **Instruments Affected**</span>: NIRISS
### Table of Contents
<div style="text-align: left">
<br> [Introduction](#intro)
<br> [JWST CalWG Algorithm](#algorithm)
<br> [Defining Terms](#terms)
<br> [Test Description](#description)
<br> [Data Description](#data_descr)
<br> [Set up Temporary Directory](#tempdir)
<br> [Imports](#imports)
<br> [Loading the Data](#data_load)
<br> [Run the Pipeline](#pipeline)
<br> [Test Results](#testing)
<br> [About This Notebook](#about)
</div>_____no_output_____<a id="intro"></a>
# Introduction
The notebook verifies that pipeline steps from `calwebb_detector1` through `calwebb_image2` and `calwebb_ami3` run without crashing. `calwebb_ami3` is run on various associations of target and calibrator pairs.
For more information on the `calwebb_ami3` pipeline stage visit the links below.
> Stage description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_ami3.html
>
> Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/ami
[Top of Page](#title_ID)_____no_output_____<a id="algorithm"></a>
# JWST CalWG Algorithm
`calwebb_ami3` is based on the `implaneia` algorithm:
> https://github.com/anand0xff/ImPlaneIA/tree/delivery
[Top of Page](#title_ID)_____no_output_____<a id="terms"></a>
# Defining Terms
Calibrator: reference star to measure PSF to calibrate out instrumental contributions to the interferometric observables
PSF: point spread function
Target: source of interest for science program
[Top of Page](#title_ID)_____no_output_____<a id="description"></a>
# Test Description
This test checks that simulated data runs through the `calwebb_detector1`, `calwebb_image2`, and `calwebb_ami3` steps of the pipeline without crashing. Association files are created for the target/calibrator pair at different dither positions. The notebook verifies that the `calwebb_ami3` runs on these association files.
[Top of Page](#title_ID)_____no_output_____<a id="data_descr"></a>
# Data Description
The data for this test are simulated AMI datasets that do not have bad pixels. The simulated source data is AB Dor, which is simulated with a 4-point dither pattern:
| Source | Filename| Dither Position |
|:----------------|:---------|:-----------------|
|AB Dor (Target) |jw01093001001_01101_00005_nis_uncal.fits| 1|
| |jw01093001001_01101_00006_nis_uncal.fits| 2 |
| |jw01093001001_01101_00007_nis_uncal.fits| 3 |
| |jw01093001001_01101_00008_nis_uncal.fits| 4 |
HD 37093 is the PSF reference star, which is also simulated with a 4-point dither pattern.
| Source | Filename| Dither Position |
|:----------------|:---------|:-----------------|
|HD 37093 (Calibrator)| jw01093002001_01101_00005_nis_uncal.fits | 1 |
| |jw01093002001_01101_00006_nis_uncal.fits | 2 |
| |jw01093002001_01101_00007_nis_uncal.fits | 3 |
| |jw01093002001_01101_00008_nis_uncal.fits | 4 |
Configuration files are also needed for the various `calwebb_ami3` steps:
- ami_analyze.cfg
- ami_normalize.cfg
- ami_average.cfg
- calwebb_ami3.cfg
Specific reference files are needed for the analysis, which also do not have bad pixels, and are loaded with this notebook.
[Top of Page](#title_ID)_____no_output_____<a id="tempdir"></a>
# Set up Temporary Directory
[Top of Page](#title_ID)_____no_output_____
<code>
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
odir = data_dir.name
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))_____no_output_____import os
if 'CRDS_CACHE_TYPE' in os.environ:
if os.environ['CRDS_CACHE_TYPE'] == 'local':
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif os.path.isdir(os.environ['CRDS_CACHE_TYPE']):
os.environ['CRDS_PATH'] = os.environ['CRDS_CACHE_TYPE']
print('CRDS cache location: {}'.format(os.environ['CRDS_PATH']))_____no_output_____
</code>
<a id="imports"></a>
# Imports
List the package imports and why they are relevant to this notebook.
* astropy.io for opening fits files
* numpy for working with arrays
* IPython.display for printing markdown output
* jwst.datamodels for building model for JWST Pipeline
* jwst.pipeline.collect_pipeline_cfgs for gathering configuration files
* jwst.pipeline for initiating various pipeline stages
* jwst.ami to call the AMI Analyze step
* jwst.associations for using association files
* from ci_watson.artifactory_helpers import get_bigdata for reading data from Artifactory
[Top of Page](#title_ID)_____no_output_____
<code>
from astropy.io import fits
import numpy as np
from IPython.display import Markdown
from jwst.datamodels import ImageModel
import jwst.datamodels as datamodels
from jwst.pipeline.collect_pipeline_cfgs import collect_pipeline_cfgs
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline, Ami3Pipeline
from jwst.ami import AmiAnalyzeStep
from jwst.associations import asn_from_list
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from ci_watson.artifactory_helpers import get_bigdata_____no_output_____
</code>
<a id="data_load"></a>
# Loading the Data
[Top of Page](#title_ID)_____no_output_____
<code>
# Data files that will be imported by Artifactory:
datafiles = np.array(['jw01093001001_01101_00005_nis_uncal.fits',
'jw01093001001_01101_00006_nis_uncal.fits',
'jw01093001001_01101_00007_nis_uncal.fits',
'jw01093001001_01101_00008_nis_uncal.fits',
'jw01093002001_01101_00005_nis_uncal.fits',
'jw01093002001_01101_00006_nis_uncal.fits',
'jw01093002001_01101_00007_nis_uncal.fits',
'jw01093002001_01101_00008_nis_uncal.fits'])
# Read in reference files needed for this analysis (these don't have bad pixels, like simulations)
superbiasfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_superbias_sim.fits')
darkfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_dark_sub80_sim.fits')
flatfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_flat_general.fits')
# Read in configuration files
ami_analyze_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_analyze.cfg')
ami_normalize_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_normalize.cfg')
ami_average_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_average.cfg')
calwebb_ami3_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'calwebb_ami3.cfg')_____no_output_____
</code>
<a id="pipeline"></a>
# Run the Pipeline
Since this notebook tests whether the pipeline runs on all the datasets, we will run each stage of the pipeline in separate cells. That way, if a step fails, it will be easier to track down at what stage and step this error occurred.
[Top of Page](#title_ID)_____no_output_____## Run Detector1 stage of the pipeline to calibrate \*\_uncal.fits file_____no_output_____
<code>
for file in datafiles:
df = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
file)
# Modify a keyword in each data file: only necessary for now
# Next three lines are temporary to accommodate recent changes to Mirage and pipeline
# and for Mirage to work with the pipeline.
with datamodels.open(df) as model:
model.meta.dither.dither_points = int(model.meta.dither.dither_points)
model.save(df)
# Run Detector1 stage of pipeline, specifying superbias and dark reference files
result1 = Detector1Pipeline()
result1.superbias.override_superbias = superbiasfile
result1.dark_current.override_dark = darkfile
result1.ipc.skip = True
result1.save_results = True
result1.output_dir = odir
result1.run(df)_____no_output_____
</code>
## Run Image2 stage of the pipeline to calibrate \*\_rate.fits file_____no_output_____
<code>
for df in datafiles:
# Run Image2 stage of the pipeline on the file created above to create rate file,
# specifying flat field file
df_rate = os.path.join(odir, os.path.basename(df.replace('uncal','rate')))
result2 = Image2Pipeline()
result2.flat_field.override_flat = flatfile
result2.photom.skip = True
result2.resample.skip = True
result2.save_results = True
result2.output_dir = odir
result2.run(df_rate)_____no_output_____
</code>
## Run Image2 stage of the pipeline to calibrate \*\_rateints.fits file_____no_output_____
<code>
for df in datafiles:
# Run Image stage of the pipeline to create rateints file, specifying flat field file
df_rateints = os.path.join(odir,os.path.basename(df.replace('uncal','rateints')))
result3 = Image2Pipeline()
result3.flat_field.override_flat = flatfile
result3.photom.skip = True
result3.resample.skip = True
result3.save_results = True
result3.output_dir = odir
result3.run(df_rateints) _____no_output_____
</code>
## Run AmiAnalyze step on the \*\_cal.fits files created above_____no_output_____
<code>
for df in datafiles:
# Set up name of calibrated file
df_cal = os.path.join(odir,os.path.basename(df.replace('uncal','cal')))
# Run AMI Analyze Step of the pipeline
result5 = AmiAnalyzeStep.call(df_cal, config_file = ami_analyze_cfg,
output_dir = odir, save_results = True)_____no_output_____
</code>
## Run AmiAnalyze on various target/calibrator pairs
Create association files to test calibration of target at different dither positions. Run AmiAnalyze on these association files.
Note: the `program` and `targ_name` fields in the association files are the same for all pairs, so I have them set as default values in the `create_asn` routine._____no_output_____Routine for creating association files (in \*.json format)_____no_output_____
<code>
def create_asn(outdir, targ1, psf1, prod_name, out_file, asn_id,
program="1093_2_targets_f480m_2022.25coords_pipetesting",
targ_name='t001',
targ2=None, psf2=None):
# Create association file:
asn = asn_from_list.asn_from_list([os.path.join(outdir,targ1)],
product_name = prod_name,
output_file = os.path.join(outdir,out_file),
output_dir = outdir,rule = DMS_Level3_Base)
asn['products'][0]['members'].append({'expname': os.path.join(odir,psf1),
'exptype': 'psf'})
# check whether 2nd set of target/calibrator pairs was inputted
if targ2 is not None:
asn['products'][0]['members'].append({'expname':os.path.join(odir,targ2),
'exptype': 'science'})
asn['products'][0]['members'].append({'expname':os.path.join(odir,psf2),
'exptype': 'psf'})
asn['asn_type'] = 'ami3'
asn['asn_id'] = asn_id
asn['program'] = program
asn['target'] = targ_name
with open(os.path.join(outdir,out_file), 'w') as fp:
fp.write(asn.dump()[1])
fp.close()_____no_output_____
</code>
### Create association files and run AmiAnalyze on these pairs_____no_output_____Association file 1 to calibrate average of targets at dithers 2 and 3 with the average of calibrators at dithers 2 and 3. _____no_output_____
<code>
asn1_file = "ami_asn001_targets23_cals23.json"
targ1 = "jw01093001001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093001001_01101"
asn_id = '001'
# Add second target/calibrator pair at this dither step
targ2 = "jw01093001001_01101_00007_nis_cal.fits"
psf2 = "jw01093002001_01101_00007_nis_cal.fits"
create_asn(odir, targ1, psf1, prod_name, asn1_file, asn_id, targ2=targ2, psf2=psf2)
# Run AmiAnalyze
Ami3Pipeline.call(asn1_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 2 to calibrate target at POS1 with calibrator at POS1_____no_output_____
<code>
# Create association file:
asn2_file = "ami_asn002_calibrate_targ1_cal1.json"
targ1 = "jw01093001001_01101_00005_nis_cal.fits"
psf1 = 'jw01093002001_01101_00005_nis_cal.fits'
prod_name = "jw01093001001_01101_00005cal00005"
asn_id = '002'
create_asn(odir, targ1, psf1,prod_name, asn2_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn2_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 3 to calibrate target at POS2 with calibrator at POS2_____no_output_____
<code>
# Create association file:
asn3_file = "ami_asn003_calibrate_targ2_cal2.json"
targ1 = "jw01093001001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093001001_01101_00006cal00006"
asn_id = '003'
create_asn(odir, targ1, psf1, prod_name, asn3_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn3_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 4 to calibrate target at POS3 with calibrator at POS3_____no_output_____
<code>
# Create association file:
asn4_file = "ami_asn004_calibrate_targ3_cal3.json"
targ1 = "jw01093001001_01101_00007_nis_cal.fits"
psf1 = "jw01093002001_01101_00007_nis_cal.fits"
prod_name = "jw01093001001_01101_00007cal00007"
asn_id = '004'
create_asn(odir, targ1, psf1, prod_name, asn4_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn4_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 5 to calibrate target at POS4 with calibrator at POS4_____no_output_____
<code>
# Create association file:
asn5_file = "ami_asn005_calibrate_targ4_cal4.json"
targ1 = "jw01093001001_01101_00008_nis_cal.fits"
psf1 = "jw01093002001_01101_00008_nis_cal.fits"
prod_name = "jw01093001001_01101_00008cal00008"
asn_id = '005'
create_asn(odir, targ1, psf1, prod_name, asn5_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn5_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 6 to calibrate calibrator at POS2 with calibrator at POS3_____no_output_____
<code>
# Create association file:
asn6_file = "ami_asn006_calibrate_cal2_cal3.json"
targ1 = "jw01093002001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00007_nis_cal.fits"
prod_name = "jw01093002001_01101_00006cal00007"
asn_id = '006'
create_asn(odir, targ1, psf1, prod_name, asn6_file, asn_id, targ_name='t002')
# Run AmiAnalyze
Ami3Pipeline.call(asn6_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
Association file 7 to calibrate calibrator at POS3 with calibrator at POS2_____no_output_____
<code>
# Create association file:
asn7_file = "ami_asn007_calibrate_cal3_cal2.json"
targ1 = "jw01093002001_01101_00007_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093002001_01101_00007cal00006"
asn_id = '007'
create_asn(odir, targ1, psf1, prod_name, asn7_file, asn_id, targ_name='t002')
# Run AmiAnalyze
Ami3Pipeline.call(asn7_file,config_file = calwebb_ami3_cfg,output_dir = odir) _____no_output_____
</code>
<a id="testing"></a>
# Test Results
Did the above cells run without errors? If so, **huzzah** the test passed!
If not, track down why the pipeline failed to run on these datasets.
[Top of Page](#title_ID)_____no_output_____<a id="about_ID"></a>
## About this Notebook
**Authors:** Deepashri Thatte, Senior Staff Scientist, NIRISS
<br>Stephanie LaMassa, Scientist, NIRISS
<br>**Updated On:** 08/04/2021_____no_output_____[Top of Page](#title_ID)
<img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/> _____no_output_____
| {
"repository": "aggle/jwst_validation_notebooks",
"path": "jwst_validation_notebooks/ami3/run_ami_pipeline.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 23994,
"hexsha": "cb2cfe52ab994a61bca75b95090aa2c5bdd33c41",
"max_line_length": 331,
"avg_line_length": 33.325,
"alphanum_fraction": 0.5680170043
} |
# Notebook from Grarck/ML4all
Path: C3.Classification_LogReg/RegresionLogistica_student.ipynb
# Logistic Regression
Notebook version: 2.0 (Nov 21, 2017)
2.1 (Oct 19, 2018)
Author: Jesús Cid Sueiro ([email protected])
Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
v.2.0 - Prepared for Python 3.0 (backward compatible with 2.7)
Assumptions for regression model modified
v.2.1 - Minor changes regarding notation and assumptions_____no_output_____
<code>
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
_____no_output_____
</code>
# Logistic Regression
## 1. Introduction
### 1.1. Binary classification and decision theory. The MAP criterion
The goal of a classification problem is to assign a *class* or *category* to every *instance* or *observation* of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or *decision*. If $y=\hat{y}$, the decision is a *hit*, otherwise $y\neq \hat{y}$ and the decision is an *error*.
_____no_output_____
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select the predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum. Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x}
$$
the optimal decision is obtained if, for every sample ${\bf x}$, we make the decision that minimizes the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{\hat{y} \neq Y |{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{\hat{y} = Y |{\bf x}\} \\
\end{align}_____no_output_____
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually named MAP (*Maximum A Posteriori*). As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems._____no_output_____### 1.2. Parametric classification.
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal D = \{{\bf x}^{(k)}, y^{(k)}\}_{k=0}^{K-1}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,{K-1}\}$ of independent and identically distributed (i.i.d.) samples from an ***unknown*** distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
_____no_output_____
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probability satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector is associated with a different decision maker.
_____no_output_____In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to a certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this lesson, we explore one of the most popular model-based parametric classification methods: **logistic regression**.
<img src="./figs/parametric_decision.png", width=400>_____no_output_____## 2. Logistic regression.
### 2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation ${\bf X}\in \mathbb{R}^N$ satisfies the expressions
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the *logistic* function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
_____no_output_____It is straightforward to see that the logistic function has the following properties:
- **P1**: Probabilistic output: $\quad 0 \le g(t) \le 1$
- **P2**: Symmetry: $\quad g(-t) = 1-g(t)$
- **P3**: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$
In the following we define a logistic function in python, and use it to plot a graphical representation._____no_output_____**Exercise 1**: Verify properties P2 and P3.
**Exercise 2**: Implement a function to compute the logistic function, and use it to plot such function in the interval $[-6,6]$._____no_output_____
<code>
# Define the logistic function
def logistic(t):
#<SOL>
#</SOL>
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$g(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()_____no_output_____
</code>
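For reference, one possible implementation of the logistic function is sketched below; it is only one way to fill the `#<SOL>` block above (Exercise 2 intends you to write it yourself), and it uses only `numpy`, which is already imported._____no_output_____
<code>
# A possible solution sketch for Exercise 2 (one way to fill the <SOL> block above)
def logistic(t):
    # g(t) = 1 / (1 + exp(-t)), evaluated element-wise on numpy arrays
    return 1.0 / (1.0 + np.exp(-t))_____no_output_____
</code>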
### 2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$. _____no_output_____
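Equivalently, in code, the hard MAP decision under this model reduces to a sign test on ${\bf w}^\intercal{\bf x}$. The sketch below uses an illustrative weight vector and two made-up samples stored row-wise in `X_demo` (both are assumptions for illustration only):_____no_output_____
<code>
# Hard MAP decision under the logistic model: y_hat = 1 iff w^T x > 0 (sketch)
w_demo = np.array([4, 8])                       # illustrative weights
X_demo = np.array([[0.5, -0.1], [-0.3, -0.2]])  # two made-up samples, one per row
y_hat = (np.dot(X_demo, w_demo) > 0).astype(int)  # hard decisions, one per sample_____no_output_____
</code>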
<code>
# Weight vector:
w = [4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0]*xx0 + w[1]*xx1)
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()_____no_output_____
</code>
The next code fragment shows the output of the same classifier in the $x_0$-$x_1$ plane, encoding the value of the logistic function in the color map._____no_output_____
<code>
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
### 2.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The decision boundary in that case is given by the equation
$$
{\bf w}^\intercal{\bf z} = 0
$$_____no_output_____**Exercise 2**: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$_____no_output_____
<code>
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
# Z = <FILL IN>
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()_____no_output_____CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
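One possible expression for the missing line (`# Z = <FILL IN>`) in the cell above, consistent with the weight vector `w` defined there, is the following sketch:_____no_output_____
<code>
# A possible solution sketch for the polynomial logistic map above
Z = logistic(w[0] + w[1]*xx0 + w[2]*xx1 + w[3]*xx0**2 + w[4]*xx0*xx1 + w[5]*xx1**2)_____no_output_____
</code>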
## 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times \{0,1\}, k=0,\ldots,{K-1}\}$ to set the parameter vector ${\bf w}$ according to a certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png", width=400>
_____no_output_____We still need to choose a criterion for selecting the parameter vector. In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
* Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
* Maximum *A Posteriori* (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$
_____no_output_____
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})]
= g[-{\bf w}^\intercal{\bf z}({\bf x})]$$
we can write
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$
where $\overline{y} = 2y-1$ is a *symmetrized label* ($\overline{y}\in\{-1, 1\}$). _____no_output_____### 3.1. Model assumptions
We will make the following assumptions:
- **A1**. (Logistic Regression): We assume a logistic model for the *a posteriori* probability of ${Y=1}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})].$$
- **A2**. All samples in ${\mathcal D}$ have been generated by the same distribution, $p_{{\bf X}, Y}({\bf x}, y)$.
- **A3**. Input variables $\bf x$ do not depend on $\bf w$. This implies that
$$p({\bf x}|{\bf w}) = p({\bf x})$$
- **A4**. Targets $y^{(0)}, \cdots, y^{(K-1)}$ are statistically independent given $\bf w$ and the inputs ${\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}$, that is:
$$p(y^{(0)}, \cdots, y^{(K-1)} | {\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) = \prod_{k=0}^{K-1} p(y^{(k)} | {\bf x}^{(k)}, {\bf w})$$
_____no_output_____### 3.2. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$
Using assumptions A2 and A3 above, we have that
\begin{align}
P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y^{(0)}, \cdots, y^{(K-1)},{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)})\end{align}
Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the resolution of the following optimization problem
\begin{align}
\hat {\bf w}_\text{ML} & = \arg \max_{\bf w} p(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \\
& = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w})
\end{align}
where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the **likelihood**, **log-likelihood** $\left[L(\bf w)\right]$, and **negative log-likelihood** $\left[\text{NLL}(\bf w)\right]$, respectively._____no_output_____
Now, using A1 (the logistic model)
\begin{align}
\text{NLL}({\bf w})
&= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\
&= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right]
\end{align}
where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$.
_____no_output_____
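In vectorized numpy, this cost can be evaluated in a single line. The sketch below assumes `Z` stacks the transformed samples ${\bf z}^{(k)}$ row-wise, `y_bar` holds the symmetrized labels $\overline{y}^{(k)}$, and `w` is the weight vector (all three names are assumptions for illustration):_____no_output_____
<code>
# NLL(w) = sum_k log(1 + exp(-y_bar_k * w^T z_k))  (illustrative sketch)
nll = np.sum(np.log(1 + np.exp(-y_bar * np.dot(Z, w))))_____no_output_____
</code>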
It can be shown that $\text{NLL}({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=0}^{K-1}
\frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}}
{1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}
\right)} \\
&= - \sum_{k=0}^{K-1} \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^T {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0
\end{align}
Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be isolated from the above equation, so some iterative optimization algorithm must be used to search for the minimum._____no_output_____### 3.3. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n)
\end{align}
where $\rho_n >0$ is the *learning step*.
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=0}^{K-1} \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)}
\end{align}
_____no_output_____
Defining vectors
\begin{align}
{\bf y} &= [y^{(0)},\ldots,y^{(K-1)}]^\intercal \\
\hat{\bf p}_n &= [g({\bf w}_n^\intercal {\bf z}^{(0)}), \ldots, g({\bf w}_n^\intercal {\bf z}^{(K-1)})]^\intercal
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}^{(0)},\ldots,{\bf z}^{(K-1)}\right]^\intercal
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
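In numpy, one iteration of this vectorized rule takes just two lines (a sketch; `Z`, `y`, `w` and `rho` are assumed to hold the quantities defined above):_____no_output_____
<code>
# One vectorized gradient-descent iteration (illustrative sketch)
p_hat = logistic(np.dot(Z, w))        # \hat{p}_n: posterior estimates for all samples
w = w + rho * np.dot(Z.T, y - p_hat)  # w_{n+1} = w_n + rho * Z^T (y - p_hat)_____no_output_____
</code>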
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset._____no_output_____#### 3.3.1 Example: Iris Dataset.
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (*setosa*, *versicolor* or *virginica*). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
We will try to fit the logistic regression model to discriminate between two classes using only two attributes.
First, we load the dataset and split them in training and test subsets._____no_output_____
<code>
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)Train: 98
Test: 52
</code>
Now, we select two classes and two attributes._____no_output_____
<code>
# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)_____no_output_____
</code>
#### 3.3.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance._____no_output_____
<code>
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx_____no_output_____
</code>
Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set._____no_output_____
<code>
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)_____no_output_____
</code>
The following code generates a plot of the normalized training data._____no_output_____
<code>
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
plt.show()_____no_output_____
</code>
In order to apply the gradient descent rule, we need to define two methods:
- A `fit` method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A `predict` method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions._____no_output_____
<code>
def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric.
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
# Compute negative log-likelihood
# (note that this is not required for the weight update, only for nll tracking)
nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w)).flatten()
# Class
D = [int(round(pn)) for pn in p]
return p, D_____no_output_____
</code>
We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$._____no_output_____
<code>
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])The optimal weights are:
[[-0.06915786]
[ 1.23157846]
[ 0.18660814]]
The final error rates are:
- Training: 0.25757575757575757
- Test: 0.2647058823529412
The NLL after training is 35.901900695
</code>
#### 3.3.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
- Number of iterations
- Initialization
- Learning step_____no_output_____**Exercise**: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code cell inside the loop, with some proper modifications. To plot a histogram of the values in array `p` with `n` bins, you can use `plt.hist(p, n)`_____no_output_____##### 3.3.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence.
_____no_output_____**Exercise 3**: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ marking the boundary between convergence and divergence?_____no_output_____**Exercise 4**: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ on a logarithmic scale. For instance, you can take $\rho = 1, 1/10, 1/100, 1/1000, \ldots$_____no_output_____In practice, the selection of $\rho$ may be a matter of trial and error. There is also some theoretical evidence that the learning step should decrease over time to zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (the steps decay fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too fast)
For instance, we can take $\rho_n = 1/n$. Another common choice is $\rho_n = \alpha/(1+\beta n)$, where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method._____no_output_____#### 3.3.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights._____no_output_____
<code>
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy /400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
#### 3.3.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the `PolynomialFeatures` class in `sklearn.preprocessing`._____no_output_____
<code>
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable)
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])The optimal weights are:
[[ 0.91252707]
[ 0.37799677]
[-0.86858232]
[-2.22819769]
[ 0.31935683]
[-0.7597234 ]
[ 1.62293685]
[-0.59935236]
[ 2.7594932 ]
[ 1.0980507 ]
[ 4.94578133]
[-1.06423937]
[-0.11460838]
[ 2.65741964]
[ 0.85181291]
[-0.96029884]
[-0.930799 ]
[ 1.26419888]
[ 3.67864276]
[-1.17890012]
[ 0.25785336]]
The final error rates are:
- Training: 0.19696969696969696
- Test: 0.4117647058823529
The NLL after training is 28.1853726886
</code>
Visualizing the posterior map, we can see that the polynomial transformation produces nonlinear decision boundaries._____no_output_____
<code>
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.legend(loc='best')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
## 4. Regularization and MAP estimation.
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \\
& = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\
& = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\}
\end{align}_____no_output_____
In the light of this expression, we can conclude that the MAP solution is affected by two terms:
- The likelihood, which takes large values for parameter vectors $\bf w$ that fit well the training data
- The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our *a priori* preference for some solutions. Usually, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (associated with smooth classification borders).
_____no_output_____We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
### 4.1 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ is a zero-mean Gaussian random variable with variance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Noting that
$$\nabla_{\bf w}\left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
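A minimal sketch of one step of this regularized update, reusing the variables of `logregFit` above plus an assumed scalar `C` (the inverse regularization strength, not defined yet at this point):_____no_output_____
<code>
# One gradient-descent step for MAP estimation with a Gaussian prior (sketch)
p1_tr = logistic(np.dot(Z_tr, w))                        # posterior estimates
w = (1 - 2*rho/C)*w + rho*np.dot(Z_tr.T, Y_tr2 - p1_tr)  # shrinkage + data term_____no_output_____
</code>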
_____no_output_____### 4.2 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate is
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
The additional term introduced by the prior in the optimization algorithm is usually named the *regularization term*. It is usually very effective to avoid overfitting when the dimension of the weight vectors is high. Parameter $C$ is named the *inverse regularization strength*._____no_output_____**Exercise 5**: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior._____no_output_____## 5. Other optimization algorithms
### 5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)
\end{align}
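In code, one pass (epoch) of this per-sample update could look like the following sketch; it is only a schematic illustration using the training variables defined earlier, not a full solution of Exercise 6 below:_____no_output_____
<code>
# One epoch of stochastic gradient descent (illustrative sketch)
for k in np.random.permutation(Z_tr.shape[0]):
    z_k = Z_tr[k:k+1, :]                 # k-th sample, kept as a 2-D row
    p_k = logistic(np.dot(z_k, w))       # posterior estimate for this sample
    w += rho * z_k.T * (Y_tr2[k] - p_k)  # per-sample update_____no_output_____
</code>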
Once all samples in the training set have been used, the algorithm can continue by iterating over the training set several times.
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs more iterations to converge._____no_output_____**Exercise 6**: Modify logregFit to implement an algorithm that applies the SGD rule._____no_output_____### 5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\intercal C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\intercal{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> *Hessian* matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_n)^{-1} \nabla_{{\bf w}}C(\hat{\bf w}_n)
$$
_____no_output_____
For instance, for the MAP estimate with Gaussian prior, the *Hessian* matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + \sum_{k=1}^K g({\bf w}^\intercal {\bf z}^{(k)}) \left(1-g({\bf w}^\intercal {\bf z}^{(k)})\right){\bf z}^{(k)} ({\bf z}^{(k)})^\intercal
$$
Defining diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left(g({\bf w}^\intercal {\bf z}^{(k)}) \left(1-g({\bf w}^\intercal {\bf z}^{(k)})\right)\right)
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}({\bf w}) {\bf Z}
$$
Therefore, Newton's algorithm for logistic regression becomes
\begin{align}
\hat{\bf w}_{n+1} = \hat{\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}(\hat{\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
_____no_output_____
<code>
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr_____no_output_____# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable)
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
print('The NLL after training is:', str(nll_tr[len(nll_tr)-1]))_____no_output_____
</code>
## 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm._____no_output_____
<code>
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable)
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)  # keep the mappable for the colorbar below
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()_____no_output_____
</code>
| {
"repository": "Grarck/ML4all",
"path": "C3.Classification_LogReg/RegresionLogistica_student.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 388266,
"hexsha": "cb2fda7565076d25fef8ead9fc3e92ba455475ac",
"max_line_length": 98640,
"avg_line_length": 208.4090177134,
"alphanum_fraction": 0.8868919761
} |
# Notebook from tawabshakeel/DataCamp-Projects
Path: A Network Analysis of Game of Thrones/notebook.ipynb
## 1. Winter is Coming. Let's load the dataset ASAP!
<p>If you haven't heard of <em>Game of Thrones</em>, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series <em>A Song of Ice and Fire</em> by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books. </p>
<p><img src="https://assets.datacamp.com/production/project_76/img/got_network.jpeg" style="width: 550px"></p>
<p>This dataset constitutes a network and is given as a text file describing the <em>edges</em> between characters, with some attributes attached to each edge. Let's start by loading in the data for the first book <em>A Game of Thrones</em> and inspect it.</p>_____no_output_____
<code>
# Importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Reading in datasets/book1.csv
book1 = pd.read_csv('datasets/book1.csv')
# Printing out the head of the dataset
# ... YOUR CODE FOR TASK 1 ...
book1.head()_____no_output_____
</code>
## 2. Time for some Network of Thrones
<p>The resulting DataFrame <code>book1</code> has 5 columns: <code>Source</code>, <code>Target</code>, <code>Type</code>, <code>weight</code>, and <code>book</code>. Source and target are the two nodes that are linked by an edge. A network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number.</p>
<p>Once we have the data loaded as a pandas DataFrame, it's time to create a network. We will use <code>networkx</code>, a network analysis library, and create a graph object for the first book.</p>_____no_output_____
<code>
# Importing modules
# ... YOUR CODE FOR TASK 2 ...
import networkx as nx
# Creating an empty graph object
G_book1 = nx.Graph()_____no_output_____
</code>
## 3. Populate the network with the DataFrame
<p>Currently, the graph object <code>G_book1</code> is empty. Let's now populate it with the edges from <code>book1</code>. And while we're at it, let's load in the rest of the books too!</p>_____no_output_____
<code>
# Iterating through the DataFrame to add edges
# ... YOUR CODE FOR TASK 3 ...
for _, edge in book1.iterrows():
G_book1.add_edge(edge['Source'], edge['Target'], weight=edge['weight'])
# Creating a list of networks for all the books
books = [G_book1]
book_fnames = ['datasets/book2.csv', 'datasets/book3.csv', 'datasets/book4.csv', 'datasets/book5.csv']
for book_fname in book_fnames:
book = pd.read_csv(book_fname)
G_book = nx.Graph()
for _, edge in book.iterrows():
G_book.add_edge(edge['Source'], edge['Target'], weight=edge['weight'])
books.append(G_book)_____no_output_____
</code>
## 4. The most important character in Game of Thrones
<p>Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network science offers us many different metrics to measure the importance of a node in a network. Note that there is no "correct" way of calculating the most important node in a network, every metric has a different meaning.</p>
<p>First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called <em>degree centrality</em>.</p>
<p>Using this measure, let's extract the top ten important characters from the first book (<code>book[0]</code>) and the fifth book (<code>book[4]</code>).</p>_____no_output_____
<code>
# Calculating the degree centrality of book 1
deg_cen_book1 = nx.degree_centrality(books[0])
# Calculating the degree centrality of book 5
deg_cen_book5 = nx.degree_centrality(books[4])
# Sorting the dictionaries according to their degree centrality and storing the top 10
sorted_deg_cen_book1 = sorted(deg_cen_book1.items(), key=lambda x: x[1], reverse=True)[0:10]
# Sorting the dictionaries according to their degree centrality and storing the top 10
sorted_deg_cen_book5 = sorted(deg_cen_book5.items(), key=lambda x: x[1], reverse=True)[0:10]
print(sorted_deg_cen_book1)
print(sorted_deg_cen_book5)
# Printing out the top 10 of book1 and book5
# ... YOUR CODE FOR TASK 4 ...[('Eddard-Stark', 0.3548387096774194), ('Robert-Baratheon', 0.2688172043010753), ('Tyrion-Lannister', 0.24731182795698928), ('Catelyn-Stark', 0.23118279569892475), ('Jon-Snow', 0.19892473118279572), ('Sansa-Stark', 0.18817204301075272), ('Robb-Stark', 0.18817204301075272), ('Bran-Stark', 0.17204301075268819), ('Joffrey-Baratheon', 0.16129032258064518), ('Cersei-Lannister', 0.16129032258064518)]
[('Jon-Snow', 0.1962025316455696), ('Daenerys-Targaryen', 0.18354430379746836), ('Stannis-Baratheon', 0.14873417721518986), ('Theon-Greyjoy', 0.10443037974683544), ('Tyrion-Lannister', 0.10443037974683544), ('Cersei-Lannister', 0.08860759493670886), ('Barristan-Selmy', 0.07911392405063292), ('Hizdahr-zo-Loraq', 0.06962025316455696), ('Asha-Greyjoy', 0.056962025316455694), ('Melisandre', 0.05379746835443038)]
</code>
## 5. The evolution of character importance
<p>According to degree centrality, the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance of characters changes over the course of five books because, you know, stuff happens... ;)</p>
<p>Let's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, and Tyrion, which showed up in the top 10 of degree centrality in the first book.</p>_____no_output_____
<code>
%matplotlib inline
# Creating a list of degree centrality of all the books
evol = [nx.degree_centrality(book) for book in books]
# Creating a DataFrame from the list of degree centralities in all the books
degree_evol_df = pd.DataFrame.from_records(evol)
degree_evol_df[['Eddard-Stark','Tyrion-Lannister','Jon-Snow']].plot()
# Plotting the degree centrality evolution of Eddard-Stark, Tyrion-Lannister and Jon-Snow
# ... YOUR CODE FOR TASK 5 ..._____no_output_____
</code>
## 6. What's up with Stannis Baratheon?
<p>We can see that the importance of Eddard Stark dies off as the book series progresses. With Jon Snow, there is a drop in the fourth book but a sudden rise in the fifth book.</p>
<p>Now let's look at various other measures like <em>betweenness centrality</em> and <em>PageRank</em> to find important characters in our Game of Thrones character co-occurrence network and see if we can uncover some more interesting facts about this network. Let's plot the evolution of betweenness centrality of this network over the five books. We will take the evolution of the top four characters of every book and plot it.</p>_____no_output_____
<code>
# Creating a list of betweenness centrality of all the books just like we did for degree centrality
evol = [nx.betweenness_centrality(book,weight='weight') for book in books]
# Making a DataFrame from the list
betweenness_evol_df = pd.DataFrame.from_records(evol)
# Finding the top 4 characters in every book
set_of_char = set()
for i in range(5):
set_of_char |= set(list(betweenness_evol_df.T[i].sort_values(ascending=False)[0:4].index))
list_of_char = list(set_of_char)
betweenness_evol_df[list_of_char].plot()
# Plotting the evolution of the top characters
# ... YOUR CODE FOR TASK 6 ..._____no_output_____
</code>
## 7. What does Google PageRank tell us about GoT?
<p>We see a peculiar rise in the importance of Stannis Baratheon over the books. In the fifth book, he is significantly more important than other characters in the network, even though he is the third most important character according to degree centrality.</p>
<p>PageRank was the initial way Google ranked web pages. It evaluates the inlinks and outlinks of webpages in the world wide web, which is, essentially, a directed network. Let's look at the importance of characters in the Game of Thrones network according to PageRank. </p>_____no_output_____
<code>
# Creating a list of pagerank of all the characters in all the books
evol = [nx.pagerank(book) for book in books]
# Making a DataFrame from the list
pagerank_evol_df = pd.DataFrame.from_records(evol)
# Finding the top 4 characters in every book
set_of_char = set()
for i in range(5):
set_of_char |= set(list(pagerank_evol_df.T[i].sort_values(ascending=False)[0:4].index))
list_of_char = list(set_of_char)
pagerank_evol_df[list_of_char].plot(figsize=(13, 7))
# Plotting the top characters
# ... YOUR CODE FOR TASK 7 ..._____no_output_____
</code>
## 8. Correlation between different measures
<p>Stannis, Jon Snow, and Daenerys are the most important characters in the fifth book according to PageRank. Eddard Stark follows a similar curve but for degree centrality and betweenness centrality: He is important in the first book but dies into oblivion over the book series.</p>
<p>We have seen three different measures to calculate the importance of a node in a network, and all of them tells us something about the characters and their importance in the co-occurrence network. We see some names pop up in all three measures so maybe there is a strong correlation between them?</p>
<p>Let's look at the correlation between PageRank, betweenness centrality and degree centrality for the fifth book using Pearson correlation.</p>_____no_output_____
<code>
# Creating a list of pagerank, betweenness centrality, degree centrality
# of all the characters in the fifth book.
measures = [nx.pagerank(books[4]),
nx.betweenness_centrality(books[4], weight='weight'),
nx.degree_centrality(books[4])]
# Creating the correlation DataFrame
cor = pd.DataFrame.from_records(measures)
cor.corr()
# Calculating the correlation
# ... YOUR CODE FOR TASK 8 ..._____no_output_____
</code>
## 9. Conclusion
<p>We see a high correlation between these three measures for our character co-occurrence network.</p>
<p>So we've been looking at different ways to find the important characters in the Game of Thrones co-occurrence network. According to degree centrality, Eddard Stark is the most important character initially in the books. But who is/are the most important character(s) in the fifth book according to these three measures? </p>_____no_output_____
<code>
# Finding the most important character in the fifth book,
# according to degree centrality, betweenness centrality and pagerank.
p_rank, b_cent, d_cent = cor.idxmax(axis=1)
# Printing out the top character according to the three measures
print(p_rank, b_cent, d_cent)_____no_output_____
</code>
| {
"repository": "tawabshakeel/DataCamp-Projects",
"path": "A Network Analysis of Game of Thrones/notebook.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 5,
"size": 328665,
"hexsha": "cb335e95e75e610693094cdfc20ad57f5dcf7646",
"max_line_length": 328665,
"avg_line_length": 328665,
"alphanum_fraction": 0.8171451174
} |
# Notebook from BNN-UPC/ignnition
Path: notebooks/shortest_path.ipynb
<a href="https://colab.research.google.com/github/BNN-UPC/ignnition/blob/ignnition-nightly/notebooks/shortest_path.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# IGNNITION: Quick start tutorial
### **Problem**: Find the shortest path in graphs with a Graph Neural Network
Find more details on this quick-start tutorial at:
https://ignnition.net/doc/quick_tutorial/
_____no_output_____---
# Prepare the environment
#### **Note**: Follow the instructions below to finish the installation_____no_output_____
<code>
#@title Installing libraries and load resources
#@markdown ####Hit **"enter"** to complete the installation of libraries
!add-apt-repository ppa:deadsnakes/ppa
!apt-get update
!apt-get install python3.7
!python -m pip install --upgrade pip
!pip install jupyter-client==6.1.5
!pip install ignnition==1.2.2
!pip install ipython-autotime
_____no_output_____#@title Import libraries { form-width: "30%" }
import networkx as nx
import random
import json
from networkx.readwrite import json_graph
import os
import ignnition
%load_ext tensorboard
%load_ext autotime
time: 104 µs (started: 2021-09-29 16:40:02 +00:00)
#@markdown #### Download three YAML files we will need after (train_options.yaml, model_description.yaml, global_variables.yaml)
# Download YAML files for this tutorial
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/train_options.yaml
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/global_variables.yaml
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/model_description.yaml
_____no_output_____#@title Generate the datasets (training and validation)
import os
def generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p):
while True:
# Create a random Erdos Renyi graph
G = nx.erdos_renyi_graph(random.randint(min_nodes, max_nodes), p)
complement = list(nx.k_edge_augmentation(G, k=1, partial=True))
G.add_edges_from(complement)
nx.set_node_attributes(G, 0, 'src-tgt')
nx.set_node_attributes(G, 0, 'sp')
nx.set_node_attributes(G, 'node', 'entity')
# Assign randomly weights to graph edges
for (u, v, w) in G.edges(data=True):
w['weight'] = random.randint(min_edge_weight, max_edge_weight)
# Select a source and target nodes to compute the shortest path
src, tgt = random.sample(list(G.nodes), 2)
G.nodes[src]['src-tgt'] = 1
G.nodes[tgt]['src-tgt'] = 1
# Compute all the shortest paths between source and target nodes
try:
shortest_paths = list(nx.all_shortest_paths(G, source=src, target=tgt,weight='weight'))
except:
shortest_paths = []
# Check if there exists only one shortest path
if len(shortest_paths) == 1:
for node in shortest_paths[0]:
G.nodes[node]['sp'] = 1
return nx.DiGraph(G)
def generate_dataset(file_name, num_samples, min_nodes=5, max_nodes=15, min_edge_weight=1, max_edge_weight=10, p=0.3):
samples = []
for _ in range(num_samples):
G = generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p)
G.remove_nodes_from([node for node, degree in dict(G.degree()).items() if degree == 0])
samples.append(json_graph.node_link_data(G))
with open(file_name, "w") as f:
json.dump(samples, f)
root_dir="./data"
if not os.path.exists(root_dir):
os.makedirs(root_dir)
if not os.path.exists(root_dir+"/train"):
os.makedirs(root_dir+"/train")
if not os.path.exists(root_dir + "/validation"):
os.makedirs(root_dir + "/validation")
generate_dataset("./data/train/data.json", 20000)
generate_dataset("./data/validation/data.json", 1000)_____no_output_____
</code>
---
# GNN model training_____no_output_____
<code>
#@title Remove all the models previously trained (CheckPoints)
#@markdown (It is not needed to execute this the first time)
! rm -r ./CheckPoint
! rm -r ./computational_graphs_____no_output_____#@title Load TensorBoard to visualize the evolution of learning metrics along training
#@markdown **IMPORTANT NOTE**: Click on "settings" in the TensorBoard GUI and check the option "Reload data" to see the evolution in real time. Note you can set the reload time interval (in seconds).
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
dir="./CheckPoint"
if not os.path.exists(dir):
os.makedirs(dir)
%tensorboard --logdir $dir
# To terminate previous TensorBoard instances
# !kill 2953
# !ps aux_____no_output_____#@title Run the training of your GNN model
#@markdown </u>**Note**</u>: You can stop the training whenever you want to continue making predictions below
import ignnition
model = ignnition.create_model(model_dir= './')
model.computational_graph()
model.train_and_validate()
Processing the described model...
---------------------------------------------------------------------------
Creating the GNN model...
---------------------------------------------------------------------------
Generating the computational graph...
---------------------------------------------------------------------------
/content/computational_graphs/experiment_2021_09_29_16_40_27
Starting the training and validation process...
---------------------------------------------------------------------------
Number of devices: 1
Epoch 1/60
200/200 [==============================] - 18s 67ms/step - loss: 0.6197 - binary_accuracy: 0.6652 - precision: 0.3234 - recall: 0.0227 - val_loss: 0.4774 - val_binary_accuracy: 0.8521 - val_precision: 1.0000 - val_recall: 0.5970
Epoch 2/60
200/200 [==============================] - 11s 55ms/step - loss: 0.3923 - binary_accuracy: 0.8768 - precision: 0.9704 - recall: 0.6312 - val_loss: 0.2899 - val_binary_accuracy: 0.8872 - val_precision: 0.9361 - val_recall: 0.7433
Epoch 3/60
200/200 [==============================] - 11s 54ms/step - loss: 0.2391 - binary_accuracy: 0.9063 - precision: 0.8890 - recall: 0.7791 - val_loss: 0.2616 - val_binary_accuracy: 0.8828 - val_precision: 0.9750 - val_recall: 0.6985
Epoch 4/60
200/200 [==============================] - 11s 54ms/step - loss: 0.1912 - binary_accuracy: 0.9248 - precision: 0.9164 - recall: 0.8188 - val_loss: 0.2198 - val_binary_accuracy: 0.9124 - val_precision: 0.9100 - val_recall: 0.8448
Epoch 5/60
200/200 [==============================] - 11s 55ms/step - loss: 0.2172 - binary_accuracy: 0.9123 - precision: 0.9089 - recall: 0.8077 - val_loss: 0.2137 - val_binary_accuracy: 0.9179 - val_precision: 0.8939 - val_recall: 0.8806
Epoch 6/60
200/200 [==============================] - 10s 52ms/step - loss: 0.2128 - binary_accuracy: 0.9126 - precision: 0.8738 - recall: 0.8528 - val_loss: 0.2421 - val_binary_accuracy: 0.9025 - val_precision: 0.9805 - val_recall: 0.7493
Epoch 7/60
200/200 [==============================] - 10s 50ms/step - loss: 0.2011 - binary_accuracy: 0.9218 - precision: 0.9008 - recall: 0.8374 - val_loss: 0.2122 - val_binary_accuracy: 0.9124 - val_precision: 0.9022 - val_recall: 0.8537
Epoch 8/60
200/200 [==============================] - 10s 50ms/step - loss: 0.1970 - binary_accuracy: 0.9247 - precision: 0.8937 - recall: 0.8652 - val_loss: 0.2120 - val_binary_accuracy: 0.9157 - val_precision: 0.9082 - val_recall: 0.8567
Epoch 9/60
200/200 [==============================] - 9s 47ms/step - loss: 0.2141 - binary_accuracy: 0.9207 - precision: 0.9093 - recall: 0.8443 - val_loss: 0.2435 - val_binary_accuracy: 0.8861 - val_precision: 0.9833 - val_recall: 0.7015
Epoch 10/60
200/200 [==============================] - 10s 48ms/step - loss: 0.1778 - binary_accuracy: 0.9284 - precision: 0.9137 - recall: 0.8500 - val_loss: 0.2191 - val_binary_accuracy: 0.9014 - val_precision: 0.9487 - val_recall: 0.7731
Epoch 11/60
200/200 [==============================] - 9s 47ms/step - loss: 0.2096 - binary_accuracy: 0.9123 - precision: 0.8852 - recall: 0.8259 - val_loss: 0.2142 - val_binary_accuracy: 0.9146 - val_precision: 0.8579 - val_recall: 0.9194
Epoch 12/60
200/200 [==============================] - 9s 46ms/step - loss: 0.1921 - binary_accuracy: 0.9241 - precision: 0.8921 - recall: 0.8598 - val_loss: 0.2036 - val_binary_accuracy: 0.9189 - val_precision: 0.8991 - val_recall: 0.8776
Epoch 13/60
200/200 [==============================] - 9s 46ms/step - loss: 0.2240 - binary_accuracy: 0.9159 - precision: 0.8857 - recall: 0.8777 - val_loss: 0.2261 - val_binary_accuracy: 0.9113 - val_precision: 0.8629 - val_recall: 0.9015
Epoch 14/60
200/200 [==============================] - 9s 46ms/step - loss: 0.1691 - binary_accuracy: 0.9361 - precision: 0.9005 - recall: 0.8913 - val_loss: 0.2137 - val_binary_accuracy: 0.9113 - val_precision: 0.8872 - val_recall: 0.8687
Epoch 15/60
200/200 [==============================] - 9s 45ms/step - loss: 0.1764 - binary_accuracy: 0.9354 - precision: 0.9324 - recall: 0.8460 - val_loss: 0.1971 - val_binary_accuracy: 0.9233 - val_precision: 0.8841 - val_recall: 0.9104
Epoch 16/60
200/200 [==============================] - 9s 44ms/step - loss: 0.1600 - binary_accuracy: 0.9353 - precision: 0.9130 - recall: 0.8834 - val_loss: 0.2726 - val_binary_accuracy: 0.8894 - val_precision: 0.9795 - val_recall: 0.7134
Epoch 17/60
200/200 [==============================] - 9s 44ms/step - loss: 0.1968 - binary_accuracy: 0.9193 - precision: 0.9030 - recall: 0.8508 - val_loss: 0.1916 - val_binary_accuracy: 0.9146 - val_precision: 0.9028 - val_recall: 0.8597
Epoch 18/60
200/200 [==============================] - 9s 43ms/step - loss: 0.1860 - binary_accuracy: 0.9270 - precision: 0.9096 - recall: 0.8524 - val_loss: 0.1771 - val_binary_accuracy: 0.9310 - val_precision: 0.9024 - val_recall: 0.9104
Epoch 19/60
200/200 [==============================] - 8s 42ms/step - loss: 0.1921 - binary_accuracy: 0.9195 - precision: 0.8666 - recall: 0.8841 - val_loss: 0.1797 - val_binary_accuracy: 0.9222 - val_precision: 0.9258 - val_recall: 0.8567
Epoch 20/60
200/200 [==============================] - 8s 42ms/step - loss: 0.1585 - binary_accuracy: 0.9305 - precision: 0.9076 - recall: 0.8667 - val_loss: 0.1755 - val_binary_accuracy: 0.9299 - val_precision: 0.9094 - val_recall: 0.8985
Epoch 21/60
200/200 [==============================] - 8s 42ms/step - loss: 0.1698 - binary_accuracy: 0.9282 - precision: 0.9061 - recall: 0.8690 - val_loss: 0.2084 - val_binary_accuracy: 0.8905 - val_precision: 0.9719 - val_recall: 0.7224
Epoch 22/60
200/200 [==============================] - 8s 41ms/step - loss: 0.1597 - binary_accuracy: 0.9287 - precision: 0.9185 - recall: 0.8516 - val_loss: 0.1838 - val_binary_accuracy: 0.9266 - val_precision: 0.9497 - val_recall: 0.8448
Epoch 23/60
200/200 [==============================] - 8s 40ms/step - loss: 0.1858 - binary_accuracy: 0.9298 - precision: 0.9234 - recall: 0.8585 - val_loss: 0.1794 - val_binary_accuracy: 0.9321 - val_precision: 0.9124 - val_recall: 0.9015
Epoch 24/60
200/200 [==============================] - 8s 39ms/step - loss: 0.1377 - binary_accuracy: 0.9436 - precision: 0.9161 - recall: 0.8988 - val_loss: 0.2079 - val_binary_accuracy: 0.8981 - val_precision: 0.9802 - val_recall: 0.7373
Epoch 25/60
200/200 [==============================] - 8s 39ms/step - loss: 0.1571 - binary_accuracy: 0.9269 - precision: 0.9103 - recall: 0.8534 - val_loss: 0.1836 - val_binary_accuracy: 0.9321 - val_precision: 0.8845 - val_recall: 0.9373
Epoch 26/60
200/200 [==============================] - 8s 39ms/step - loss: 0.2113 - binary_accuracy: 0.9023 - precision: 0.8615 - recall: 0.8521 - val_loss: 0.1766 - val_binary_accuracy: 0.9255 - val_precision: 0.8658 - val_recall: 0.9433
Epoch 27/60
200/200 [==============================] - 8s 38ms/step - loss: 0.1590 - binary_accuracy: 0.9388 - precision: 0.9027 - recall: 0.8995 - val_loss: 0.1941 - val_binary_accuracy: 0.9124 - val_precision: 0.9412 - val_recall: 0.8119
Epoch 28/60
200/200 [==============================] - 8s 38ms/step - loss: 0.1843 - binary_accuracy: 0.9162 - precision: 0.8804 - recall: 0.8817 - val_loss: 0.1932 - val_binary_accuracy: 0.9124 - val_precision: 0.9073 - val_recall: 0.8478
Epoch 29/60
200/200 [==============================] - 7s 37ms/step - loss: 0.1696 - binary_accuracy: 0.9321 - precision: 0.9152 - recall: 0.8792 - val_loss: 0.1682 - val_binary_accuracy: 0.9299 - val_precision: 0.8839 - val_recall: 0.9313
Epoch 30/60
200/200 [==============================] - 7s 37ms/step - loss: 0.1491 - binary_accuracy: 0.9361 - precision: 0.8985 - recall: 0.8842 - val_loss: 0.1664 - val_binary_accuracy: 0.9321 - val_precision: 0.9361 - val_recall: 0.8746
Epoch 31/60
200/200 [==============================] - 7s 37ms/step - loss: 0.1512 - binary_accuracy: 0.9334 - precision: 0.9060 - recall: 0.8990 - val_loss: 0.1663 - val_binary_accuracy: 0.9222 - val_precision: 0.9000 - val_recall: 0.8866
Epoch 32/60
200/200 [==============================] - 7s 36ms/step - loss: 0.1951 - binary_accuracy: 0.9063 - precision: 0.8837 - recall: 0.8457 - val_loss: 0.1550 - val_binary_accuracy: 0.9376 - val_precision: 0.9162 - val_recall: 0.9134
Epoch 33/60
200/200 [==============================] - 7s 35ms/step - loss: 0.1204 - binary_accuracy: 0.9512 - precision: 0.9299 - recall: 0.9131 - val_loss: 0.1749 - val_binary_accuracy: 0.9299 - val_precision: 0.8883 - val_recall: 0.9254
Epoch 34/60
200/200 [==============================] - 7s 36ms/step - loss: 0.1350 - binary_accuracy: 0.9415 - precision: 0.9126 - recall: 0.8984 - val_loss: 0.1734 - val_binary_accuracy: 0.9310 - val_precision: 0.8636 - val_recall: 0.9642
Epoch 35/60
200/200 [==============================] - 7s 34ms/step - loss: 0.1680 - binary_accuracy: 0.9222 - precision: 0.8814 - recall: 0.8858 - val_loss: 0.1533 - val_binary_accuracy: 0.9409 - val_precision: 0.9297 - val_recall: 0.9075
Epoch 36/60
200/200 [==============================] - 7s 34ms/step - loss: 0.1573 - binary_accuracy: 0.9404 - precision: 0.9212 - recall: 0.8934 - val_loss: 0.1729 - val_binary_accuracy: 0.9244 - val_precision: 0.9156 - val_recall: 0.8746
Epoch 37/60
200/200 [==============================] - 7s 34ms/step - loss: 0.1486 - binary_accuracy: 0.9414 - precision: 0.9281 - recall: 0.8785 - val_loss: 0.1551 - val_binary_accuracy: 0.9398 - val_precision: 0.9545 - val_recall: 0.8776
Epoch 38/60
200/200 [==============================] - 7s 33ms/step - loss: 0.1593 - binary_accuracy: 0.9318 - precision: 0.9026 - recall: 0.8882 - val_loss: 0.1602 - val_binary_accuracy: 0.9266 - val_precision: 0.9161 - val_recall: 0.8806
Epoch 39/60
200/200 [==============================] - 7s 33ms/step - loss: 0.1580 - binary_accuracy: 0.9311 - precision: 0.8971 - recall: 0.8814 - val_loss: 0.1723 - val_binary_accuracy: 0.9277 - val_precision: 0.8944 - val_recall: 0.9104
Epoch 40/60
200/200 [==============================] - 7s 33ms/step - loss: 0.1659 - binary_accuracy: 0.9246 - precision: 0.8869 - recall: 0.8723 - val_loss: 0.1575 - val_binary_accuracy: 0.9354 - val_precision: 0.9132 - val_recall: 0.9104
Epoch 41/60
200/200 [==============================] - 6s 32ms/step - loss: 0.1279 - binary_accuracy: 0.9481 - precision: 0.9209 - recall: 0.9061 - val_loss: 0.1663 - val_binary_accuracy: 0.9321 - val_precision: 0.8740 - val_recall: 0.9522
Epoch 42/60
200/200 [==============================] - 6s 33ms/step - loss: 0.1592 - binary_accuracy: 0.9309 - precision: 0.8941 - recall: 0.8978 - val_loss: 0.1517 - val_binary_accuracy: 0.9376 - val_precision: 0.9427 - val_recall: 0.8836
Epoch 43/60
200/200 [==============================] - 6s 31ms/step - loss: 0.1339 - binary_accuracy: 0.9417 - precision: 0.9319 - recall: 0.8788 - val_loss: 0.1530 - val_binary_accuracy: 0.9321 - val_precision: 0.9252 - val_recall: 0.8866
Epoch 44/60
200/200 [==============================] - 6s 31ms/step - loss: 0.1570 - binary_accuracy: 0.9273 - precision: 0.9032 - recall: 0.8829 - val_loss: 0.1509 - val_binary_accuracy: 0.9299 - val_precision: 0.9094 - val_recall: 0.8985
Epoch 45/60
200/200 [==============================] - 6s 30ms/step - loss: 0.1390 - binary_accuracy: 0.9383 - precision: 0.9174 - recall: 0.8959 - val_loss: 0.1645 - val_binary_accuracy: 0.9266 - val_precision: 0.9408 - val_recall: 0.8537
Epoch 46/60
200/200 [==============================] - 6s 29ms/step - loss: 0.1411 - binary_accuracy: 0.9381 - precision: 0.9060 - recall: 0.8998 - val_loss: 0.1443 - val_binary_accuracy: 0.9441 - val_precision: 0.9128 - val_recall: 0.9373
Epoch 47/60
200/200 [==============================] - 6s 29ms/step - loss: 0.1311 - binary_accuracy: 0.9404 - precision: 0.9189 - recall: 0.8901 - val_loss: 0.1481 - val_binary_accuracy: 0.9419 - val_precision: 0.9147 - val_recall: 0.9284
Epoch 48/60
200/200 [==============================] - 6s 29ms/step - loss: 0.1400 - binary_accuracy: 0.9479 - precision: 0.9310 - recall: 0.8964 - val_loss: 0.1565 - val_binary_accuracy: 0.9332 - val_precision: 0.8892 - val_recall: 0.9343
Epoch 49/60
200/200 [==============================] - 6s 29ms/step - loss: 0.1343 - binary_accuracy: 0.9472 - precision: 0.9306 - recall: 0.9133 - val_loss: 0.1498 - val_binary_accuracy: 0.9376 - val_precision: 0.9264 - val_recall: 0.9015
Epoch 50/60
200/200 [==============================] - 6s 28ms/step - loss: 0.1013 - binary_accuracy: 0.9508 - precision: 0.9264 - recall: 0.9210 - val_loss: 0.1648 - val_binary_accuracy: 0.9310 - val_precision: 0.8636 - val_recall: 0.9642
Epoch 51/60
200/200 [==============================] - 6s 29ms/step - loss: 0.1530 - binary_accuracy: 0.9347 - precision: 0.8985 - recall: 0.8947 - val_loss: 0.1391 - val_binary_accuracy: 0.9463 - val_precision: 0.9133 - val_recall: 0.9433
Epoch 52/60
200/200 [==============================] - 5s 27ms/step - loss: 0.1455 - binary_accuracy: 0.9370 - precision: 0.8958 - recall: 0.9030 - val_loss: 0.1367 - val_binary_accuracy: 0.9485 - val_precision: 0.9337 - val_recall: 0.9254
Epoch 53/60
200/200 [==============================] - 5s 27ms/step - loss: 0.1180 - binary_accuracy: 0.9522 - precision: 0.9459 - recall: 0.9015 - val_loss: 0.1437 - val_binary_accuracy: 0.9387 - val_precision: 0.9189 - val_recall: 0.9134
Epoch 54/60
200/200 [==============================] - 5s 26ms/step - loss: 0.1351 - binary_accuracy: 0.9449 - precision: 0.9308 - recall: 0.9038 - val_loss: 0.1396 - val_binary_accuracy: 0.9430 - val_precision: 0.9492 - val_recall: 0.8925
Epoch 55/60
200/200 [==============================] - 5s 26ms/step - loss: 0.1435 - binary_accuracy: 0.9435 - precision: 0.9358 - recall: 0.8927 - val_loss: 0.1452 - val_binary_accuracy: 0.9430 - val_precision: 0.9436 - val_recall: 0.8985
Epoch 56/60
200/200 [==============================] - 5s 25ms/step - loss: 0.1497 - binary_accuracy: 0.9365 - precision: 0.9118 - recall: 0.8964 - val_loss: 0.1453 - val_binary_accuracy: 0.9452 - val_precision: 0.9495 - val_recall: 0.8985
Epoch 57/60
200/200 [==============================] - 5s 25ms/step - loss: 0.1148 - binary_accuracy: 0.9584 - precision: 0.9459 - recall: 0.9180 - val_loss: 0.1642 - val_binary_accuracy: 0.9288 - val_precision: 0.8709 - val_recall: 0.9463
Epoch 58/60
200/200 [==============================] - 5s 24ms/step - loss: 0.1478 - binary_accuracy: 0.9390 - precision: 0.9116 - recall: 0.8919 - val_loss: 0.1445 - val_binary_accuracy: 0.9419 - val_precision: 0.9247 - val_recall: 0.9164
Epoch 59/60
200/200 [==============================] - 5s 24ms/step - loss: 0.1533 - binary_accuracy: 0.9276 - precision: 0.8988 - recall: 0.8732 - val_loss: 0.1431 - val_binary_accuracy: 0.9409 - val_precision: 0.9489 - val_recall: 0.8866
Epoch 60/60
200/200 [==============================] - 5s 24ms/step - loss: 0.1056 - binary_accuracy: 0.9616 - precision: 0.9548 - recall: 0.9215 - val_loss: 0.1407 - val_binary_accuracy: 0.9376 - val_precision: 0.9513 - val_recall: 0.8746
time: 7min 56s (started: 2021-09-29 16:40:27 +00:00)
</code>
---
# Make predictions
## (This can only be executed once the training is finished or stopped)_____no_output_____
<code>
#@title Load functions to generate random graphs and print them
import os
import networkx as nx
import matplotlib.pyplot as plt
import json
from networkx.readwrite import json_graph
import ignnition
import numpy as np
import random
%load_ext autotime
def generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p):
while True:
# Create a random Erdos Renyi graph
G = nx.erdos_renyi_graph(random.randint(min_nodes, max_nodes), p)
complement = list(nx.k_edge_augmentation(G, k=1, partial=True))
G.add_edges_from(complement)
nx.set_node_attributes(G, 0, 'src-tgt')
nx.set_node_attributes(G, 0, 'sp')
nx.set_node_attributes(G, 'node', 'entity')
# Assign randomly weights to graph edges
for (u, v, w) in G.edges(data=True):
w['weight'] = random.randint(min_edge_weight, max_edge_weight)
# Select the source and target nodes to compute the shortest path
src, tgt = random.sample(list(G.nodes), 2)
G.nodes[src]['src-tgt'] = 1
G.nodes[tgt]['src-tgt'] = 1
# Compute all the shortest paths between source and target nodes
try:
shortest_paths = list(nx.all_shortest_paths(G, source=src, target=tgt,weight='weight'))
        except nx.NetworkXNoPath:
shortest_paths = []
# Check if there exists only one shortest path
if len(shortest_paths) == 1:
if len(shortest_paths[0])>=3 and len(shortest_paths[0])<=5:
for node in shortest_paths[0]:
G.nodes[node]['sp'] = 1
return shortest_paths[0], nx.DiGraph(G)
def print_graph_predictions(G, path, predictions,ax):
predictions = np.array(predictions)
node_border_colors = []
links = []
for i in range(len(path)-1):
links.append([path[i], path[i+1]])
links.append([path[i+1], path[i]])
# Add colors to node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
# Add colors for predictions [0,1]
node_colors = predictions
# Add colors for edges
edge_colors = []
for edge in G.edges(data=True):
e=[edge[0],edge[1]]
if e in links:
edge_colors.append('red')
else:
edge_colors.append('black')
pos= nx.shell_layout(G)
    # Fix the color scale to the full prediction range [0, 1]
    vmin = 0
    vmax = 1
cmap = plt.cm.coolwarm
nx.draw_networkx_nodes(G, pos=pos, node_color=node_colors, cmap=cmap, vmin=vmin, vmax=vmax,
edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, edge_color=edge_colors, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=vmin, vmax=vmax))
sm.set_array([])
plt.colorbar(sm, ax=ax)
def print_graph_solution(G, path, predictions,ax, pred_th):
predictions = np.array(predictions)
node_colors = []
node_border_colors = []
links = []
for i in range(len(path)-1):
links.append([path[i], path[i+1]])
links.append([path[i+1], path[i]])
# Add colors on node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
# Add colors for predictions Blue or Red
cmap = plt.cm.get_cmap('coolwarm')
dark_red = cmap(1.0)
for p in predictions:
if p >= pred_th:
node_colors.append(dark_red)
else:
node_colors.append('blue')
# Add colors for edges
edge_colors = []
for edge in G.edges(data=True):
e=[edge[0],edge[1]]
if e in links:
edge_colors.append('red')
else:
edge_colors.append('black')
pos= nx.shell_layout(G)
nx.draw_networkx_nodes(G, pos=pos, node_color=node_colors, edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, edge_color=edge_colors, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)
def print_input_graph(G, ax):
node_colors = []
node_border_colors = []
# Add colors to node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
pos= nx.shell_layout(G)
nx.draw_networkx_nodes(G, pos=pos, edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)_____no_output_____#@title Make predictions on random graphs
#@markdown **NOTE**: IGNNITION will automatically load the latest trained model (CheckPoint) to make the predictions
dataset_samples = []
sh_path, G = generate_random_graph(min_nodes=8, max_nodes=12, min_edge_weight=1, max_edge_weight=10, p=0.3)
graph = G.to_undirected()
dataset_samples.append(json_graph.node_link_data(G))
# write prediction dataset
root_dir="./data"
if not os.path.exists(root_dir):
os.makedirs(root_dir)
if not os.path.exists(root_dir+"/test"):
os.makedirs(root_dir+"/test")
with open(root_dir+"/test/data.json", "w") as f:
json.dump(dataset_samples, f)
# Make predictions
predictions = model.predict()
# Print the results
fig, axes = plt.subplots(nrows=1, ncols=3)
ax = axes.flatten()
# Print input graph
ax1 = ax[0]
ax1.set_title("Input graph")
print_input_graph(graph, ax1)
# Print graph with predictions (soft values)
ax1 = ax[1]
ax1.set_title("GNN predictions (soft values)")
print_graph_predictions(graph, sh_path, predictions[0], ax1)
# Print solution of the GNN
pred_th = 0.5
ax1 = ax[2]
ax1.set_title("GNN solution (p >= "+str(pred_th)+")")
print_graph_solution(graph, sh_path, predictions[0], ax1, pred_th)
# Enlarge the figure (rcParams only affect figures created afterwards, so resize this one directly)
fig.set_size_inches(10, 4)
fig.set_dpi(100)
plt.tight_layout()
plt.show()
Starting to make the predictions...
---------------------------------------------------------
</code>
---
# Try to improve your GNN model_____no_output_____**Optional exercise**:
The previous training was executed with some parameters set by default, so the accuracy of the GNN model is far from optimal.
Here, we propose an alternative configuration that defines better training parameters for the GNN model.
For this, you can check and modify the following YAML files to configure your GNN model:
* /content/model_description.yaml -> GNN model description
* /content/train_options.yaml -> Configuration of training parameters
Try to define an optimizer with learning rate decay, and set the number of samples and epochs, by adding the following lines to the train_options.yaml file:
```
optimizer:
type: Adam
learning_rate: # define a schedule
type: ExponentialDecay
initial_learning_rate: 0.001
decay_steps: 10000
decay_rate: 0.5
...
batch_size: 1
epochs: 150
epoch_size: 200
```
Then, you can train a new model from scratch by executing all the code snippets from section "GNN model training".
Please note that the training process may take quite a long time depending on the machine where it is executed.
In this example, there are a total of 30,000 training samples:
1 sample/step * 200 steps/epoch * 150 epochs = 30,000 samples_____no_output_____
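For intuition, this schedule halves the learning rate every 10,000 training steps. Assuming the YAML maps onto standard `tf.keras` objects (an illustrative assumption, not a statement about IGNNITION internals), the equivalent construction in plain TensorFlow would be:
```python
import tensorflow as tf

# Assumed equivalent of the YAML above: Adam with an exponentially
# decaying learning rate (halved every 10,000 steps, starting at 1e-3).
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,
    decay_steps=10000,
    decay_rate=0.5)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

# Sanity check: after 20,000 steps the rate is 0.001 * 0.5**2 = 0.00025
print(float(schedule(20000)))
```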
| {
"repository": "BNN-UPC/ignnition",
"path": "notebooks/shortest_path.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 18,
"size": 206741,
"hexsha": "cb337782fb39426ea1acbadb41290ed1206bf7f1",
"max_line_length": 167690,
"avg_line_length": 290.3665730337,
"alphanum_fraction": 0.8815619543
} |
# Notebook from daniel-koehn/Differential-equations-earth-system
Path: 03_Lorenz_equations/03_LorenzEquations_fdsolve.ipynb
###### Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Daniel Koehn based on Jupyter notebooks by Marc Spiegelman [Dynamical Systems APMA 4101](https://github.com/mspieg/dynamical-systems) and Kyle Mandli from his course [Introduction to numerical methods](https://github.com/mandli/intro-numerical-methods), notebook style sheet by L.A. Barba, N.C. Clementi [Engineering Computations](https://github.com/engineersCode)_____no_output_____
<code>
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())_____no_output_____
</code>
# Exploring the Lorenz Equations
The Lorenz Equations are a 3-D dynamical system that is a simplified model of Rayleigh-Benard thermal convection. They are derived and described in detail in Edward Lorenz' 1963 paper [Deterministic Nonperiodic Flow](http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2) in the Journal of Atmospheric Science. In their classical form they can be written
\begin{equation}
\begin{split}
\frac{\partial X}{\partial t} &= \sigma( Y - X)\\
\frac{\partial Y}{\partial t} &= rX - Y - XZ \\
\frac{\partial Z}{\partial t} &= XY -b Z
\end{split}
\tag{1}
\end{equation}
where $\sigma$ is the "Prandtl number", $r = \mathrm{Ra}/\mathrm{Ra}_c$ is a scaled "Rayleigh number" and $b$ is a parameter that is related to the aspect ratio of a convecting cell in the original derivation.
Here, $X(t)$, $Y(t)$ and $Z(t)$ are the time dependent amplitudes of the streamfunction and temperature fields, expanded in a highly truncated Fourier Series where the streamfunction contains one cellular mode
$$
\psi(x,z,t) = X(t)\sin(a\pi x)\sin(\pi z)
$$
and temperature has two modes
$$
\theta(x,z,t) = Y(t)\cos(a\pi x)\sin(\pi z) - Z(t)\sin(2\pi z)
$$
This Jupyter notebook will provide some simple Python routines for numerical integration and visualization of the Lorenz Equations._____no_output_____## Numerical solution of the Lorenz Equations
We have to solve the coupled ordinary differential equations (1) using the finite difference method introduced in [this lecture](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/1_fd_intro.ipynb).
The approach is similar to the one used in [Exercise: How to sail without wind](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/3_fd_ODE_example_sailing_wo_wind.ipynb), except that eqs.(1) are coupled ordinary differential equations, we have an additional differential equation and the RHS are more complex.
Approximating the temporal derivatives in eqs. (1) using the **backward FD operator**
\begin{equation}
\frac{df}{dt} = \frac{f(t)-f(t-dt)}{dt} \notag
\end{equation}
with the time sample interval $dt$ leads to
\begin{equation}
\begin{split}
\frac{X(t)-X(t-dt)}{dt} &= \sigma(Y - X)\\
\frac{Y(t)-Y(t-dt)}{dt} &= rX - Y - XZ\\
\frac{Z(t)-Z(t-dt)}{dt} &= XY -b Z\\
\end{split}
\notag
\end{equation}
After solving for $X(t), Y(t), Z(t)$, we get the **explicit time integration scheme** for the Lorenz equations:
\begin{equation}
\begin{split}
X(t) &= X(t-dt) + dt\; \sigma(Y - X)\\
Y(t) &= Y(t-dt) + dt\; (rX - Y - XZ)\\
Z(t) &= Z(t-dt) + dt\; (XY -b Z)\\
\end{split}
\notag
\end{equation}
and by introducing a temporal discretization $t^n = n \cdot dt$ with $n \in [0,1,...,nt]$, where $nt$ denotes the number of time steps, the final FD scheme becomes:
\begin{equation}
\begin{split}
X^{n} &= X^{n-1} + dt\; \sigma(Y^{n-1} - X^{n-1})\\
Y^{n} &= Y^{n-1} + dt\; (rX^{n-1} - Y^{n-1} - X^{n-1}Z^{n-1})\\
Z^{n} &= Z^{n-1} + dt\; (X^{n-1}Y^{n-1} - b Z^{n-1})\\
\end{split}
\tag{2}
\end{equation}
The Python implementation is quite straightforward, because we can reuse some old code ...
##### Exercise 1
Finish the function `Lorenz`, which computes and returns the RHS of eqs. (1) for a given $X$, $Y$, $Z$._____no_output_____
<code>
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D_____no_output_____def Lorenz(X,Y,Z,sigma,r,b):
'''
Returns the RHS of the Lorenz equations
'''
# ADD RHS OF LORENZ EQUATIONS (1) HERE!
X_dot_rhs =
Y_dot_rhs =
Z_dot_rhs =
# return the state derivatives
return X_dot_rhs, Y_dot_rhs, Z_dot_rhs_____no_output_____
</code>
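If you want to check your result for Exercise 1: a direct transcription of eqs. (1) into the placeholders above reads (one possible completion, shown here for reference only):
```python
def Lorenz(X, Y, Z, sigma, r, b):
    '''
    Returns the RHS of the Lorenz equations (1)
    '''
    X_dot_rhs = sigma * (Y - X)      # dX/dt = sigma (Y - X)
    Y_dot_rhs = r * X - Y - X * Z    # dY/dt = r X - Y - X Z
    Z_dot_rhs = X * Y - b * Z        # dZ/dt = X Y - b Z
    # return the state derivatives
    return X_dot_rhs, Y_dot_rhs, Z_dot_rhs
```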
Next, we write the function `SolveLorenz` to solve the Lorenz equations, based on the `sailing_boring` code from the [Exercise: How to sail without wind](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/3_fd_ODE_example_sailing_wo_wind.ipynb)
##### Exercise 2
Finish the FD-code implementation `SolveLorenz`_____no_output_____
<code>
def SolveLorenz(tmax, dt, X0, Y0, Z0, sigma=10.,r=28.,b=8./3.0):
'''
Integrate the Lorenz equations from initial condition (X0,Y0,Z0)^T at t=0
for parameters sigma, r, b
Returns: X, Y, Z, time
'''
# Compute number of time steps based on tmax and dt
nt = (int)(tmax/dt)
# vectors for storage of X, Y, Z positions and time t
X = np.zeros(nt + 1)
Y = np.zeros(nt + 1)
Z = np.zeros(nt + 1)
t = np.zeros(nt + 1)
# define initial position and time
X[0] = X0
Y[0] = Y0
Z[0] = Z0
# start time stepping over time samples n
for n in range(1,nt + 1):
# compute RHS of Lorenz eqs. (1) at current position (X,Y,Z)^T
X_dot_rhs, Y_dot_rhs, Z_dot_rhs = Lorenz(X[n-1],Y[n-1],Z[n-1],sigma,r,b)
# compute new position using FD approximation of time derivative
# ADD FD SCHEME OF THE LORENZ EQS. HERE!
X[n] =
Y[n] =
Z[n] =
t[n] = n * dt
return X, Y, Z, t_____no_output_____
</code>
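Similarly, for self-checking Exercise 2: the missing update step in `SolveLorenz` is a one-to-one transcription of the explicit scheme (2):
```python
        # compute new position using the FD approximation of the time derivative
        X[n] = X[n-1] + dt * X_dot_rhs
        Y[n] = Y[n-1] + dt * Y_dot_rhs
        Z[n] = Z[n-1] + dt * Z_dot_rhs
```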
Finally, we create a function to plot the solution (X,Y,Z)^T of the Lorenz eqs. ..._____no_output_____
<code>
def PlotLorenzXvT(X,Y,Z,t,sigma,r,b):
'''
Create time series plots of solutions of the Lorenz equations X(t),Y(t),Z(t)
'''
plt.figure()
ax = plt.subplot(111)
ax.plot(t,X,'r',label='X')
ax.plot(t,Y,'g',label='Y')
ax.plot(t,Z,'b',label='Z')
ax.set_xlabel('time t')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
# Shrink current axis's height by 10% on the bottom
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
# Put a legend below current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=3)
plt.show()_____no_output_____
</code>
... and a function to plot the trajectory in the **phase space portrait**:_____no_output_____
<code>
def PlotLorenz3D(X,Y,Z,sigma,r,b):
'''
Show 3-D Phase portrait using mplot3D
'''
# do some fancy 3D plotting
fig = plt.figure()
    ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is deprecated in newer matplotlib
ax.plot(X,Y,Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
plt.show()_____no_output_____
</code>
##### Exercise 3
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=0.5$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results._____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)_____no_output_____
</code>
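For reference, the placeholders in the cell above follow directly from the Exercise 3 statement (the later exercises use the same pattern with their stated Rayleigh numbers and initial conditions):
```python
sigma = 10.               # Prandtl number
b = 8. / 3.
X0, Y0, Z0 = 2., 3., 4.   # initial condition (X_0, Y_0, Z_0)^T
r = 0.5                   # scaled Rayleigh number
```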
##### Exercise 4
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=10$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results._____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)_____no_output_____
</code>
##### Exercise 5
Solve the Lorenz equations again for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=10$, this time starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(-2,-3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. How does the solution change compared to exercise 4?_____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)_____no_output_____
</code>
##### Exercise 6
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=28$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. Compare with the previous results._____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)_____no_output_____
</code>
##### Exercise 7
In his 1963 paper Lorenz also investigated the influence of small changes of the initial conditions on the long-term evolution of the thermal convection problem for large Rayleigh numbers.
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=28$, this time starting from the slightly perturbed initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3.001,4)^T$. Plot the temporal evolution and compare with the solution of exercise 6. Describe and interpret the results.
Explain why Lorenz introduced the term **Butterfly effect** based on your results._____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X1, Y1, Z1, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize differences as a time series
PlotLorenzXvT(X-X1,Y-Y1,Z-Z1,t,sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X1,Y1,Z1,t,sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)_____no_output_____
</code>
##### Exercise 8
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=350$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. Compare with the previous result from exercise 6._____no_output_____
<code>
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 8.
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)_____no_output_____
</code>
## What we learned:
- How to solve the Lorenz equations using a simple finite-difference scheme.
- How to visualize the solution of ordinary differential equations using the temporal evolution and phase portrait.
- Exploring the dynamics of non-linear differential equations and the sensitivity of the system's long-term evolution to small changes in the initial conditions.
- Why physicists can only predict the time evolution of complex dynamical systems to some extent._____no_output_____
| {
"repository": "daniel-koehn/Differential-equations-earth-system",
"path": "03_Lorenz_equations/03_LorenzEquations_fdsolve.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 30,
"size": 25777,
"hexsha": "cb3448d410839139690980c2784f852304c60f8c",
"max_line_length": 656,
"avg_line_length": 35.4566712517,
"alphanum_fraction": 0.5357101292
} |
# Notebook from ranstotz/data-eng-nanodegree
Path: course_materials/project_03_data_warehouses/L3 Exercise 4 - Table Design - Solution.ipynb
# Exercise 4: Optimizing Redshift Table Design_____no_output_____
<code>
%load_ext sql_____no_output_____from time import time
import configparser
import matplotlib.pyplot as plt
import pandas as pd_____no_output_____config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY=config.get('AWS','key')
SECRET= config.get('AWS','secret')
DWH_DB= config.get("DWH","DWH_DB")
DWH_DB_USER= config.get("DWH","DWH_DB_USER")
DWH_DB_PASSWORD= config.get("DWH","DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH","DWH_PORT")
_____no_output_____
</code>
# STEP 1: Get the params of the created redshift cluster
- We need:
- The redshift cluster <font color='red'>endpoint</font>
- The <font color='red'>IAM role ARN</font> that gives Redshift access to read from S3_____no_output_____
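Alternatively, if the AWS credentials loaded from `dwh.cfg` above are still valid, both values can be looked up programmatically. A minimal `boto3` sketch, assuming the cluster identifier `dwhcluster` and role name `dwhRole` visible in the filled-in values below (adjust to your own names):
```python
import boto3

# KEY and SECRET were parsed from dwh.cfg earlier in this notebook
redshift = boto3.client('redshift', region_name="us-west-2",
                        aws_access_key_id=KEY, aws_secret_access_key=SECRET)
iam = boto3.client('iam', region_name="us-west-2",
                   aws_access_key_id=KEY, aws_secret_access_key=SECRET)

# Endpoint of the running cluster (identifier 'dwhcluster' is an assumption)
cluster_props = redshift.describe_clusters(ClusterIdentifier="dwhcluster")['Clusters'][0]
DWH_ENDPOINT = cluster_props['Endpoint']['Address']
# ARN of the IAM role created in the previous exercise ('dwhRole' is an assumption)
DWH_ROLE_ARN = iam.get_role(RoleName="dwhRole")['Role']['Arn']
```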
<code>
# FILL IN THE REDSHIFT ENDPOINT HERE
# e.g. DWH_ENDPOINT="redshift-cluster-1.csmamz5zxmle.us-west-2.redshift.amazonaws.com"
DWH_ENDPOINT="dwhcluster.csmamz5zxmle.us-west-2.redshift.amazonaws.com"
#FILL IN THE IAM ROLE ARN you got in step 2.2 of the previous exercise
#e.g DWH_ROLE_ARN="arn:aws:iam::988332130976:role/dwhRole"
DWH_ROLE_ARN="arn:aws:iam::988332130976:role/dwhRole"_____no_output_____
</code>
# STEP 2: Connect to the Redshift Cluster_____no_output_____
<code>
import os
conn_string="postgresql://{}:{}@{}:{}/{}".format(DWH_DB_USER, DWH_DB_PASSWORD, DWH_ENDPOINT, DWH_PORT,DWH_DB)
print(conn_string)
%sql $conn_string_____no_output_____
</code>
# STEP 3: Create Tables
- We are going to use a benchmarking data set common for benchmarking star schemas in data warehouses.
- The data is pre-loaded in a public bucket on the `us-west-2` region
- Our examples will be based on the Amazon Redshift tutorial but in a scripted environment in our workspace.

_____no_output_____## 3.1 Create tables (no distribution strategy) in the `nodist` schema_____no_output_____
<code>
%%sql
CREATE SCHEMA IF NOT EXISTS nodist;
SET search_path TO nodist;
DROP TABLE IF EXISTS part cascade;
DROP TABLE IF EXISTS supplier;
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS dwdate;
DROP TABLE IF EXISTS lineorder;
CREATE TABLE part
(
p_partkey INTEGER NOT NULL,
p_name VARCHAR(22) NOT NULL,
p_mfgr VARCHAR(6) NOT NULL,
p_category VARCHAR(7) NOT NULL,
p_brand1 VARCHAR(9) NOT NULL,
p_color VARCHAR(11) NOT NULL,
p_type VARCHAR(25) NOT NULL,
p_size INTEGER NOT NULL,
p_container VARCHAR(10) NOT NULL
);
CREATE TABLE supplier
(
s_suppkey INTEGER NOT NULL,
s_name VARCHAR(25) NOT NULL,
s_address VARCHAR(25) NOT NULL,
s_city VARCHAR(10) NOT NULL,
s_nation VARCHAR(15) NOT NULL,
s_region VARCHAR(12) NOT NULL,
s_phone VARCHAR(15) NOT NULL
);
CREATE TABLE customer
(
c_custkey INTEGER NOT NULL,
c_name VARCHAR(25) NOT NULL,
c_address VARCHAR(25) NOT NULL,
c_city VARCHAR(10) NOT NULL,
c_nation VARCHAR(15) NOT NULL,
c_region VARCHAR(12) NOT NULL,
c_phone VARCHAR(15) NOT NULL,
c_mktsegment VARCHAR(10) NOT NULL
);
CREATE TABLE dwdate
(
d_datekey INTEGER NOT NULL,
d_date VARCHAR(19) NOT NULL,
d_dayofweek VARCHAR(10) NOT NULL,
d_month VARCHAR(10) NOT NULL,
d_year INTEGER NOT NULL,
d_yearmonthnum INTEGER NOT NULL,
d_yearmonth VARCHAR(8) NOT NULL,
d_daynuminweek INTEGER NOT NULL,
d_daynuminmonth INTEGER NOT NULL,
d_daynuminyear INTEGER NOT NULL,
d_monthnuminyear INTEGER NOT NULL,
d_weeknuminyear INTEGER NOT NULL,
d_sellingseason VARCHAR(13) NOT NULL,
d_lastdayinweekfl VARCHAR(1) NOT NULL,
d_lastdayinmonthfl VARCHAR(1) NOT NULL,
d_holidayfl VARCHAR(1) NOT NULL,
d_weekdayfl VARCHAR(1) NOT NULL
);
CREATE TABLE lineorder
(
lo_orderkey INTEGER NOT NULL,
lo_linenumber INTEGER NOT NULL,
lo_custkey INTEGER NOT NULL,
lo_partkey INTEGER NOT NULL,
lo_suppkey INTEGER NOT NULL,
lo_orderdate INTEGER NOT NULL,
lo_orderpriority VARCHAR(15) NOT NULL,
lo_shippriority VARCHAR(1) NOT NULL,
lo_quantity INTEGER NOT NULL,
lo_extendedprice INTEGER NOT NULL,
lo_ordertotalprice INTEGER NOT NULL,
lo_discount INTEGER NOT NULL,
lo_revenue INTEGER NOT NULL,
lo_supplycost INTEGER NOT NULL,
lo_tax INTEGER NOT NULL,
lo_commitdate INTEGER NOT NULL,
lo_shipmode VARCHAR(10) NOT NULL
);_____no_output_____
</code>
## 3.2 Create tables (with a distribution strategy) in the `dist` schema_____no_output_____
<code>
%%sql
CREATE SCHEMA IF NOT EXISTS dist;
SET search_path TO dist;
DROP TABLE IF EXISTS part cascade;
DROP TABLE IF EXISTS supplier;
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS dwdate;
DROP TABLE IF EXISTS lineorder;
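-- Distribution strategy notes (see the column attributes below):
--  * lineorder (the fact table) and part share a DISTKEY on the part key, so
--    rows joined on lo_partkey = p_partkey are co-located on the same slice.
--  * The smaller dimension tables use DISTSTYLE ALL, i.e. a full copy on every
--    node, so joins against them never require network redistribution.
--  * SORTKEYs on the common join/filter columns (e.g. lo_orderdate) let
--    Redshift skip disk blocks during scans.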
CREATE TABLE part (
p_partkey integer not null sortkey distkey,
p_name varchar(22) not null,
p_mfgr varchar(6) not null,
p_category varchar(7) not null,
p_brand1 varchar(9) not null,
p_color varchar(11) not null,
p_type varchar(25) not null,
p_size integer not null,
p_container varchar(10) not null
);
CREATE TABLE supplier (
s_suppkey integer not null sortkey,
s_name varchar(25) not null,
s_address varchar(25) not null,
s_city varchar(10) not null,
s_nation varchar(15) not null,
s_region varchar(12) not null,
s_phone varchar(15) not null)
diststyle all;
CREATE TABLE customer (
c_custkey integer not null sortkey,
c_name varchar(25) not null,
c_address varchar(25) not null,
c_city varchar(10) not null,
c_nation varchar(15) not null,
c_region varchar(12) not null,
c_phone varchar(15) not null,
c_mktsegment varchar(10) not null)
diststyle all;
CREATE TABLE dwdate (
d_datekey integer not null sortkey,
d_date varchar(19) not null,
d_dayofweek varchar(10) not null,
d_month varchar(10) not null,
d_year integer not null,
d_yearmonthnum integer not null,
d_yearmonth varchar(8) not null,
d_daynuminweek integer not null,
d_daynuminmonth integer not null,
d_daynuminyear integer not null,
d_monthnuminyear integer not null,
d_weeknuminyear integer not null,
d_sellingseason varchar(13) not null,
d_lastdayinweekfl varchar(1) not null,
d_lastdayinmonthfl varchar(1) not null,
d_holidayfl varchar(1) not null,
d_weekdayfl varchar(1) not null)
diststyle all;
CREATE TABLE lineorder (
lo_orderkey integer not null,
lo_linenumber integer not null,
lo_custkey integer not null,
lo_partkey integer not null distkey,
lo_suppkey integer not null,
lo_orderdate integer not null sortkey,
lo_orderpriority varchar(15) not null,
lo_shippriority varchar(1) not null,
lo_quantity integer not null,
lo_extendedprice integer not null,
lo_ordertotalprice integer not null,
lo_discount integer not null,
lo_revenue integer not null,
lo_supplycost integer not null,
lo_tax integer not null,
lo_commitdate integer not null,
lo_shipmode varchar(10) not null
);_____no_output_____
</code>
# STEP 4: Copying tables
Our intent here is to run 5 COPY operations, one for each of the 5 tables, as shown below.
However, we also want to accomplish the following:
- Make sure that the `DWH_ROLE_ARN` is substituted with the correct value in each query
- Perform the data loading twice, once for each schema (dist and nodist)
- Collect timing statistics to compare the insertion times
Thus, we have scripted the insertion in the function `loadTables` below, which
returns a pandas dataframe containing timing statistics for the copy operations.
```sql
copy customer from 's3://awssampledbuswest2/ssbgz/customer'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
copy dwdate from 's3://awssampledbuswest2/ssbgz/dwdate'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
copy lineorder from 's3://awssampledbuswest2/ssbgz/lineorder'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
copy part from 's3://awssampledbuswest2/ssbgz/part'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
copy supplier from 's3://awssampledbuswest2/ssbgz/supplier'
credentials 'aws_iam_role=<DWH_ROLE_ARN>'
gzip region 'us-west-2';
```
_____no_output_____## 4.1 Automate the copying_____no_output_____
<code>
def loadTables(schema, tables):
loadTimes = []
    SQL_SET_SCHEMA = "SET search_path TO {};".format(schema)
    %sql $SQL_SET_SCHEMA
for table in tables:
SQL_COPY = """
copy {} from 's3://awssampledbuswest2/ssbgz/{}'
credentials 'aws_iam_role={}'
gzip region 'us-west-2';
""".format(table,table, DWH_ROLE_ARN)
print("======= LOADING TABLE: ** {} ** IN SCHEMA ==> {} =======".format(table, schema))
print(SQL_COPY)
t0 = time()
%sql $SQL_COPY
loadTime = time()-t0
loadTimes.append(loadTime)
print("=== DONE IN: {0:.2f} sec\n".format(loadTime))
return pd.DataFrame({"table":tables, "loadtime_"+schema:loadTimes}).set_index('table')_____no_output_____#-- List of the tables to be loaded
tables = ["customer","dwdate","supplier", "part", "lineorder"]
#-- Insertion twice for each schema (WARNING!! EACH CAN TAKE MORE THAN 10 MINUTES!!!)
nodistStats = loadTables("nodist", tables)
distStats = loadTables("dist", tables)_____no_output_____
</code>
## 4.2 Compare the load performance results_____no_output_____
<code>
#-- Plotting of the timing results
stats = distStats.join(nodistStats)
stats.plot.bar()
plt.show()_____no_output_____
</code>
# STEP 5: Compare Query Performance_____no_output_____
<code>
oneDim_SQL ="""
set enable_result_cache_for_session to off;
SET search_path TO {};
select sum(lo_extendedprice*lo_discount) as revenue
from lineorder, dwdate
where lo_orderdate = d_datekey
and d_year = 1997
and lo_discount between 1 and 3
and lo_quantity < 24;
"""
twoDim_SQL="""
set enable_result_cache_for_session to off;
SET search_path TO {};
select sum(lo_revenue), d_year, p_brand1
from lineorder, dwdate, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
"""
drill_SQL = """
set enable_result_cache_for_session to off;
SET search_path TO {};
select c_city, s_city, d_year, sum(lo_revenue) as revenue
from customer, lineorder, supplier, dwdate
where lo_custkey = c_custkey
and lo_suppkey = s_suppkey
and lo_orderdate = d_datekey
and (c_city='UNITED KI1' or
c_city='UNITED KI5')
and (s_city='UNITED KI1' or
s_city='UNITED KI5')
and d_yearmonth = 'Dec1997'
group by c_city, s_city, d_year
order by d_year asc, revenue desc;
"""
oneDimSameDist_SQL ="""
set enable_result_cache_for_session to off;
SET search_path TO {};
select lo_orderdate, sum(lo_extendedprice*lo_discount) as revenue
from lineorder, part
where lo_partkey = p_partkey
group by lo_orderdate
order by lo_orderdate
"""
def compareQueryTimes(schema):
queryTimes =[]
for i,query in enumerate([oneDim_SQL, twoDim_SQL, drill_SQL, oneDimSameDist_SQL]):
t0 = time()
q = query.format(schema)
%sql $q
queryTime = time()-t0
queryTimes.append(queryTime)
return pd.DataFrame({"query":["oneDim","twoDim", "drill", "oneDimSameDist"], "queryTime_"+schema:queryTimes}).set_index('query')_____no_output_____noDistQueryTimes = compareQueryTimes("nodist")
distQueryTimes = compareQueryTimes("dist") _____no_output_____queryTimeDF =noDistQueryTimes.join(distQueryTimes)
queryTimeDF.plot.bar()
plt.show()_____no_output_____# Relative improvement (%) of the dist schema over nodist, per query
queryTimeDF["distImprovement"] = 100.0*(queryTimeDF['queryTime_nodist']-queryTimeDF['queryTime_dist'])/queryTimeDF['queryTime_nodist']
queryTimeDF["distImprovement"].plot.bar(title="% dist Improvement by query")
plt.show()_____no_output_____
</code>
| {
"repository": "ranstotz/data-eng-nanodegree",
"path": "course_materials/project_03_data_warehouses/L3 Exercise 4 - Table Design - Solution.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 18327,
"hexsha": "cb3571fc3ee90910c16da0b7f83278d70ec5a645",
"max_line_length": 160,
"avg_line_length": 33.2010869565,
"alphanum_fraction": 0.5482621269
} |
# Notebook from fakecoinbase/sweetpandslashAlgorithms
Path: NLP Data Prep.ipynb
<code>
from fastai.text import *_____no_output_____from fastai.tabular import *_____no_output_____path = Path('')_____no_output_____data = pd.read_csv('good_small_dataset.csv', engine='python')_____no_output_____data.head()_____no_output_____df = data.dropna()_____no_output_____df.to_csv('good_small_dataset_drop_missing.csv')_____no_output_____data_lm = TextLMDataBunch.from_csv(path, 'good_small_dataset_drop_missing.csv', text_cols = 'content', label_cols = 'type')
data_lm.save('data_lm_export.pkl')_____no_output_____data_clas = TextClasDataBunch.from_csv(path, 'good_small_dataset_drop_missing.csv', vocab=data_lm.train_ds.vocab, text_cols = 'content', label_cols = 'type',bs=16)
data_clas.save('data_clas_export.pkl')_____no_output_____from fastai.text import *_____no_output_____data_lm = load_data('NLP/', 'data_lm_export.pkl')_____no_output_____data_clas = load_data('', 'data_clas_export.pkl')_____no_output_____learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)_____no_output_____learn.save('initial')_____no_output_____learn.fit_one_cycle(1, 1e-2)_____no_output_____learn.save('initial')_____no_output_____learn.unfreeze()_____no_output_____learn.fit_one_cycle(1, 1e-3)_____no_output_____learn.save_encoder('ft_enc')_____no_output_____learn.save('ft_encoder_model')_____no_output_____learn.predict("The President today spoke on", n_words=10)_____no_output_____learn.predict("Kim Kardashian released a new photo depicting her doing", n_words=6)_____no_output_____learn.predict("World War Three has begun between", n_words=10)_____no_output_____learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5);_____no_output_____learn.load_encoder('ft_enc')
learn.load('good_model_epoc_2');_____no_output_____learn.summary()_____no_output_____data_clas.show_batch()_____no_output_____learn.fit_one_cycle(1, 1e-2)
learn.save('good_model')_____no_output_____learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(5e-3/2., 5e-3))
learn.save('good_model_epoc_2')_____no_output_____learn.unfreeze()
learn.fit_one_cycle(1, slice(2e-3/100, 2e-3))
learn.save('good_model_epoc_3')_____no_output_____# BBC
learn.predict("Israeli PM Benjamin Netanyahu has said he will annex Jewish settlements in the occupied West Bank if he is re-elected.Israelis go to the polls on Tuesday and Mr Netanyahu is competing for votes with right-wing parties who support annexing part of the West Bank.The settlements are illegal under international law, though Israel disputes this.Last month the US recognised the occupied Golan Heights, seized from Syria in 1967, as Israeli territory.Can Jewish settlement issue be resolved?What Trump’s Golan Heights move really meansIsrael's Benjamin Netanyahu: Commando turned PMIsrael has settled about 400,000 Jews in West Bank settlements, with another 200,000 living in East Jerusalem. There are about 2.5 million Palestinians living in the West Bank.Palestinians want to establish a state in the occupied West Bank, East Jerusalem and the Gaza Strip.What happens to the settlements is one of the most contentious issues between Israel and the Palestinians - Palestinians say the presence of settlements make a future independent state impossible.Israel says the Palestinians are using the issue of settlements as a pretext to avoid direct peace talks. It says settlements are not a genuine obstacle to peace and are negotiable.What exactly did Netanyahu say?He was asked during an interview on Israeli TV why he had not extended Israeli sovereignty to large settlements in the West Bank.'You are asking whether we are moving on to the next stage - the answer is yes, we will move to the next stage,' he said.Image copyrightREUTERSImage captionMr Netanyahu is seeking re-election'I am going to extend [Israeli] sovereignty and I don't distinguish between settlement blocs and the isolated settlements.'A spokesman for Palestinian leader Mahmoud Abbas told Reuters: 'Any measures and any announcements will not change the facts. Settlements are illegal and they will be removed.'Potentially explosive commentsBy Sebastian Usher, BBC Arab affairs editorThese comments by Benjamin Netanyahu are potentially explosive over an issue that has helped stall peace efforts for years.They will resonate with several parties with which he'll try to form a coalition government if he wins the biggest share of votes.But the very idea of annexation will rouse new Palestinian fury, as well as international condemnation.Mr Netanyahu may have been emboldened by the Trump administration, which just last month recognised Israeli sovereignty over the Golan Heights.What is the political background?Mr Netanyahu's right-wing Likud party is in a tight race with the new centre-right Blue and White alliance.However other parties, some of which support annexation, could end up being kingmakers when they try to form a governing coalition.Israel election: Who are the key candidates?The ex-military chief trying to unseat NetanyahuIn Mr Netanyahu's own Likud party, 28 out of the 29 lawmakers running for re-election are on record as supporting this approach. 
Until now the prime minister was the only exception.What is the situation of peace negotiations?Mr Trump's administration is preparing to unveil a long-awaited Middle East peace plan, which US officials say will be fair.However the Trump administration has carried out a series of actions that have inflamed Palestinian opinion and generally pleased Israel.In 2017 Mr Trump announced that the US recognised Jerusalem as Israel's capital, overturning decades of official US policy.In response Mr Abbas cut off relations with the US, saying the US could no longer be a peace broker.Last year the US stopped contributing to the UN Relief and Works Agency (Unrwa), which has been looking after Palestinian refugees since 1949.Last month President Trump officially recognised Israeli sovereignty over the occupied Golan Heights.Peace negotiations between Israel and the Palestinians have been at a standstill since 2014, when a US-brokered attempt to reach a deal collapsed.")_____no_output_____# Fox News:
learn.predict("Former President Barack Obama said on Saturday that he is worried that progressives are creating a “circular firing squad” as prospective Democratic presidential candidates race to the left on a number of hot topic issues ahead of the 2020 election.“The way we structure democracy requires you to take into account people who don’t agree with you,” he said at an Obama Foundation town hall event in Berlin, according to The New York Post. “And that by definition means you’re not going to get 100 percent of what you want.”BARACK OBAMA STILL BELIEVES BIDEN WOULD BE 'AN EXCELLENT PRESIDENT' AMID INAPPROPRIATE TOUCHING ALLEGATIONS: REPORT“One of the things I do worry about sometimes among progressives … we start sometimes creating what’s called a ‘circular firing squad’ where you start shooting at your allies because one of them has strayed from purity on the issues,” he said.Obama’s remarks come as freshman House Democrats such as Rep. Alexandria Ocasio-Cortez, D-N.Y., have pushed once-fringe positions on Medicare-for-all, the Green New Deal and reparations for slavery. In turn, 2020 presidential hopefuls have also taken some of those positions.In that climate, candidates have come under criticism for their past stances from activists. South Bend Mayor Pete Buttigieg was forced this week to address remarks he made in 2015 when he said that “all lives matter” -- which some activists say is a counterslogan to the “black lives matter” sloganSen. Kamala Harris, D-Calif., meanwhile has been hit by controversy over her past as a prosecutor. A scathing op-ed published in January in The New York Times, written by law professor Lara Bazelon, has kickstarted renewed scrutiny.Obama reportedly warns freshmen House Democrats about pricey policy proposalsVideoBazelon says Harris previously 'fought tooth and nail to uphold wrongful convictions that had been secured through official misconduct that included evidence tampering, false testimony and the suppression of crucial information by prosecutors.'Bazelon further suggested that Harris should 'apologize to the wrongfully convicted people she has fought to keep in prison and to do what she can to make sure they get justice' or otherwise make clear she has 'radically broken from her past.'Former vice president under Obama, Joe Biden, meanwhile has faced criticism for inappropriate past physical contact with women, as well a a 1993 speech on crime in which he warned of “predators on our streets”'They are beyond the pale many of those people, beyond the pale,' Biden continued. 'And it's a sad commentary on society. We have no choice but to take them out of society.'The latter was reminiscent of heat 2016 presidential nominee Hillary Clinton took from activists for her description of some gang members as “superpredators” in 1996.Obama himself may not escape criticism in the election cycle. His signature health care legislation, the Affordable Care Act, is quickly being eclipsed by calls from Democrats for single-payer and Medicare-for-all plans. Meanwhile, a number of Democrats have said they are open to reparations for black Americans for slavery -- something that Obama opposed when he was in office.")_____no_output_____# BrightBert again
learn.predict("The border agencies need tougher leadership, President Donald Trump declared Friday as he dropped plans to appoint a long-time agency staffer to run the Immigration and Customs Enforcement agency (ICE).'Ron [Vitiello is] a good man,” Trump told reporters. 'But we’re going in a tougher direction. We want to go in a tougher direction.” Trump’s 'tougher direction” statement suggests he may pressure Department of Homeland Secretary (DHS) Secretary Kirstjen Nielsen to implement policies that top agency staffers oppose, such as rejecting legal interpretations and bureaucratic practices set by former President Barack Obama. Immigration reformers blame those Obama policies for encouraging the wave of economic migrants from Central America.Breitbart TVDonald Trump Says Everything Jared Kushner Touched ‘Turned To Gold’The shift comes amid the growing wave of Central American economic migrants who are using Obama-era legal loopholes to walk through the border wall and into jobs, neighborhoods, and blue-collar schools throughout the United States. That wave is expected to deliver one million migrants into the United States by October, and it is made possible because Democrats are blocking any reform the border loopholes.Immigration reformers fear that Obama-appointed staffers and former business lobbyists are keeping Trump in the dark about ways to improve operation at the DHS. 'I don’t now know if the President is getting the information he needs about what powers he has,” according to Rosemary Jenks, policy director at the Center for Immigration Studies. 'Secretary Nielsen and some of the attorneys in DHS are blocking the information because they are afraid of implementing some of the things they can do,” partly because they are afraid of lawsuits, she said.For example, many so-called 'Unaccompanied Alien Children” are being smuggled up the border because Trump’s agencies will pass them to their illegal immigrant parents living throughout the United States, under policies set by Obama. But those youths and children should be sent home, said Jenks, because the 2008 law only protects trafficked victims, such as forced prostitutes, not youths and children who have parents in the United States or who are willingly smuggled up to the border. According to the Washington Post, Vitiello’s exit was prompted by Steve Miller, one of Trump’s first aides who earlier played a key role in derailing the 2013 'Gang of Eight” amnesty and cheap labor bill. The Post said:Six administration officials said Friday that the decision to jettison Vitiello was a sign of the expanding influence that Miller now wields over immigration matters in the White House, particularly as Trump lashes out at Mexico and Central American nations — as well as Homeland Security officials and aides who express doubts about the legality of his ideas.The New York Times reported:One person familiar with the president’s thinking said that Mr. Trump believed that Mr. Vitiello did not favor closing the border, as the president had proposed before backing off that threat this week.Another person said that Stephen Miller, the president’s chief policy adviser and a supporter of curtailing legal and illegal immigration, did not support Mr. Vitiello’s nomination.Vitiello’s defenders lashed out at Miller. 
The Washington Post highlighted the complaints:'Ron Vitiello has spent as much time defending our nation’s borders as Stephen Miller has been alive,” one official said of Miller, who is 33.One senior official said: 'This is part of an increasingly desperate effort by Stephen to throw people under the bus when the policies he has advocated are not effective. Once it becomes clear that Stephen’s policies aren’t working, he tells the president, ‘They’re not the right people.’” But Vitiello’s appointment was opposed by the ICE officers’ union, the National ICE Council. Vitiello 'lacks the judgment and professionalism to effectively lead a federal agency,” said a February letter from union President Chris Crane.")_____no_output_____# BBC
learn.predict("Israeli PM Benjamin Netanyahu has said he will annex Jewish settlements in the occupied West Bank if he is re-elected.Israelis go to the polls on Tuesday and Mr Netanyahu is competing for votes with right-wing parties who support annexing part of the West Bank.The settlements are illegal under international law, though Israel disputes this.Last month the US recognised the occupied Golan Heights, seized from Syria in 1967, as Israeli territory.Can Jewish settlement issue be resolved?What Trump’s Golan Heights move really meansIsrael's Benjamin Netanyahu: Commando turned PMIsrael has settled about 400,000 Jews in West Bank settlements, with another 200,000 living in East Jerusalem. There are about 2.5 million Palestinians living in the West Bank.Palestinians want to establish a state in the occupied West Bank, East Jerusalem and the Gaza Strip.What happens to the settlements is one of the most contentious issues between Israel and the Palestinians - Palestinians say the presence of settlements make a future independent state impossible.Israel says the Palestinians are using the issue of settlements as a pretext to avoid direct peace talks. It says settlements are not a genuine obstacle to peace and are negotiable.What exactly did Netanyahu say?He was asked during an interview on Israeli TV why he had not extended Israeli sovereignty to large settlements in the West Bank.'You are asking whether we are moving on to the next stage - the answer is yes, we will move to the next stage,' he said.Image copyrightREUTERSImage captionMr Netanyahu is seeking re-election'I am going to extend [Israeli] sovereignty and I don't distinguish between settlement blocs and the isolated settlements.'A spokesman for Palestinian leader Mahmoud Abbas told Reuters: 'Any measures and any announcements will not change the facts. Settlements are illegal and they will be removed.'Potentially explosive commentsBy Sebastian Usher, BBC Arab affairs editorThese comments by Benjamin Netanyahu are potentially explosive over an issue that has helped stall peace efforts for years.They will resonate with several parties with which he'll try to form a coalition government if he wins the biggest share of votes.But the very idea of annexation will rouse new Palestinian fury, as well as international condemnation.Mr Netanyahu may have been emboldened by the Trump administration, which just last month recognised Israeli sovereignty over the Golan Heights.What is the political background?Mr Netanyahu's right-wing Likud party is in a tight race with the new centre-right Blue and White alliance.However other parties, some of which support annexation, could end up being kingmakers when they try to form a governing coalition.Israel election: Who are the key candidates?The ex-military chief trying to unseat NetanyahuIn Mr Netanyahu's own Likud party, 28 out of the 29 lawmakers running for re-election are on record as supporting this approach. 
Until now the prime minister was the only exception.What is the situation of peace negotiations?Mr Trump's administration is preparing to unveil a long-awaited Middle East peace plan, which US officials say will be fair.However the Trump administration has carried out a series of actions that have inflamed Palestinian opinion and generally pleased Israel.In 2017 Mr Trump announced that the US recognised Jerusalem as Israel's capital, overturning decades of official US policy.In response Mr Abbas cut off relations with the US, saying the US could no longer be a peace broker.Last year the US stopped contributing to the UN Relief and Works Agency (Unrwa), which has been looking after Palestinian refugees since 1949.Last month President Trump officially recognised Israeli sovereignty over the occupied Golan Heights.Peace negotiations between Israel and the Palestinians have been at a standstill since 2014, when a US-brokered attempt to reach a deal collapsed.")_____no_output_____#Pseudoscience
learn.predict("Have you ever clicked on a link like 'What does your favorite animal say about you?' wondering what your love of hedgehogs reveals about your psyche? Or filled out a personality assessment to gain new understanding into whether you’re an introverted or extroverted 'type'? People love turning to these kinds of personality quizzes and tests on the hunt for deep insights into themselves. People tend to believe they have a 'true' and revealing self hidden somewhere deep within, so it’s natural that assessments claiming to unveil it will be appealing.As psychologists, we noticed something striking about assessments that claim to uncover people’s 'true type.' Many of the questions are poorly constructed – their wording can be ambiguous and they often contain forced choices between options that are not opposites. This can be true of BuzzFeed-type quizzes as well as more seemingly sober assessments.On the other hand, assessments created by trained personality psychologists use questions that are more straightforward to interpret. The most notable example is probably the well-respected Big Five Inventory. Rather than sorting people into 'types,' it scores people on the established psychological dimensions of openness to new experience, conscientiousness, extroversion, agreeableness and neuroticism. This simplicity is by design; psychology researchers know that the more respondents struggle to understand the question, the worse the question is.But the lack of rigor in 'type' assessments turns out to be a feature, not a bug, for the general public. What makes tests less valid can ironically make them more interesting. Since most people aren’t trained to think about psychology in a scientifically rigorous way, it stands to reason they also won’t be great at evaluating those assessments. We recently conducted series of studies to investigate how consumers view these tests. When people try to answer these harder questions, do they think to themselves 'This question is poorly written'? Or instead do they focus on its difficulty and think 'This question’s deep'? Our results suggest that a desire for deep insight can lead to deep confusion.Confusing difficult for deepIn our first study, we showed people items from both the Big Five and from the Keirsey Temperament Sorter (KTS), a popular 'type' assessment that contains many questions we suspected people find comparatively difficult. Our participants rated each item in two ways. First, they rated difficulty. That is, how confusing and ambiguous did they find it? Second, what was its perceived 'depth'? In other words, to what extent did they feel the item seemed to be getting at something hidden deep in the unconscious?Sure enough, not only were these perceptions correlated, the KTS was seen as both more difficult and deeper. In follow-up studies, we experimentally manipulated difficulty. In one study, we modified Big Five items to make them harder to answer like the KTS items, and again we found that participants rated the more difficult versions as 'deeper.'We also noticed that some personality assessments seem to derive their intrigue from having seemingly nothing to do with personality at all. Take one BuzzFeed quiz, for example, that asks about which colors people associate with abstract concepts like letters and days of the week and then outputs 'the true age of your soul.' 
Even if people trust BuzzFeed more for entertainment than psychological truths, perhaps they are actually on board with the idea that these difficult, abstract decisions do reveal some deep insights. In fact, that is the entire idea behind classically problematic measures such as the Rorschach, or 'ink blot,' test.In two studies inspired by that BuzzFeed quiz, we found exactly that. We gave people items from purported 'personality assessment' checklists. In one study, we assigned half the participants to the 'difficult' condition, wherein the assessment items required them to choose which of two colors they associated with abstract concepts, like the letter 'M.' In the 'easier' condition, respondents were still required to rate colors on how much they associated them with those abstract concepts, but they more simply rated one color at a time instead of choosing between two.Again, participants rated the difficult version as deeper. Seemingly, the sillier the assessment, the better people think it can read the hidden self.Intuition may steer you wrongOne of the implications of this research is that people are going to have a hard time leaving behind the bad ideas baked into popular yet unscientific personality assessments. The most notable example is the Myers-Briggs Type Indicator, which infamously remains quite popular while doing a fairly poor job of assessing personality, due to longstanding issues with the assessment itself and the long-discredited Jungian theory behind it. Our findings suggest that Myers-Briggs-like assessments that have largely been debunked by experts might persist in part because their formats overlap quite well with people’s intuitions about what will best access the “true self.”People’s intuitions do them no favors here. Intuitions often undermine scientific thinking on topics like physics and biology. Psychology is no different. People arbitrarily divide parts of themselves into “true” and superficial components and seem all too willing to believe in tests that claim to definitively make those distinctions. But the idea of a “true self” doesn’t really work as a scientific concept.Some people might be stuck in a self-reinforcing yet unproductive line of thought: Personality assessments can cause confusion. That confusion in turn overlaps with intuitions of how they think their deep psychology works, and then they tell themselves the confusion is profound. So intuitions about psychology might be especially pernicious. Following them too closely could lead you to know less about yourself, not more.", thresh=0.5)_____no_output_____learn.predict("PETALUMA, CA — An incident in which a white man was reportedly beaten in downtown Petaluma by a group of suspects the victim described as four or five black men is being investigated as a hate crime and an assault, the Petaluma Police Department said Tuesday in a news release.Petaluma police Lt. Ed Crosby said officers immediately responded at 9:03 p.m. Saturday, March 9 to the intersection of Mary Street at Petaluma Boulevard North to a woman's report that her domestic partner, a 60-year-old white man, had just been attacked.The lieutenant said when officers arrived they found the victim on the ground suffering from numerous facial injuries.The man was rushed to Santa Rosa Memorial Hospital where according to police, he stayed two days. 
Injuries to the victim were confirmed as a fractured left eye socket, a broken nose and other abrasions to his face including facial swelling, Crosby said.The couple told police that the night of the incident they had just finished eating dinner at a restaurant on Petaluma Boulevard North and were walking westbound toward their car, which was parked on Mary Street, when they passed a group of several African-American men who looked to be in their 20s, standing around a four-door, emerald green Honda Civic.The couple said they did not interact with the group and were continuing on their way when one of the men by the green Honda 'hurled profanity at the victim and referred to his [the victim's] race,' Crosby said.'The victim turned around and saw one of the males rushing at him, swinging his arms,' Crosby said.'The victim grabbed the advancing male, brought him to the ground, and pinned him,' Crosby said. 'In response, the other males by the green Honda repeatedly kicked the victim in the face before getting into the green Honda and fleeing the scene.'Petaluma police are asking anyone with information about the incident to contact or leave a message for Petaluma Police Department Officer Ron Flores by calling 707-778-4372.The victim and his female companion were not able to give many descriptive details about the suspects, the lieutenant said, and thus far, officers' efforts in canvassing the downtown area for any witnesses or video footage that would help identify the suspects have not been successful.The green Honda was missing a front license plate; the back license plate may possibly include the numbers 611, according to police.", thresh=.5)_____no_output_____learn.data.classes_____no_output_____
</code>
| {
"repository": "fakecoinbase/sweetpandslashAlgorithms",
"path": "NLP Data Prep.ipynb",
"matched_keywords": [
"biology"
],
"stars": 3,
"size": 89898,
"hexsha": "cb385fb2a4833d31cbd0142d7089a492f0b71e8d",
"max_line_length": 5978,
"avg_line_length": 96.3536977492,
"alphanum_fraction": 0.7010612027
} |
# Notebook from EDSEL-skoltech/maxvol_sampling
Path: Boxplots_Interpolation.ipynb
## Boxplots
_______
tg: @misha_grol and [email protected]
Boxplots for features based on DEM and NDVI_____no_output_____
<code>
# Uncomment for Google colab
# !pip install maxvolpy
# !pip install clhs
# !git clone https://github.com/EDSEL-skoltech/maxvol_sampling
# %cd maxvol_sampling/_____no_output_____import csv
import seaborn as sns
import argparse
import numpy as np
import osgeo.gdal as gdal
import os
import pandas as pd
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from numpy import genfromtxt
import xarray as xr
import clhs as cl
from scipy.spatial import ConvexHull, convex_hull_plot_2d
from scipy.spatial import voronoi_plot_2d, Voronoi
from scipy.spatial import distance
from scipy.stats import entropy
from scipy.special import kl_div
from scipy.stats import ks_2samp
from scipy.stats import wasserstein_distance
%matplotlib inline
from src.util import MaxVolSampling_____no_output_____# Uncomment the "Times New Roman" font and the "science" plot style below if you have them installed
# plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams.update({'font.size': 16})
#use science style for plots
# plt.style.use(['science', 'grid'])
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 20
_____no_output_____
</code>
## Interpolation plots_____no_output_____
<code>
import matplotlib.pyplot as plt
from matplotlib import gridspec
from tqdm.notebook import tqdm
from scipy.stats import ks_2samp
dict_for_dict_wasserstein = {}
csv_file_to_process = './src/data_v0.csv'
df_name = list(pd.read_csv(csv_file_to_process, sep=',').columns)
soil_parameters = df_name
path_to_inter_npy_files = './experiments/cLHS_10_000/Interpolation_data/'
np.random.seed(42)
units = ['Soil moisture 10 cm, %','Soil moisture 30 cm, %','Soil moisture 80 cm, %','Mean crop yield, c/ha', 'Penetration resistance 10 cm, kPa','Penetration resistance 30 cm, kPa','Penetration resistance 80 cm, kPa', 'Soil Temperature 10 cm, °C','Soil Temperature 30 cm, °C','Soil Temperature 80 cm, °C']
interpolation_files = sorted(os.listdir('./experiments/cLHS_10_000/Interpolation_data/'))
path = './experiments/cLHS_million_steps'
for index, file in enumerate(interpolation_files):
list_to_test_zeros = []
print('Parameter:', file)
df_for_plots = pd.DataFrame(columns=['Sampling', 'Points', 'Value'])
dict_for_parameter = {'MAXVOL':{},
'cLHS':{},
'Random':{}}
dict_for_wasserstein = {'MAXVOL':{},
'cLHS':{},
'Random':{}}
dict_for_plots = {'MAXVOL':{},
'cLHS':{},
'Random':{}}
number_of_points = [10,15,20,25,30]
from itertools import compress
list_of_cLHS_million_runs = sorted(os.listdir('./experiments/cLHS_million_steps'))
selection = ['NDVI' in name for name in list_of_cLHS_million_runs]
cLHS_points_files = list(compress(list_of_cLHS_million_runs, selection))
for num_points, csv_file in zip(number_of_points, cLHS_points_files):
dict_for_parameter['cLHS'][num_points] = np.genfromtxt(os.path.join(path, csv_file),delimiter=',', dtype=int)
SAR = MaxVolSampling()
SAR.soil_feature = soil_parameters[index]
SAR.num_of_points = num_points
SAR.soil_data = pd.read_csv('./src/data_v0.csv', sep=',')
SAR.path_to_file_with_indices = None
SAR.wd = './DEM_files/'
SAR.path_to_interpolation_file = os.path.join(path_to_inter_npy_files, file)
_ =SAR.data_preparation(SAR.wd, data_m=3, dem_dir = None)
SAR.original_soil_data(SAR.soil_feature)
# data from interpolation
interpolation_map = SAR.interpolation_array
# points selected by MAXVOL
MAXVOL = interpolation_map[SAR.i_am_maxvol_function()]
for value in MAXVOL:
df_for_plots.loc[len(df_for_plots)]=['MAXVOL', num_points, value]
cLHS = interpolation_map[dict_for_parameter['cLHS'][num_points]]
for value in cLHS:
df_for_plots.loc[len(df_for_plots)]=['cLHS', num_points, value]
RANDOM = interpolation_map[SAR.i_am_random()]
for value in RANDOM:
df_for_plots.loc[len(df_for_plots)]=['Random', num_points, value]
#original distribution
df_original = pd.DataFrame(data={'Points':[51]*len(SAR.original_data), 'Value':SAR.original_data})
fig = plt.figure(figsize=(18,18))
gs = gridspec.GridSpec(4, 5, wspace=.25)
ax_1 = fig.add_subplot(gs[:,:4])
ax_2 = fig.add_subplot(gs[:,4])
sns.boxplot(ax = ax_1, x="Points", y="Value",
hue="Sampling", palette=["#1F77B4", "#2CA02C", "#FF7F0E"],
data=df_for_plots, width=0.8)
sns.boxplot(ax = ax_2, x='Points', y="Value", palette=["#CCCCCC"],
data=df_original, width=0.25)
fig.set_figwidth(16)
fig.set_figheight(7)
ax_2.set_xticklabels([])
ax_2.set_ylabel('')
ax_2.set_xlabel('')
ax_2.grid(True)
ax_1.set_xticklabels([])
ax_1.set_xlabel('')
ax_1.set_ylabel(units[index], fontsize = 17)
ax_1.axhline(np.quantile(SAR.original_data, 0.25), color='grey', linestyle='--',zorder=0)
ax_1.axhline(np.quantile(SAR.original_data, 0.50), color='grey', linestyle='--',zorder=0)
ax_1.axhline(np.quantile(SAR.original_data, 0.75), color='grey', linestyle='--',zorder=0)
ax_1.get_shared_y_axes().join(ax_1, ax_2)
ax_1.get_legend().remove()
ax_1.grid(True)
ax_2.set_yticklabels([])
# plt.savefig('../plots/agricultural_systems_plots/boxplots_interpolation/'+str(soil_parameters[index])+'boxplot.svg')
# plt.savefig('../plots/agricultural_systems_plots/boxplots_interpolation/'+str(soil_parameters[index])+'boxplot.png', dpi=300)
plt.show()
# break
Parameter: Moisture_perc_10.npy
</code>
## Plots of Wasserstein distance evolution _____no_output_____
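The Wasserstein (earth mover's) distance measures how much probability mass has to be moved to turn the distribution of the sampled points into the distribution of the full field, so lower values mean the selected points represent the original distribution better. A minimal toy illustration with `scipy.stats.wasserstein_distance` (synthetic arrays, not the soil data used below):_____no_output_____
<code>
# Toy illustration: how well does a small subsample reproduce a distribution?
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
original = rng.normal(loc=20.0, scale=4.0, size=1000)     # e.g. soil moisture, %
subsample = rng.choice(original, size=15, replace=False)  # 15 sampled points

# 0.0 would mean the subsample matches the original distribution exactly
print(wasserstein_distance(original, subsample))_____no_output_____
</code>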
<code>
fig, ((ax0, ax1), (ax2, ax3), (ax4, ax5), (ax6, ax7),(ax8, ax9)) = plt.subplots(nrows=5, ncols=2, sharex=True,figsize=(18, 25))
names_for_plots = ['Soil moisture 10 cm, %','Soil moisture 30 cm, %',
'Soil moisture 80 cm, %','Mean crop yield, c/ha',
'Penetration resistance 10 cm, kPa','Penetration resistance 30 cm, kPa',
'Penetration resistance 80 cm, kPa', 'Soil Temperature 10 cm, °C',
'Soil Temperature 30 cm, °C','Soil Temperature 80 cm, °C']
path = './experiments/cLHS_10_000/exp_fem_poins/npy_files/'
files_with_points = os.listdir(path)
range_files_allocation=[]
for file in files_with_points:
range_files_allocation.append(np.load(os.path.join(path,file), allow_pickle=True)[None])
res = np.load(os.path.join(path,file), allow_pickle=True)
dict_for_indices = {'MAXVOL':[], 'cLHS':[], 'Random':[]}
from collections import ChainMap
for sampling in [*range_files_allocation[0][0].keys()]:
loc_list = [dict(loc_dict[0][sampling]) for loc_dict in range_files_allocation]
dict_for_indices[sampling] = dict(ChainMap(*loc_list))
n = 0
number_of_points = range(7,31)
csv_file_to_process = './src/data_v0.csv'
for row in ((ax0, ax1), (ax2, ax3), (ax4, ax5), (ax6, ax7),(ax8, ax9)):
for col in row:
# COMPUTE WASSERSTEIN DISTANCE
df_name = list(pd.read_csv(csv_file_to_process, sep=',').columns)
soil_parameters = df_name
path_to_inter_npy_files = './experiments/cLHS_10_000/Interpolation_data/'
np.random.seed(42)
units = ['Soil moisture 10 cm, %','Soil moisture 30 cm, %','Soil moisture 80 cm, %','Mean crop yield, c/ha', 'Penetration resistance 10 cm, kPa','Penetration resistance 30 cm, kPa','Penetration resistance 80 cm, kPa', 'Soil Temperature 10 cm, °C','Soil Temperature 30 cm, °C','Soil Temperature 80 cm, °C']
interpolation_files = sorted(os.listdir('./experiments/cLHS_10_000/Interpolation_data/'))
print('Parameter:', interpolation_files[n])
dict_for_plots = {'MAXVOL':{},
'cLHS':{},
'Random':{}}
dict_for_new_maxvol = {'MAXVOL_NEW': {}}
for points in number_of_points:
SAR = MaxVolSampling()
SAR.soil_feature = soil_parameters[n]
SAR.num_of_points = points
SAR.soil_data = pd.read_csv(csv_file_to_process, sep=',')
SAR.path_to_file_with_indices = None
SAR.wd = './DEM_files//'
SAR.path_to_interpolation_file = os.path.join(path_to_inter_npy_files, interpolation_files[n])
_ =SAR.data_preparation(SAR.wd, data_m=3, dem_dir = None)
SAR.original_soil_data(SAR.soil_feature)
interpolation_map = SAR.interpolation_array[::-1]
MAXVOL_ = interpolation_map[SAR.i_am_maxvol_function()]
# List to iterate over 100 realization of cLHS and Random
cLHS_ = [interpolation_map[dict_for_indices['cLHS'][points][i]] for i in range(100)]
Random_ = [interpolation_map[dict_for_indices['Random'][points][i]] for i in range(100)]
dict_for_plots['MAXVOL'][points] = wasserstein_distance(SAR.original_data, MAXVOL_)
dict_for_plots['cLHS'][points] = [wasserstein_distance(SAR.original_data, mdt) for mdt in cLHS_]
dict_for_plots['Random'][points] = [wasserstein_distance(SAR.original_data, mdt) for mdt in Random_]
quantile_lower_random = np.array([np.quantile(dict_for_plots['Random'][i], .10) for i in number_of_points])
quantile_upper_random = np.array([np.quantile(dict_for_plots['Random'][i], .90) for i in number_of_points])
median_random = np.array([np.median(dict_for_plots['Random'][i]) for i in number_of_points])
quantile_lower_cLHS = np.array([np.quantile(dict_for_plots['cLHS'][i], .10) for i in number_of_points])
quantile_upper_cLHS = np.array([np.quantile(dict_for_plots['cLHS'][i], .90) for i in number_of_points])
median_cLHS = np.array([np.median(dict_for_plots['cLHS'][i]) for i in number_of_points])
col.plot(number_of_points, [*dict_for_plots['MAXVOL'].values()], '-.',label='Maxvol',linewidth=4,markersize=10 )
col.plot(number_of_points, median_random, label='Random median',linewidth=3,markersize=10 )
col.plot(number_of_points, median_cLHS,'--',label='cLHS median',linewidth=3,markersize=14)
col.fill_between(number_of_points, quantile_lower_random, quantile_upper_random , alpha=0.1, color='orange', label='CI Random')
col.fill_between(number_of_points, quantile_lower_cLHS, quantile_upper_cLHS , alpha=0.1, color='green', label='CI cLHS')
col.set_xlim(min(number_of_points), max(number_of_points))
# col.set_xticks(number_of_points)
col.set_title(names_for_plots[n])
col.grid(True)
col.set(ylabel="Wasserstein distance")
if n==8 or n==9:
col.set(xlabel="Number of points for sampling", ylabel="Wasserstein distance")
# plt.show()
n+=1
# plt.legend()
# plt.savefig('../plots/agricultural_systems_plots/plots_with_evolution_of_wassersterin/wasserstein_disctance_IQR.png', dpi=300)
# plt.savefig('../plots/agricultural_systems_plots/plots_with_evolution_of_wassersterin/nwasserstein_disctance_IQR.svg') _____no_output_____
</code>
| {
"repository": "EDSEL-skoltech/maxvol_sampling",
"path": "Boxplots_Interpolation.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 303060,
"hexsha": "cb39e4e616e9b5d0c6424066222d823cbbc36a29",
"max_line_length": 31824,
"avg_line_length": 550.0181488203,
"alphanum_fraction": 0.9400283772
} |
# Notebook from RuishanLiu/TrialPathfinder
Path: tutorial/tutorial.ipynb
<code>
import pandas as pd
import numpy as np
import TrialPathfinder as tp_____no_output_____
</code>
# Trial PathFinder_____no_output_____## Load Data Tables
TrialPathfinder reads tables as pandas DataFrames (pd.DataFrame) by default. Date columns should be read as datetime (use pd.to_datetime to convert them if they are not).
**1. Features**:
- <font color=darkblue>*Patient ID*</font>
- Treatment Information
- <font color=darkblue>*Drug name*</font>.
- <font color=darkblue>*Start date*</font>.
- <font color=darkblue>*Date of outcome*</font>. For example, if overall survival (OS) is used as metric, the date of outcome is the date of death. If progression-free survival (PFS) is used as metric, the date of outcome is the date of progression.
- <font color=darkblue>*Date of last visit*</font>. The patient's last record date of visit, used for censoring.
- <font color=darkblue>*Covariates (optional)*</font>: adjusted to emulate the blind assignment, used by Inverse probability of treatment weighting (IPTW) or propensity score matching (PSM). Some examples: age, gender, composite race/ethnicity, histology, smoking status, staging, ECOG, and biomarkers status.
**2. Tables used by eligibility criteria.**
- Use the same Patient ID as the features table.
We provide synthetic example data in the directory [tutorial/data](https://github.com/RuishanLiu/TrialPathfinder/tree/master/tutorial/data)._____no_output_____[tutorial.ipynb](https://github.com/RuishanLiu/TrialPathfinder/blob/master/tutorial/tutorial.ipynb) provides a detailed tutorial and example of using the TrialPathfinder library. The example data contain:
- Eligibility criteria (*'criteria.csv'*) with five rules: Age, Histology_Squamous, ECOG, Platelets, Bilirubin.
- Features (*'features.csv'*) with the required treatment information and three covariates (gender, race, ECOG).
- Two tables (*'demographics.csv'* and *'lab.csv'*) used by the eligibility criteria._____no_output_____
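For reference, a minimal sketch of the features table layout this tutorial expects (column names follow the cells below; the values are made up):_____no_output_____
<code>
import pandas as pd

# Hypothetical two-row features table with the required treatment columns
features_example = pd.DataFrame({
    'PatientID': ['P0001', 'P0002'],
    'DrugName': ['drug A', 'drug B'],
    'StartDate': pd.to_datetime(['2017-03-01', '2017-04-15']),
    'OutcomeDate': pd.to_datetime(['2018-01-10', None]),  # NaT: no event observed
    'LastVisitDate': pd.to_datetime(['2018-01-10', '2019-06-30']),
    'Gender': ['F', 'M'],
    'Race': ['White', 'Asian'],
    'ECOG': [0, 1],
})_____no_output_____
</code>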
<code>
features = pd.read_csv('data/features.csv')
demographics = pd.read_csv('data/demographics.csv')
lab = pd.read_csv('data/lab.csv')
indicator_miss = 'Missing'
# Process date information to be datetime format and explicitly define annotation for missing values
for table in [lab, features, demographics]:
for col in table.columns:
if 'Date' in col:
table[col] = pd.to_datetime(table[col])
table.loc[table[col].isna(), col] = indicator_miss_____no_output_____features.head()_____no_output_____demographics.head()_____no_output_____lab.head()_____no_output_____
</code>
## Standards of encoding eligibility criteria
We built a computational workflow that encodes the eligibility criteria described in trial protocols into standardized instructions that Trial Pathfinder can parse for cohort selection.
**1. Basic logic.**
- The name of the criterion is written in the first row.
- A new statement starts with “#inclusion” or “#exclusion” to indicate the criterion’s type. Whether patients with missing entries are included is controlled by “(missing include)” or “(missing exclude)”; the default is to include patients with missing entries.
- Data name format: “Table[‘featurename’]”. For example, “demographics[‘birthdate’]” denotes the date-of-birth column in the demographics table.
- Equation: ==, !=, <, <=, >, >=.
- Logic: AND, OR.
- Other operations: MIN, MAX, ABS.
- Time is encoded as “DAYS(80)”: 80 days; “MONTHS(4)”: 4 months; “YEARS(3)”: 3 years.
---
*Example: criteria "Age" - include patients more than 18 years old when they received the treatment.*
> Age \
\#Inclusion \
features['StartDate'] >= demographics['BirthDate'] + @YEARS(18)
---
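For intuition, the statement above maps directly onto a pandas expression. A minimal sketch of how the time macros could be expanded (an illustration only, not TrialPathfinder's actual parser):
<code>
import re

# Hypothetical expansion of the @DAYS/@MONTHS/@YEARS macros into pandas offsets
def expand_time_macros(statement):
    for unit in ('DAYS', 'MONTHS', 'YEARS'):
        statement = re.sub(r'@%s\((\d+)\)' % unit,
                           r'pd.DateOffset(%s=\1)' % unit.lower(),
                           statement)
    return statement

print(expand_time_macros("features['StartDate'] >= demographics['BirthDate'] + @YEARS(18)"))
# features['StartDate'] >= demographics['BirthDate'] + pd.DateOffset(years=18)_____no_output_____
</code>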
**2. Complex rule with hierarchy.**
- Each row is executed in sequential order.
- The tables are prepared before the last row.
- The patients are selected at the last row.
---
*Example: criteria "Platelets" - include patients whose platelet count ≥ 100 x 10^3/μL*. \
To encode this criterion, we follow the procedure:
1. Prepare the lab table:
    1. Pick the lab tests for platelet count.
    2. The lab test date should be within a -28 to +0 day window around the treatment start date.
    3. Use the record closest to the treatment start date for the selection.
2. Select patients: lab value ≥ 100 x 10^3/μL.
> Platelets \
\#Inclusion \
(lab['LabName'] == 'Platelet count') \
(lab['TestDate'] >= features['StartDate'] - @DAYS(28) ) AND (lab['TestDate'] <= features['StartDate']) \
MIN(ABS(lab['TestDate'] - features['StartDate'])) \
(lab['LabValue'] >= 100)
---
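To make the sequential-row semantics concrete, here is an illustrative pandas equivalent of the Platelets rule above (a sketch only, not the library's implementation; it assumes the `lab` and `features` tables of this tutorial):
<code>
import pandas as pd

# 1. Prepare the lab table
m = lab.merge(features[['PatientID', 'StartDate']], on='PatientID')
m = m[m['LabName'] == 'Platelet count']
in_window = ((m['TestDate'] >= m['StartDate'] - pd.DateOffset(days=28))
             & (m['TestDate'] <= m['StartDate']))
m = m[in_window]
# keep the record closest to the treatment start date for each patient
m['gap'] = (m['TestDate'] - m['StartDate']).abs()
closest = m.sort_values('gap').groupby('PatientID', as_index=False).first()

# 2. Select patients with platelet count >= 100 x 10^3/uL
selected = closest.loc[closest['LabValue'] >= 100, 'PatientID']_____no_output_____
</code>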
Here we load the example criteria 'criteria.csv' under directory [tutorial/data](https://github.com/RuishanLiu/TrialPathfinder/tree/master/tutorial/data).
_____no_output_____
<code>
criteria = pd.read_csv('data/criteria.csv', header=None).values.reshape(-1)
print(*criteria, sep='\n\n')Age
#Inclusion
features['StartDate'] >= demographics['BirthDate'] + @YEARS(18)
Histology_Squamous
#Inclusion
(demographics['Histology'] == 'Squamous cell carcinoma')
ECOG
#Inclusion
(features['ECOG'] == 0) OR (features['ECOG'] == 1)
Platelets
#Inclusion
(lab['LabName'] == 'Platelet count')
(lab['TestDate'] >= features['StartDate'] - @DAYS(28) ) AND (lab['TestDate'] <= features['StartDate'])
MIN(ABS(lab['TestDate'] - features['StartDate']))
lab['LabValue'] >= 100
Bilirubin
#Inclusion
(lab['LabName'] == 'Total bilirubin')
(lab['TestDate'] >= features['StartDate'] - @DAYS(28) ) AND (lab['TestDate'] <= features['StartDate'])
MIN(ABS(lab['TestDate'] - features['StartDate']))
lab['LabValue'] <= 1
</code>
## Preparation
Before simulating real trials, we first encode all the eligibility criteria: load the pseudo-code, feed it to the algorithm, and determine which patients are excluded by each rule._____no_output_____1. Create an empty cohort object
- tp.cohort_selection() requires all the patient IDs used in the study. Here we analyze all the patients in the dataset; a subset of patients can also be used as needed._____no_output_____
<code>
patientids = features['PatientID']
cohort = tp.cohort_selection(patientids, name_PatientID='PatientID')_____no_output_____
</code>
2. Add the tables needed by the eligibility criteria._____no_output_____
<code>
cohort.add_table('demographics', demographics)
cohort.add_table('lab', lab)
cohort.add_table('features', features)_____no_output_____
</code>
3. Add the individual eligibility criteria_____no_output_____
<code>
# Option 1: add rules individually
for rule in criteria[:]:
name_rule, select, missing = cohort.add_rule(rule)
print('Rule %s: exclude patients %d/%d' % (name_rule, select.shape[0]-np.sum(select), select.shape[0]))
# # Option 2: add the list of criteria
# cohort.add_rules(criteria)Rule Age: exclude patients 0/4000
Rule Histology_Squamous: exclude patients 2020/4000
Rule ECOG: exclude patients 1797/4000
Rule Platelets: exclude patients 1943/4000
Rule Bilirubin: exclude patients 2316/4000
</code>
# Analysis
- Treatment drug: B
- Control drug: A
- Criteria used: Age, ECOG, Histology_Squamous, Platelets, Bilirubin_____no_output_____
<code>
drug_treatment = ['drug B']
drug_control = ['drug A']
name_rules = ['Age', 'Histology_Squamous', 'ECOG', 'Platelets', 'Bilirubin']
covariates_cat = ['Gender', 'Race', 'ECOG'] # categorical covariates
covariates_cont = [] # continuous covariates_____no_output_____
</code>
1. Original trial criteria
- Criteria include Age, ECOG, Histology_Squamous, Platelets, Bilirubin._____no_output_____
<code>
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, name_rules,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))Hazard Ratio: 0.63 (0.41-0.96)
Number of Patients: 223
</code>
2. Fully-relaxed criteria
- No rule applied (name_rules=[])._____no_output_____
<code>
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, [],
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))Hazard Ratio: 0.77 (0.71-0.84)
Number of Patients: 4000
</code>
3. Compute Shapley values_____no_output_____
<code>
shapley_values = tp.shapley_computation(cohort, features, drug_treatment, drug_control, name_rules,
tolerance=0.01, iter_max=1000,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss,
random_seed=1001, verbose=1)Shapley Computation Iteration 0 | SEM = 0.0000
Shapley Computation Iteration 1 | SEM = 0.0223
Shapley Computation Iteration 2 | SEM = 0.0172
Shapley Computation Iteration 3 | SEM = 0.0146
Shapley Computation Iteration 4 | SEM = 0.0131
Shapley Computation Iteration 5 | SEM = 0.0117
Shapley Computation Iteration 6 | SEM = 0.0109
Shapley Computation Iteration 7 | SEM = 0.0099
Stopping criteria satisfied!
pd.DataFrame([shapley_values], columns=name_rules, index=['Shapley Value'])_____no_output_____
</code>
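Shapley values of this kind are typically estimated by Monte Carlo sampling over random permutations of the rules: each rule's score is its average marginal contribution to a trial-level metric when added after a random subset of the other rules, and iterations stop once the standard error falls below the tolerance. A minimal sketch of such a permutation estimator, assuming a generic value(subset) function (an illustration of the idea, not TrialPathfinder's internals):_____no_output_____
<code>
import numpy as np

def shapley_monte_carlo(rules, value, n_iter=1000, seed=0):
    """Average marginal contribution of each rule to value() over random permutations."""
    rng = np.random.default_rng(seed)
    contributions = {r: [] for r in rules}
    for _ in range(n_iter):
        subset, prev = [], value([])
        for r in rng.permutation(rules):
            subset.append(r)
            cur = value(subset)
            contributions[r].append(cur - prev)
            prev = cur
    return {r: float(np.mean(v)) for r, v in contributions.items()}_____no_output_____
</code>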
4. Data-driven criteria_____no_output_____
<code>
name_rules_relax = np.array(name_rules)[shapley_values < 0]
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, name_rules_relax,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))Hazard Ratio: 0.67 (0.57-0.80)
Number of Patients: 1053
</code>
| {
"repository": "RuishanLiu/TrialPathfinder",
"path": "tutorial/tutorial.ipynb",
"matched_keywords": [
"biomarkers"
],
"stars": 27,
"size": 26617,
"hexsha": "cb64f1cb4aab31f4a5d10edddb24d7120bc25584",
"max_line_length": 369,
"avg_line_length": 33.7351077313,
"alphanum_fraction": 0.4856670549
} |
# Notebook from DeloitteHux/tensor-house
Path: time-series/lstm-forecasting.ipynb
# Enterprise Time Series Forecasting and Decomposition Using LSTM
This notebook is a tutorial on time series forecasting and decomposition using LSTM.
* First, we generate a signal (time series) that includes several components that are commonly found in enterprise applications: trend, seasonality, covariates, and covariates with memory effects.
* Second, we fit a basic LSTM model, produce the forecast, and introspect the evolution of the hidden state of the model.
* Third, we fit the LSTM with attention model and visualize attention weights that provide some insights into the memory effects.
## Detailed Description
Please see blog post [D006](https://github.com/ikatsov/tensor-house/blob/master/resources/descriptions.md) for more details.
## Data
This notebook generates synthetic data internally; no external datasets are used.
---_____no_output_____# Step 1: Generate the Data
We generate a time series that includes a trend, seasonality, and covariates. This signal mimics some of the effects usually found in sales data (cannibalization, halo, pull-forward, and other effects). The covariates are independent variables, but they can enter the signal in two modes:
* Linear. The covariate series is directly added to the main signal with some coefficient, i.e. the link function is the identity.
* Memory. The covariate is transformed using a link function that includes some delay and can be nonlinear. We use a simple smoothing filter as the link function. We observe the original covariate, but the link function is unknown._____no_output_____
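For intuition, the memory link defined in the next cell is an exponential smoothing filter: a pulse in the covariate produces a response that is smeared in time, with the peak effect arriving after the pulse itself. A quick way to see the shift (this mirrors the `mem_link` function below):_____no_output_____
<code>
import numpy as np

# convolve a unit pulse with the exponential filter used by mem_link
mfilter = np.exp(np.linspace(-10, 0, 50))
mfilter = mfilter / np.sum(mfilter)
pulse = np.zeros(200)
pulse[100] = 1.0
response = np.convolve(pulse, mfilter, mode='same')
print(response.argmax())  # the peak response arrives after t=100_____no_output_____
</code>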
<code>
import numpy as np
import pandas as pd
import datetime
import collections
from matplotlib import pylab as plt
plt.style.use('ggplot')
import seaborn as sns
import matplotlib.dates as mdates
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
pd.options.mode.chained_assignment = None
import tensorflow as tf
from sklearn import preprocessing
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import LSTM, Dense, Input
from tensorflow.keras.layers import Lambda, RepeatVector, Permute, Flatten, Activation, Multiply
from tensorflow.keras.constraints import NonNeg
from tensorflow.keras import backend as K
from tensorflow.keras.regularizers import l1
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
def step_series(n, mean, scale, n_steps):
s = np.zeros(n)
step_idx = np.random.randint(0, n, n_steps)
value = mean
for t in range(n):
s[t] = value
if t in step_idx:
value = mean + scale * np.random.randn()
return s
def linear_link(x):
return x
def mem_link(x, length = 50):
mfilter = np.exp(np.linspace(-10, 0, length))
return np.convolve(x, mfilter/np.sum(mfilter), mode='same')
def create_signal(links = [linear_link, linear_link]):
days_year = 365
quaters_year = 4
days_week = 7
# three years of data, daily resolution
idx = pd.date_range(start='2017-01-01', end='2020-01-01', freq='D')
df = pd.DataFrame(index=idx, dtype=float)
df = df.fillna(0.0)
n = len(df.index)
trend = np.zeros(n)
seasonality = np.zeros(n)
for t in range(n):
trend[t] = 2.0 * t/n
seasonality[t] = 4.0 * np.sin(np.pi * t/days_year*quaters_year)
covariates = [step_series(n, 0, 1.0, 80), step_series(n, 0, 1.0, 80)]
covariate_links = [ links[i](covariates[i]) for i in range(2) ]
noise = 0.5 * np.random.randn(n)
signal = trend + seasonality + np.sum(covariate_links, axis=0) + noise
df['signal'], df['trend'], df['seasonality'], df['noise'] = signal, trend, seasonality, noise
for i in range(2):
df[f'covariate_0{i+1}'] = covariates[i]
df[f'covariate_0{i+1}_link'] = covariate_links[i]
return df
df = create_signal()
fig, ax = plt.subplots(len(df.columns), figsize=(20, 15))
for i, c in enumerate(df.columns):
ax[i].plot(df.index, df[c])
ax[i].set_title(c)
plt.tight_layout()
plt.show()_____no_output_____
</code>
# Step 2: Define and Fit the Basic LSTM Model
We fit an LSTM model that consumes patches of the observed signal and covariates, i.e. each input sample is a matrix whose rows are time steps and whose columns are the observed metrics (signal, covariates, and calendar features)._____no_output_____
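As a toy illustration of this tensor layout (independent of the cell below): 3 metrics observed for 10 days, cut into sliding windows of 4 time steps each:_____no_output_____
<code>
import numpy as np

series = np.arange(30).reshape(10, 3)  # rows: days, columns: metrics
window = 4
samples = np.stack([series[i - window:i] for i in range(window, len(series))])
print(samples.shape)  # (6, 4, 3): samples x time steps x metrics_____no_output_____
</code>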
<code>
#
# engineer features and create input tensors
#
def prepare_features_rnn(df):
df_rnn = df[['signal', 'covariate_01', 'covariate_02']]
df_rnn['year'] = df_rnn.index.year
df_rnn['month'] = df_rnn.index.month
df_rnn['day_of_year'] = df_rnn.index.dayofyear
def normalize(df):
x = df.values
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
return pd.DataFrame(x_scaled, index=df.index, columns=df.columns)
return normalize(df_rnn)
#
# train-test split and adjustments
#
def train_test_split(df, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval):
# lenght of the input time window for each sample (the offset of the oldest sample in the input)
input_window_size = n_time_steps*time_step_interval
split_t = int(len(df)*train_ratio)
x_train, y_train = [], []
x_test, y_test = [], []
y_col_idx = list(df.columns).index('signal')
for i in range(input_window_size, len(df)):
t_start = df.index[i - input_window_size]
t_end = df.index[i]
# we zero out last forecast_days_ahead signal observations, but covariates are assumed to be known
x_t = df[t_start:t_end:time_step_interval].values.copy()
if time_step_interval <= forecast_days_ahead:
x_t[-int((forecast_days_ahead) / time_step_interval):, y_col_idx] = 0
y_t = df.iloc[i]['signal']
if i < split_t:
x_train.append(x_t)
y_train.append(y_t)
else:
x_test.append(x_t)
y_test.append(y_t)
return np.stack(x_train), np.hstack(y_train), np.stack(x_test), np.hstack(y_test)
#
# parameters
#
n_time_steps = 40 # length of LSTM input in samples
time_step_interval = 2 # sampling interval, days
hidden_units = 8 # LSTM state dimensionality
forecast_days_ahead = 7
train_ratio = 0.8
#
# generate data and fit the model
#
df = create_signal()
df_rnn = prepare_features_rnn(df)
x_train, y_train, x_test, y_test = train_test_split(df_rnn, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval)
print(f'Input tensor shape {x_train.shape}')
n_samples = x_train.shape[0]
n_features = x_train.shape[2]
input_model = Input(shape=(n_time_steps, n_features))
lstm_state_seq, state_h, state_c = LSTM(hidden_units, return_sequences=True, return_state=True)(input_model)
output_dense = Dense(1)(state_c)
model_lstm = Model(inputs=input_model, outputs=output_dense)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
model_lstm.compile(loss='mean_squared_error', metrics=['mean_absolute_percentage_error'], optimizer='RMSprop')
model_lstm.summary()
model_lstm.fit(x_train, y_train, epochs=20, batch_size=4, validation_data=(x_test, y_test), use_multiprocessing=True, verbose=1)
score = model_lstm.evaluate(x_test, y_test, verbose=0)
print('Test MSE:', score[0])
print('Test MAPE:', score[1])Input tensor shape (796, 41, 6)
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 40, 6)] 0
_________________________________________________________________
lstm (LSTM) [(None, 40, 8), (None, 8) 480
_________________________________________________________________
dense (Dense) (None, 1) 9
=================================================================
Total params: 489
Trainable params: 489
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
199/199 [==============================] - 2s 9ms/step - loss: 0.0443 - mean_absolute_percentage_error: 135541.7031 - val_loss: 0.0111 - val_mean_absolute_percentage_error: 22.2035
Epoch 2/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0134 - mean_absolute_percentage_error: 90105.8125 - val_loss: 0.0108 - val_mean_absolute_percentage_error: 20.5822
Epoch 3/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0063 - mean_absolute_percentage_error: 148959.5625 - val_loss: 0.0040 - val_mean_absolute_percentage_error: 13.3330
Epoch 4/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0039 - mean_absolute_percentage_error: 113662.0000 - val_loss: 0.0037 - val_mean_absolute_percentage_error: 10.9933
Epoch 5/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0033 - mean_absolute_percentage_error: 101148.6328 - val_loss: 0.0039 - val_mean_absolute_percentage_error: 11.1413
Epoch 6/20
199/199 [==============================] - 2s 8ms/step - loss: 0.0031 - mean_absolute_percentage_error: 139972.2188 - val_loss: 0.0021 - val_mean_absolute_percentage_error: 9.0054
Epoch 7/20
199/199 [==============================] - 2s 11ms/step - loss: 0.0028 - mean_absolute_percentage_error: 103846.0000 - val_loss: 0.0063 - val_mean_absolute_percentage_error: 13.9849
Epoch 8/20
199/199 [==============================] - 2s 8ms/step - loss: 0.0026 - mean_absolute_percentage_error: 90294.8125 - val_loss: 0.0047 - val_mean_absolute_percentage_error: 13.6537
Epoch 9/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0024 - mean_absolute_percentage_error: 98722.0781 - val_loss: 0.0042 - val_mean_absolute_percentage_error: 12.7327
Epoch 10/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0023 - mean_absolute_percentage_error: 68499.7188 - val_loss: 0.0051 - val_mean_absolute_percentage_error: 13.6249
Epoch 11/20
199/199 [==============================] - 2s 8ms/step - loss: 0.0021 - mean_absolute_percentage_error: 84607.8203 - val_loss: 0.0020 - val_mean_absolute_percentage_error: 9.4657
Epoch 12/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0022 - mean_absolute_percentage_error: 66879.0312 - val_loss: 0.0035 - val_mean_absolute_percentage_error: 10.3418
Epoch 13/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0020 - mean_absolute_percentage_error: 77705.5781 - val_loss: 0.0016 - val_mean_absolute_percentage_error: 8.0591
Epoch 14/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0019 - mean_absolute_percentage_error: 55856.9336 - val_loss: 0.0016 - val_mean_absolute_percentage_error: 7.8036
Epoch 15/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0019 - mean_absolute_percentage_error: 56749.0938 - val_loss: 0.0026 - val_mean_absolute_percentage_error: 8.9563
Epoch 16/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0017 - mean_absolute_percentage_error: 62305.7891 - val_loss: 0.0014 - val_mean_absolute_percentage_error: 7.6600
Epoch 17/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0017 - mean_absolute_percentage_error: 71059.7266 - val_loss: 0.0021 - val_mean_absolute_percentage_error: 9.3004
Epoch 18/20
199/199 [==============================] - 1s 7ms/step - loss: 0.0016 - mean_absolute_percentage_error: 42330.2109 - val_loss: 0.0030 - val_mean_absolute_percentage_error: 10.4733
Epoch 19/20
199/199 [==============================] - 2s 8ms/step - loss: 0.0016 - mean_absolute_percentage_error: 43206.3906 - val_loss: 0.0028 - val_mean_absolute_percentage_error: 10.8260
Epoch 20/20
199/199 [==============================] - 2s 8ms/step - loss: 0.0016 - mean_absolute_percentage_error: 5500.9253 - val_loss: 0.0016 - val_mean_absolute_percentage_error: 7.5576
Test MSE: 0.0016421948093920946
Test MAPE: 7.557589530944824
</code>
# Step 3: Visualize the Forecast and Evolution of the Hidden State
We first plot the forecast to show that the model fits well. Next, we visualize how individual components of the hidden state evolve over time:
* We can see that some states actually extract seasonal and trend components, but this is not guaranteed.
* We also overlay the plots with the covariates to check whether there are any correlations between states and covariates. We see that the states do not correlate much with the covariate patterns._____no_output_____
<code>
input_window_size = n_time_steps*time_step_interval
x = np.vstack([x_train, x_test])
y_hat = model_lstm.predict(x)
forecast = np.append(np.zeros(input_window_size), y_hat)
#
# plot the forecast
#
fig, ax = plt.subplots(1, figsize=(20, 5))
ax.plot(df_rnn.index, forecast, label=f'Forecast ({forecast_days_ahead} days ahead)')
ax.plot(df_rnn.index, df_rnn['signal'], label='Signal')
ax.axvline(x=df.index[int(len(df) * train_ratio)], linestyle='--')
ax.legend()
plt.show()
#
# plot the evolution of the LSTM state
#
lstm_state_tap = Model(model_lstm.input, lstm_state_seq)
lstm_state_trace = lstm_state_tap.predict(x)
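# keep the hidden state at the last time step of each input patch: (samples, hidden) -> (hidden, samples)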
state_series = lstm_state_trace[:, -1, :].T
fig, ax = plt.subplots(len(state_series), figsize=(20, 15))
for i, state in enumerate(state_series):
ax[i].plot(df_rnn.index[:len(state)], state, label=f'State dimension {i}')
for j in [1, 2]:
ax[i].plot(df_rnn.index[:len(state)], df_rnn[f'covariate_0{j}'][:len(state)], color='#bbbbbb', label=f'Covariate 0{j}')
ax[i].legend(loc='upper right')
plt.show()_____no_output_____
</code>
# Step 4: Define and Fit LSTM with Attention (LSTM-A) Model
We fit the LSTM with attention model to analyze the contribution of individual time steps within the input patches. This can help to reconstruct the (unknown) memory link function. _____no_output_____
<code>
#
# parameters
#
n_time_steps = 5 # length of LSTM input in samples
time_step_interval = 10 # sampling interval, days
hidden_units = 256 # LSTM state dimensionality
forecast_days_ahead = 14
train_ratio = 0.8
def fit_lstm_a(df, train_verbose = 0, score_verbose = 0):
df_rnn = prepare_features_rnn(df)
x_train, y_train, x_test, y_test = train_test_split(df_rnn, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval)
n_steps = x_train.shape[1]  # patch length in time steps (shape[0] is the number of samples)
n_features = x_train.shape[2]
#
# define the model: LSTM with attention
#
main_input = Input(shape=(n_steps, n_features))
activations = LSTM(hidden_units, recurrent_dropout=0.1, return_sequences=True)(main_input)
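# score each time step with a shared dense layer, then normalize the scores
# into attention weights with a softmax over the time axis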
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax', name = 'attention_weigths')(attention)
attention = RepeatVector(hidden_units * 1)(attention)
attention = Permute([2, 1])(attention)
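# attention-weighted sum of the LSTM activations over the time axis
# (RepeatVector/Permute broadcast the weights across all hidden dimensions)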
weighted_activations = Multiply()([activations, attention])
weighted_activations = Lambda(lambda xin: K.sum(xin, axis=-2), output_shape=(hidden_units,))(weighted_activations)
main_output = Dense(1, activation='sigmoid')(weighted_activations)
model_attn = Model(inputs=main_input, outputs=main_output)
#
# fit the model
#
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
model_attn.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['mean_absolute_percentage_error'])
history = model_attn.fit(x_train, y_train, batch_size=4, epochs=30, verbose=train_verbose, validation_data=(x_test, y_test))
score = model_attn.evaluate(x_test, y_test, verbose=0)
if score_verbose > 0:
print(f'Test MSE [{score[0]}], MAPE [{score[1]}]')
return model_attn, df_rnn, x_train, x_test
df = create_signal(links = [linear_link, linear_link])
model_attn, df_rnn, x_train, x_test = fit_lstm_a(df, train_verbose = 1, score_verbose = 1)Epoch 1/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0531 - mean_absolute_percentage_error: 554625.5625 - val_loss: 0.0354 - val_mean_absolute_percentage_error: 51.5320
Epoch 2/30
207/207 [==============================] - 3s 16ms/step - loss: 0.0267 - mean_absolute_percentage_error: 415985.8125 - val_loss: 0.0247 - val_mean_absolute_percentage_error: 45.0028
Epoch 3/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0194 - mean_absolute_percentage_error: 315432.1562 - val_loss: 0.0214 - val_mean_absolute_percentage_error: 43.1922
Epoch 4/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0156 - mean_absolute_percentage_error: 210736.5000 - val_loss: 0.0192 - val_mean_absolute_percentage_error: 36.7590
Epoch 5/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0150 - mean_absolute_percentage_error: 197102.4844 - val_loss: 0.0151 - val_mean_absolute_percentage_error: 30.5381
Epoch 6/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0128 - mean_absolute_percentage_error: 230763.5625 - val_loss: 0.0130 - val_mean_absolute_percentage_error: 30.0115
Epoch 7/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0115 - mean_absolute_percentage_error: 157021.1562 - val_loss: 0.0283 - val_mean_absolute_percentage_error: 35.5017
Epoch 8/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0112 - mean_absolute_percentage_error: 190294.4062 - val_loss: 0.0162 - val_mean_absolute_percentage_error: 36.7094
Epoch 9/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0100 - mean_absolute_percentage_error: 139721.3438 - val_loss: 0.0099 - val_mean_absolute_percentage_error: 27.9008
Epoch 10/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0088 - mean_absolute_percentage_error: 152096.2031 - val_loss: 0.0187 - val_mean_absolute_percentage_error: 40.2046
Epoch 11/30
207/207 [==============================] - 3s 14ms/step - loss: 0.0077 - mean_absolute_percentage_error: 121705.4141 - val_loss: 0.0100 - val_mean_absolute_percentage_error: 27.0696
Epoch 12/30
207/207 [==============================] - 3s 16ms/step - loss: 0.0066 - mean_absolute_percentage_error: 170348.6719 - val_loss: 0.0067 - val_mean_absolute_percentage_error: 16.7347
Epoch 13/30
207/207 [==============================] - 4s 17ms/step - loss: 0.0054 - mean_absolute_percentage_error: 150606.1250 - val_loss: 0.0062 - val_mean_absolute_percentage_error: 20.2170
Epoch 14/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0050 - mean_absolute_percentage_error: 179741.5469 - val_loss: 0.0125 - val_mean_absolute_percentage_error: 30.2266
Epoch 15/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0047 - mean_absolute_percentage_error: 135016.8281 - val_loss: 0.0042 - val_mean_absolute_percentage_error: 14.7045
Epoch 16/30
207/207 [==============================] - 3s 16ms/step - loss: 0.0046 - mean_absolute_percentage_error: 165125.8438 - val_loss: 0.0038 - val_mean_absolute_percentage_error: 13.8870
Epoch 17/30
207/207 [==============================] - 3s 16ms/step - loss: 0.0041 - mean_absolute_percentage_error: 147541.7500 - val_loss: 0.0043 - val_mean_absolute_percentage_error: 15.7311
Epoch 18/30
207/207 [==============================] - 4s 17ms/step - loss: 0.0039 - mean_absolute_percentage_error: 102583.0781 - val_loss: 0.0052 - val_mean_absolute_percentage_error: 15.9809
Epoch 19/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0040 - mean_absolute_percentage_error: 109036.5156 - val_loss: 0.0086 - val_mean_absolute_percentage_error: 19.8476
Epoch 20/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0038 - mean_absolute_percentage_error: 125796.0547 - val_loss: 0.0053 - val_mean_absolute_percentage_error: 15.0676
Epoch 21/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0036 - mean_absolute_percentage_error: 93179.3828 - val_loss: 0.0032 - val_mean_absolute_percentage_error: 13.7895
Epoch 22/30
207/207 [==============================] - 4s 20ms/step - loss: 0.0036 - mean_absolute_percentage_error: 167399.2188 - val_loss: 0.0029 - val_mean_absolute_percentage_error: 13.8099
Epoch 23/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0035 - mean_absolute_percentage_error: 160599.3594 - val_loss: 0.0026 - val_mean_absolute_percentage_error: 13.1479
Epoch 24/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0034 - mean_absolute_percentage_error: 144749.8281 - val_loss: 0.0028 - val_mean_absolute_percentage_error: 13.4400
Epoch 25/30
207/207 [==============================] - 4s 18ms/step - loss: 0.0034 - mean_absolute_percentage_error: 188223.5781 - val_loss: 0.0039 - val_mean_absolute_percentage_error: 13.9034
Epoch 26/30
207/207 [==============================] - 3s 17ms/step - loss: 0.0034 - mean_absolute_percentage_error: 131071.7188 - val_loss: 0.0029 - val_mean_absolute_percentage_error: 13.2374
Epoch 27/30
207/207 [==============================] - 3s 16ms/step - loss: 0.0033 - mean_absolute_percentage_error: 121069.7266 - val_loss: 0.0072 - val_mean_absolute_percentage_error: 19.0120
Epoch 28/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0032 - mean_absolute_percentage_error: 121252.3906 - val_loss: 0.0032 - val_mean_absolute_percentage_error: 13.2904
Epoch 29/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0032 - mean_absolute_percentage_error: 127607.8750 - val_loss: 0.0034 - val_mean_absolute_percentage_error: 13.7815
Epoch 30/30
207/207 [==============================] - 3s 15ms/step - loss: 0.0030 - mean_absolute_percentage_error: 144656.9062 - val_loss: 0.0029 - val_mean_absolute_percentage_error: 14.1534
Test MSE [0.002931196242570877], MAPE [14.153406143188477]
input_window_size = n_time_steps*time_step_interval
x = np.vstack([x_train, x_test])
y_hat = model_attn.predict(x)
forecast = np.append(np.zeros(input_window_size), y_hat)
#
# plot the forecast
#
fig, ax = plt.subplots(1, figsize=(20, 5))
ax.plot(df_rnn.index, forecast, label=f'Forecast ({forecast_days_ahead} days ahead)')
ax.plot(df_rnn.index, df_rnn['signal'], label='Signal')
ax.axvline(x=df.index[int(len(df) * train_ratio)], linestyle='--')
ax.legend()
plt.show()_____no_output_____
</code>
# Step 5: Analyze LSTM-A Model
The LSTM with attention model allows us to extract the matrix of attention weights. For each sample, we have a vector of weights where each weight corresponds to one time step (lag) in the input patch.
* For the linear link, only the contemporaneous covariates/features have high contribution weights.
* For the memory link, the "LSTMogram" is more blurred because lagged samples have high contributions as well._____no_output_____
<code>
#
# evaluate attention weights for each time step
#
attention_model = Model(inputs=model_attn.input, outputs=model_attn.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
print(f'Weight matrix shape {a.shape}')
fig, ax = plt.subplots(1, figsize=(10, 2))
ax.imshow(a.T, cmap='viridis', interpolation='nearest', aspect='auto')
ax.grid(None)Weight matrix shape (826, 6)
#
# generate multiple datasets and perform LSTM-A analysis for each of them
#
n_evaluations = 4
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
fig, ax = plt.subplots(n_evaluations, 2, figsize=(16, n_evaluations * 2))
for j, link in enumerate([linear_link, mem_link]):
for i in range(n_evaluations):
print(f'Evaluating LSTMogram for link [{link.__name__}], trial [{i}]...')
df = create_signal(links = [link, link])
model_attn, df_rnn, x_train, _ = fit_lstm_a(df, score_verbose = 0)
attention_model = Model(inputs=model_attn.input, outputs=model_attn.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
ax[i, j].imshow(a.T, cmap='viridis', interpolation='nearest', aspect='auto')
ax[i, j].grid(None)Evaluating LSTMogram for link [linear_link], trial [0]...
Evaluating LSTMogram for link [linear_link], trial [1]...
Evaluating LSTMogram for link [linear_link], trial [2]...
Evaluating LSTMogram for link [linear_link], trial [3]...
Evaluating LSTMogram for link [mem_link], trial [0]...
Evaluating LSTMogram for link [mem_link], trial [1]...
Evaluating LSTMogram for link [mem_link], trial [2]...
Evaluating LSTMogram for link [mem_link], trial [3]...
</code>
| {
"repository": "DeloitteHux/tensor-house",
"path": "time-series/lstm-forecasting.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 954343,
"hexsha": "cb658c452e888bbb8809fec6fc4b43d91398bfa2",
"max_line_length": 336192,
"avg_line_length": 1441.6057401813,
"alphanum_fraction": 0.9533658234
} |
# Notebook from CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium
Path: Chapter6/Fig6_2_Trumpler1930.ipynb
# Trumpler 1930 Dust Extinction
Figure 6.2 from Chapter 6 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021,
Cambridge University Press.
Data are from [Trumpler, R. 1930, Lick Observatory Bulletin #420, 14, 154](https://ui.adsabs.harvard.edu/abs/1930LicOB..14..154T), Table 3. The extinction curve derived
uses a different normalization in the bulletin paper than in the oft-reproduced figure from the Trumpler
1930 PASP paper ([Trumpler, R. 1930, PASP, 42, 267](https://ui.adsabs.harvard.edu/abs/1930PASP...42..267T),
Figure 1).
Table 3 gives distances and linear diameters to open star clusters. We've created two data files:
* Trumpler_GoodData.txt - Unflagged
* Trumpler_BadData.txt - Trumpler's "somewhat uncertain or less reliable" data, designated by the entry being printed in italics in Table 3.
The distances we use are from Table 3 column 8 ("Obs." distance from spectral types) and column 10
("from diam."), both converted to kiloparsecs._____no_output_____
<code>
%matplotlib inline
import os
import sys
import math
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter
import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)_____no_output_____
</code>
## Standard Plot Format
Setup the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style._____no_output_____
<code>
figName = 'Fig6_2'
# graphic aspect ratio = width/height
aspect = 4.0/3.0 # 4:3
# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches
# output format and resolution
figFmt = 'png'
dpi = 600
# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 10
labelFontSize = 6
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect
# Plot filename
plotFile = f'{figName}.{figFmt}'
# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})
# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})
# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'_____no_output_____
</code>
## Trumpler (1930) Data and Extinction Curve
The data are derived from Table 3 in Trumpler 1930, converted to modern units of kiloparsecs. We've divided
the data into two files based on Trumpler's 2-fold division of the data into reliable and "somewhat uncertain
or less reliable", which we abbreviate as "good" and "bad", respectively. This is the division used for
Trumpler's original diagram.
The Trumpler extinction curve is of the form:
$$ d_{L} = d_{A} e^{\kappa d_{A}/2}$$
where the extinction coefficient is $\kappa=0.6$ kpc$^{-1}$; the curve is plotted as a dashed line._____no_output_____
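As a short sketch of where the factor of two in the exponent comes from, assume extinction grows linearly with distance at $a$ magnitudes per kiloparsec ($a$ is introduced only for this derivation). The apparent distance modulus then includes the extinction term:
$$ m - M = 5\log_{10}\left(\frac{d_{L}}{10\,\mathrm{pc}}\right) = 5\log_{10}\left(\frac{d_{A}}{10\,\mathrm{pc}}\right) + a\,d_{A} $$
so that
$$ d_{L} = d_{A}\,10^{a d_{A}/5} = d_{A}\,e^{\kappa d_{A}/2}, \qquad \kappa = \frac{2\ln 10}{5}\,a \approx 0.921\,a. $$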
<code>
# Good data
data = pd.read_csv('Trumpler_GoodData.txt',sep=r'\s+',comment='#')
dLgood = np.array(data['dL']) # luminosity distance
dAgood = np.array(data['dA']) # angular diameter distance
# Bad data
data = pd.read_csv('Trumpler_BadData.txt',sep=r'\s+',comment='#')
dLbad = np.array(data['dL']) # luminosity distance
dAbad = np.array(data['dA']) # angular diameter distance
# Trumpler extinction curve
k = 0.6 # kpc^-1 [modern units]
dAext = np.linspace(0.0,4.0,401)
dLext = dAext*np.exp(k*dAext/2)_____no_output_____
</code>
## Cluster angular diameter distance vs. luminosity distance
Plot open cluster angular distance against luminosity distance (what Trumpler called "photometric distance").
Good data are plotted as filled circles, the bad (less-reliable) data are plotted as open circles.
The unextincted 1:1 relation is plotted as a dotted line._____no_output_____
<code>
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_minor_locator(MultipleLocator(0.2))
ax.set_xlabel(r'Luminosity distance [kpc]')
ax.set_xlim(0,5)
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(0.2))
ax.set_ylabel(r'Angular diameter distance [kpc]')
ax.set_ylim(0,4)
plt.plot(dLgood,dAgood,'o',mfc='black',mec='black',ms=3,zorder=10,mew=0.5)
plt.plot(dLbad,dAbad,'o',mfc='white',mec='black',ms=3,zorder=9,mew=0.5)
plt.plot([0,4],[0,4],':',color='black',lw=1,zorder=5)
plt.plot(dLext,dAext,'--',color='black',lw=1,zorder=7)
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')_____no_output_____
</code>
| {
"repository": "CambridgeUniversityPress/Interstellar-and-Intergalactic-Medium",
"path": "Chapter6/Fig6_2_Trumpler1930.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 10,
"size": 7195,
"hexsha": "cb661cc31daeb12034b6660a0d9cfd5ab9bb8e6d",
"max_line_length": 178,
"avg_line_length": 32.7045454545,
"alphanum_fraction": 0.5898540653
} |
# Notebook from labs15-rv-life/data-science
Path: app_reviews/app_reviews_scattertext.ipynb
<code>
import pandas as pd_____no_output_____ios = pd.read_csv('app_reviews/rv_ios_app_reviews.csv')
ios['Content'] = ios['label_title']+ios['review']
ios = ios.drop(['app_name','app_version','label_title','review'], axis=1)
print(ios.shape)
ios.head()(1962, 2)
ios['store_rating'].value_counts()_____no_output_____android = pd.read_csv('app_reviews/rv_android_reviews.csv')
# android = android.drop(['Unnamed: 0','Unnamed: 0.1','Date','App_name'], axis=1)
# android = android.rename(columns={'Numbers':'store_rating'})
print(android.shape)
android.head()(639, 2)
android = pd.read_csv('app_reviews/rv_android_reviews.csv')
# android = android.drop(['Unnamed: 0','Unnamed: 0.1','Date','App_name'], axis=1)
# android = android.rename(columns={'Numbers':'store_rating'})
print(android.shape)
android.head()(6746, 6)
android['Star'].value_counts()_____no_output_____df = pd.concat([android, ios], join='outer', ignore_index=True, axis=0, sort=True)
print(df.shape)
df.head()(2601, 2)
df.isnull().sum()_____no_output_____df['store_rating'].value_counts()_____no_output_____df['rating'] = df['store_rating'].replace({4:5, 2:1, 3:1}).astype(float).astype(str)
# df['rating'] = df['store_rating']
# df['rating'] = df['rating']
df['rating'].value_counts()_____no_output_____df.dtypes_____no_output_____import spacy
import scattertext
import pandas as pd
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 200)_____no_output_____nlp = spacy.load("en_core_web_lg")_____no_output_____# add stop words
with open('stopwords.txt', 'r') as f:
stopwords_text = f.read()  # avoid shadowing the built-in str
set_stopwords = set(stopwords_text.split('\n'))
nlp.Defaults.stop_words |= set_stopwords_____no_output_____# build a scattertext corpus over the combined review dataframe
corpus = (scattertext.CorpusFromPandas(df,
category_col='rating',
text_col='Content',
nlp=nlp)
.build()
.remove_terms(nlp.Defaults.stop_words, ignore_absences=True)
)_____no_output_____term_freq_df = corpus.get_term_freq_df()
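# the scaled F-score balances how specific a term is to a rating category against how often it occurs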
term_freq_df['highratingscore'] = corpus.get_scaled_f_scores('5.0')
term_freq_df['poorratingscore'] = corpus.get_scaled_f_scores('1.0')
df_high = term_freq_df.sort_values(by='highratingscore',
ascending = False)
df_poor = term_freq_df.sort_values(by='poorratingscore',
ascending=False)
# df_high = df_high[['highratingscore', 'poorratingscore']]
df_high['highratingscore'] = round(df_high['highratingscore'], 2)
df_high['poorratingscore'] = round(df_high['poorratingscore'], 2)
df_high = df_high.reset_index(drop=False)
# df_high = df_high.head(20)
# df_poor = df_poor[['highratingscore', 'poorratingscore']]
df_poor['highratingscore'] = round(df_poor['highratingscore'], 2)
df_poor['poorratingscore'] = round(df_poor['poorratingscore'], 2)
df_poor = df_poor.reset_index(drop=False)
# df_poor = df_poor.head(20)
# df_terms = pd.concat([df_high, df_poor],
# ignore_index=True)
# df_terms_____no_output_____Shopping_yelp_sample_high = df_high
Shopping_yelp_sample_poor = df_poor_____no_output_____Shopping_yelp_sample_high.sort_values(by='5.0 freq', ascending = False).head(100)_____no_output_____Shopping_yelp_sample_poor.sort_values(by='1.0 freq', ascending = False).head(100)_____no_output_____Auto_Repair_yelp_high.sort_values(by='5.0 freq', ascending = False).head(100)_____no_output_____Auto_Repair_yelp_poor.sort_values(by='1.0 freq', ascending = False).head(100)_____no_output_____Hair_Salons_yelp_high.sort_values(by='5.0 freq', ascending = False).head(100)_____no_output_____Hair_Salons_yelp_poor.sort_values(by='1.0 freq', ascending = False).head(100)_____no_output_____Fashion_yelp_high.sort_values(by='5.0 freq', ascending = False)_____no_output_____Fashion_yelp_poor.sort_values(by='1.0 freq', ascending = False).head(100)_____no_output_____Professional_Services_yelp_poor_json = Professional_Services_yelp_poor_json.drop([136,3577,2707,664626,1979,2678])
Professional_Services_yelp_poor_json.sort_values(by='1 freq', ascending = False).head(10)_____no_output_____Professional_Services_yelp_poor.sort_values(by='1 freq', ascending = False).head(100)_____no_output_____Professional_Services_yelp_high_json = Professional_Services_yelp_high_json.drop([1777])
Professional_Services_yelp_best_json = Professional_Services_yelp_high_json.sort_values(by='5 freq', ascending = False).head()_____no_output_____Professional_Services_yelp_high.sort_values(by='5 freq', ascending = False).head()
Professional_Services_yelp_poor_json.sort_values(by='1 freq', ascending = False).head()_____no_output_____df = df.head()
df.to_json('Fashion_yelp_high_rating_words.json', orient='records', lines=True)_____no_output_____
</code>
| {
"repository": "labs15-rv-life/data-science",
"path": "app_reviews/app_reviews_scattertext.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 204273,
"hexsha": "cb66b2c00f5d387bd30d37806d9fbea12a81a583",
"max_line_length": 601,
"avg_line_length": 34.6577875806,
"alphanum_fraction": 0.2834540052
} |
# Notebook from seungkyoon/SimpleITK-Notebooks
Path: Python/10_matplotlib's_imshow.ipynb
# Using `matplotlib` to display inline images
In this notebook we will explore using `matplotlib` to display images in our notebooks, and work towards developing a reusable function to display 2D, 3D, color, and label overlays for SimpleITK images.
We will also look at the subtleties of working with image filters that require the input images to be overlapping. _____no_output_____
<code>
import matplotlib.pyplot as plt
%matplotlib inline
import SimpleITK as sitk
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata_____no_output_____
</code>
SimpleITK has a built-in `Show` method which saves the image to disk and launches a user-configurable program (defaults to ImageJ) to display the image. _____no_output_____
<code>
img1 = sitk.ReadImage(fdata("cthead1.png"))
sitk.Show(img1, title="cthead1")_____no_output_____img2 = sitk.ReadImage(fdata("VM1111Shrink-RGB.png"))
sitk.Show(img2, title="Visible Human Head")_____no_output_____nda = sitk.GetArrayViewFromImage(img1)
plt.imshow(nda)_____no_output_____nda = sitk.GetArrayViewFromImage(img2)
ax = plt.imshow(nda)_____no_output_____def myshow(img):
nda = sitk.GetArrayViewFromImage(img)
plt.imshow(nda)_____no_output_____myshow(img2)_____no_output_____myshow(sitk.Expand(img2, [10]*5))_____no_output_____
</code>
This image does not appear bigger: matplotlib sizes the figure by its own defaults rather than by the image's pixel dimensions.
There are numerous improvements that we can make:
- support 3d images
- include a title
- use physical pixel size for axis labels
- show the image as gray values_____no_output_____
<code>
def myshow(img, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
if nda.ndim == 3:
# fastest dim, either component or x
c = nda.shape[-1]
# the the number of components is 3 or 4 consider it an RGB image
if not c in (3,4):
nda = nda[nda.shape[0]//2,:,:]
elif nda.ndim == 4:
c = nda.shape[-1]
if not c in (3,4):
raise RuntimeError("Unable to show 3D-vector Image")
# take a z-slice
nda = nda[nda.shape[0]//2,:,:,:]
ysize = nda.shape[0]
xsize = nda.shape[1]
# Make a figure big enough to accommodate an axis of xpixels by ypixels
# as well as the ticklabels, etc...
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
# Make the axis the right size...
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
t = ax.imshow(nda,extent=extent,interpolation=None)
if nda.ndim == 2:
t.set_cmap("gray")
if(title):
plt.title(title)_____no_output_____myshow(sitk.Expand(img2,[2,2]), title="Big Visibile Human Head")_____no_output_____
</code>
## Tips and Tricks for Visualizing Segmentations
We start by loading a segmented image. As the segmentation is just an image with integral data, we can display the labels as we would any other image._____no_output_____
<code>
img1_seg = sitk.ReadImage(fdata("2th_cthead1.png"))
myshow(img1_seg, "Label Image as Grayscale")_____no_output_____
</code>
We can also map the scalar label image to a color image as shown below._____no_output_____
<code>
myshow(sitk.LabelToRGB(img1_seg), title="Label Image as RGB")_____no_output_____
</code>
Most filters which take multiple images as arguments require that the images occupy the same physical space. That is, the pixels you are operating on must refer to the same physical location. Luckily for us, our image and labels do occupy the same physical space, allowing us to overlay the segmentation onto the original image._____no_output_____
<code>
myshow(sitk.LabelOverlay(img1, img1_seg), title="Label Overlayed")_____no_output_____
</code>
We can also overlay the labels as contours._____no_output_____
<code>
myshow(sitk.LabelOverlay(img1, sitk.LabelContour(img1_seg), 1.0))_____no_output_____
</code>
## Tips and Tricks for 3D Image Visualization
Now let's move on to visualizing real MRI images with segmentations. The Surgical Planning Laboratory at Brigham and Women's Hospital provides a wonderful Multi-modality MRI-based Atlas of the Brain that we can use.
Please note, what is done here is for convenience and is not the common way images are displayed for radiological work._____no_output_____
<code>
img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))
img_labels = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/hncma-atlas.nrrd"))_____no_output_____myshow(img_T1)
myshow(img_T2)
myshow(sitk.LabelToRGB(img_labels))_____no_output_____size = img_T1.GetSize()
myshow(img_T1[:,size[1]//2,:])_____no_output_____slices =[img_T1[size[0]//2,:,:], img_T1[:,size[1]//2,:], img_T1[:,:,size[2]//2]]
myshow(sitk.Tile(slices, [3,1]), dpi=20)_____no_output_____nslices = 5
slices = [ img_T1[:,:,s] for s in range(0, size[2], size[0]//(nslices+1))]
myshow(sitk.Tile(slices, [1,0]))_____no_output_____
</code>
Let's create a version of the show methods which allows the selection of slices to be displayed._____no_output_____
<code>
def myshow3d(img, xslices=[], yslices=[], zslices=[], title=None, margin=0.05, dpi=80):
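# tile the requested x/y/z slices into one mosaic (padding shorter rows with an
# empty image so every row has the same number of tiles) and display it with myshow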
size = img.GetSize()
img_xslices = [img[s,:,:] for s in xslices]
img_yslices = [img[:,s,:] for s in yslices]
img_zslices = [img[:,:,s] for s in zslices]
maxlen = max(len(img_xslices), len(img_yslices), len(img_zslices))
img_null = sitk.Image([0,0], img.GetPixelID(), img.GetNumberOfComponentsPerPixel())
img_slices = []
d = 0
if len(img_xslices):
img_slices += img_xslices + [img_null]*(maxlen-len(img_xslices))
d += 1
if len(img_yslices):
img_slices += img_yslices + [img_null]*(maxlen-len(img_yslices))
d += 1
if len(img_zslices):
img_slices += img_zslices + [img_null]*(maxlen-len(img_zslices))
d +=1
if maxlen != 0:
if img.GetNumberOfComponentsPerPixel() == 1:
img = sitk.Tile(img_slices, [maxlen,d])
# TODO: check in code to get the Tile filter working with vector images
else:
img_comps = []
for i in range(0,img.GetNumberOfComponentsPerPixel()):
img_slices_c = [sitk.VectorIndexSelectionCast(s, i) for s in img_slices]
img_comps.append(sitk.Tile(img_slices_c, [maxlen,d]))
img = sitk.Compose(img_comps)
myshow(img, title, margin, dpi)
_____no_output_____myshow3d(img_T1,yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____myshow3d(img_T2,yslices=range(50,size[1]-50,30), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____myshow3d(sitk.LabelToRGB(img_labels),yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____
</code>
We next visualize the T1 image with an overlay of the labels._____no_output_____
<code>
# Why doesn't this work? The images do overlap in physical space.
myshow3d(sitk.LabelOverlay(img_T1,img_labels),yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____
</code>
Two ways to solve our problem: (1) resample the labels onto the image grid (2) resample the image onto the label grid. The difference between the two from a computation standpoint depends on the grid sizes and on the interpolator used to estimate values at non-grid locations.
Note interpolating a label image with an interpolator that can generate non-label values is problematic as you may end up with an image that has more classes/labels than your original. This is why we only use the nearest neighbor interpolator when working with label images._____no_output_____
<code>
# Option 1: Resample the label image using the identity transformation
resampled_img_labels = sitk.Resample(img_labels, img_T1, sitk.Transform(), sitk.sitkNearestNeighbor,
0.0, img_labels.GetPixelID())
# Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can
# be combined with the color overlay (we use an alpha blending of 0.5).
myshow3d(sitk.LabelOverlay(sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8),resampled_img_labels, 0.5),
yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____# Option 2: Resample the T1 image using the identity transformation
resampled_T1 = sitk.Resample(img_T1, img_labels, sitk.Transform(), sitk.sitkLinear,
0.0, img_T1.GetPixelID())
# Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can
# be combined with the color overlay (we use an alpha blending of 0.5).
myshow3d(sitk.LabelOverlay(sitk.Cast(sitk.RescaleIntensity(resampled_T1), sitk.sitkUInt8),img_labels, 0.5),
yslices=range(50,size[1]-50,20), zslices=range(50,size[2]-50,20), dpi=30)_____no_output_____
</code>
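As a quick sanity check of the two sampling grids, a minimal sketch using the images already loaded above:
<code>
# the two volumes cover the same physical region but are sampled on different grids,
# so the same slice index corresponds to different physical locations in each result
print('T1 size:', img_T1.GetSize(), ' spacing:', img_T1.GetSpacing())
print('labels size:', img_labels.GetSize(), ' spacing:', img_labels.GetSpacing())
</code>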
Why are the two displays above different? (hint: in the calls to the "myshow3d" function the indexes of the y and z slices are the same). _____no_output_____### There and back again
In some cases you may want to work with the intensity values or labels outside of SimpleITK, for example when you implement an algorithm in Python and don't care about the physical spacing of things (you are actually assuming the volume is isotropic).
How do you get back to the physical space?_____no_output_____
<code>
def my_algorithm(image_as_numpy_array):
# res is the image result of your algorithm, has the same grid size as the original image
res = image_as_numpy_array
return res
# There (run your algorithm), and back again (convert the result into a SimpleITK image)
res_image = sitk.GetImageFromArray(my_algorithm(sitk.GetArrayFromImage(img_T1)))
# Now let's add the original image to the processed one. Why doesn't this work?
res_image + img_T1_____no_output_____
</code>
When we converted the SimpleITK image to a numpy array we lost the spatial information (origin, spacing, direction cosine matrix). Converting the numpy array back to a SimpleITK image simply sets the values of these components to reasonable defaults. We can easily restore the original values to deal with this issue. _____no_output_____
<code>
res_image.CopyInformation(img_T1)
res_image + img_T1_____no_output_____
</code>
The ``myshow`` and ``myshow3d`` functions are really useful. They have been copied into a "myshow.py" file so that they can be imported into other notebooks._____no_output_____
| {
"repository": "seungkyoon/SimpleITK-Notebooks",
"path": "Python/10_matplotlib's_imshow.ipynb",
"matched_keywords": [
"ImageJ"
],
"stars": null,
"size": 17380,
"hexsha": "cb67a92d335303c9b6ac1b7344396ea6bf83251a",
"max_line_length": 323,
"avg_line_length": 29.5578231293,
"alphanum_fraction": 0.5734177215
} |
# Notebook from fomightez/cl_demo-binder
Path: notebooks/Using Biopython PDB Header Parser to get missing residues.ipynb
# Using Biopython's PDB Header parser to get missing residues
Previously this had to be run with a development version of Biopython that I got working [here](https://github.com/fomightez/BernBiopython). Current Biopython now has the essential functionality for missing residues in structure files, and so this notebook can be run here, where the current Biopython is installed.
You may also be interested in the notebook entitled 'Using Biopython's PDB module to list resolved residues and construct fit commands'. Think of this notebook as complementing that one. Depending on what you are trying to do (or use as the source of information), that one may be better suited._____no_output_____
<code>
#get stuctures
import os
files_needed = ["6AGB.pdb","6AH3.pdb"]
for file_needed in files_needed:
if not os.path.isfile(file_needed):
#os.system(f"curl -OL https://files.rcsb.org/download/{file_needed}.gz") #version of next line that works outside Jupyter/IPython
!curl -OL https://files.rcsb.org/download/{file_needed}.gz # gives more feedback in Jupyter
# os.system(f"gunzip {file_needed}.gz") #version of next line that works outside Jupyter/IPYthon
!gunzip {file_needed}.gz % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 491k 100 491k 0 0 666k 0 --:--:-- --:--:-- --:--:-- 665k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 519k 100 519k 0 0 872k 0 --:--:-- --:--:-- --:--:-- 871k
from Bio.PDB import *
h =parse_pdb_header('6AGB.pdb')
h['has_missing_residues']_____no_output_____from Bio.PDB import *
h =parse_pdb_header('6AGB.pdb')
h['missing_residues']_____no_output_____# Missing residue positions for specific chains
from Bio.PDB import *
from collections import defaultdict
h =parse_pdb_header('6AGB.pdb')
#parse per chain
chains_of_interest = ["F","G"]
# make a dictionary for each chain of interest with value of a list. The list will be the list of residues later
missing_per_chain = defaultdict(list)
# go through missing residues and populate each chain's list
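# each entry of h['missing_residues'] is a dict with keys such as 'model', 'res_name', 'chain', 'ssseq', and 'insertion'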
for residue in h['missing_residues']:
if residue["chain"] in chains_of_interest:
missing_per_chain[residue["chain"]].append(residue["ssseq"])
#print(missing_per_chain)
print('')
print("Missing from chain 'G':\n{}".format(missing_per_chain['G']))
print('\n\n')
for chain in missing_per_chain:
print(chain,missing_per_chain[chain])
Missing from chain 'G':
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 108, 109, 110, 111, 112, 113, 114]
F [1]
G [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 108, 109, 110, 111, 112, 113, 114]
# Missing residue positions for ALL chains
from Bio.PDB import *
from collections import defaultdict
# extract information on chains in structure
structure = PDBParser().get_structure('6AGB', '6AGB.pdb')
chains = [each.id for each in structure.get_chains()]
h =parse_pdb_header('6AGB.pdb')
# make a dictionary for each chain of interest with value of a list. The list will be the list of residue positions later
missing_per_chain = defaultdict(list)
# go through missing residues and populate each chain's list
for residue in h['missing_residues']:
if residue["chain"] in chains:
missing_per_chain[residue["chain"]].append(residue["ssseq"])
print(missing_per_chain)
print('')
print("Missing from chain 'K':\n{}".format(missing_per_chain['K']))defaultdict(<class 'list'>, {'B': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 525, 526, 527, 528, 529, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 743, 744, 745, 746, 747, 748, 749, 750, 751], 'C': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 189, 190, 191, 192, 193, 194, 195], 'D': [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76], 'E': [1, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173], 'F': [1], 'G': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 108, 109, 110, 111, 112, 113, 114], 'H': [1, 2], 'I': [243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293], 'K': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]})
Missing from chain 'K':
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
</code>
----
#### Compare two structures_____no_output_____
<code>
# Does 6AH3 have any residues missing that 6AGB doesn't, for chains shared between them?
from Bio.PDB import *
from collections import defaultdict
# extract information on chains in structure
structure = PDBParser().get_structure('6AGB', '6AGB.pdb') # USING CHAIN LISTING FROM THAT BECAUSE ONLY CARE ABOUT ONES SHARED
chains = [each.id for each in structure.get_chains()]
h =parse_pdb_header('6AH3.pdb')
# make a dictionary for each chain of interest with value of a list. The list will be the list of residue positions later
missing_per_chainh3 = defaultdict(list)
# go through missing residues and populate each chain's list
for residue in h['missing_residues']:
if residue["chain"] in chains:
missing_per_chainh3[residue["chain"]].append(residue["ssseq"])
print("Missing from chains in AGH3:\n{}".format(missing_per_chainh3))
print('')
print("Missing from chain 'K':\n{}".format(missing_per_chainh3['K']))
print ('')
same_result = missing_per_chainh3 == missing_per_chain
print("Same residues missing for chains shared by 6AGB and 6AH3?:\n{}".format(same_result))
print ('')
print("Chain by chain accounting of whether missing same residues between 6AGB and 6AH3?:")
same_residues_present_list = []
for chain in chains:
print(chain)
print (missing_per_chainh3[chain] == missing_per_chain[chain])
if missing_per_chainh3[chain] != missing_per_chain[chain]:
same_residues_present_list.append(chain)
print("\n\nFurther details on those chains where not the same residues missing:")
for chain in same_residues_present_list:
print("chain '{}' in 6AH3 has more missing residues than 6AGB:\n{}".format(chain,len(missing_per_chainh3[chain]) > len(missing_per_chain[chain]) ))
print("\ntotal # residues missing in chain '{}' of 6AH3: {}".format(chain,len(missing_per_chainh3[chain]) ))
print("total # residues missing in chain '{}' of 6AGB: {}\n".format(chain,len(missing_per_chain[chain]) ))Missing from chains in AGH3:
defaultdict(<class 'list'>, {'B': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 525, 526, 527, 528, 529, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 743, 744, 745, 746, 747, 748, 749, 750, 751], 'C': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 189, 190, 191, 192, 193, 194, 195], 'D': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76], 'E': [1, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173], 'F': [1], 'G': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 108, 109, 110, 111, 112, 113, 114], 'H': [1, 2], 'I': [243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293], 'K': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]})
Missing from chain 'K':
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
Same residues missing for chains shared by 6AGB and 6AH3?:
False
Chain by chain accounting of whether missing same residues between 6AGB and 6AH3?:
A
True
B
True
C
True
D
False
E
True
F
True
G
True
H
True
I
True
J
True
K
True
Further details on those chains where not the same residues missing:
chain 'D' in 6AH3 has more missing residues than 6AGB:
True
total # residues missing in chain 'D' of 6AH3: 76
total # residues missing in chain 'D' of 6AGB: 52
</code>
Chain D is the only one with differences. The one in 6AGB (without the substrate) has more residues in total, as it has fewer missing. (**Note that, depending on how the gaps are distributed, the chain with the most residues may not have more information than the other for a region you are particularly interested in. It is just a rough metric and you should look into the details. The 'Protein' tab at [PDBsum](https://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=index.html) is particularly useful for comparing missing residues in specific regions of proteins in structures.**)
Python's set math can be used to look into some of the specific missing residues. Here that is done to provide some insight into the parts to examine for Chain D:_____no_output_____
<code>
# How does Chain D compare specifically:
print("total missing:",len(missing_per_chain['D']))
print("total missing:",len(missing_per_chainh3['D']))
A= set(missing_per_chainh3['D'])
B= set(missing_per_chain['D'])
print("These are missing in 6AH3 Chain D but present in 6AGB:",A-B) # set math based on https://stackoverflow.com/a/1830195/8508004
print("These are present in 6AH3 Chain D but missing in 6AGB:",B-A) # set math based on https://stackoverflow.com/a/1830195/8508004total missing: 52
total missing: 76
These are missing in 6AH3 Chain D but present in 6AGB: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24}
These are present in 6AH3 Chain D but missing in 6AGB: set()
</code>
Note that there are none present in 6AH3 Chain D but missing in 6AGB and so the result is an empty set for this case._____no_output_____---- _____no_output_____
| {
"repository": "fomightez/cl_demo-binder",
"path": "notebooks/Using Biopython PDB Header Parser to get missing residues.ipynb",
"matched_keywords": [
"BioPython"
],
"stars": null,
"size": 56837,
"hexsha": "cb6c05b8188aa3c6ca7bb0214c2ae2067485f06e",
"max_line_length": 1395,
"avg_line_length": 32.6274397245,
"alphanum_fraction": 0.3964143076
} |
# Notebook from ArunKara/web_scraping_hw
Path: mission_to_mars.ipynb
<code>
from bs4 import BeautifulSoup
from splinter import Browser
from pprint import pprint
import pymongo
import pandas as pd
import requests_____no_output_____!which chromedriver/usr/local/bin/chromedriver
executable_path = {'executable_path': 'chromedriver'}
browser = Browser("chrome", **executable_path)_____no_output_____url = ('https://mars.nasa.gov/news/')_____no_output_____browser.visit(url)_____no_output_____#response = requests.get(url)
url_html = browser.html
soup = BeautifulSoup(url_html, 'html.parser')_____no_output_____print(soup.prettify())<html class="no-flash cookies geolocation svg picture canvas video webgl srcdoc supports hiddenscroll no-touchevents fullscreen flexbox cssanimations flexboxlegacy no-flexboxtweener csstransforms csstransforms3d csstransitions preserve3d -webkit-" lang="en" style="--vh:710px;" xml:lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
... [prettified HTML output truncated: page head with meta tags, social/analytics scripts, and inlined @font-face CSS] ...
unicode-range: U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;
}
/* latin-ext */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 600;
src: local('Montserrat SemiBold'), local('Montserrat-SemiBold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_bZF3gfD_vx3rCubqg.woff2) format('woff2');
unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 600;
src: local('Montserrat SemiBold'), local('Montserrat-SemiBold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_bZF3gnD_vx3rCs.woff2) format('woff2');
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
/* cyrillic-ext */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_dJE3gTD_vx3rCubqg.woff2) format('woff2');
unicode-range: U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;
}
/* cyrillic */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_dJE3g3D_vx3rCubqg.woff2) format('woff2');
unicode-range: U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;
}
/* vietnamese */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_dJE3gbD_vx3rCubqg.woff2) format('woff2');
unicode-range: U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;
}
/* latin-ext */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_dJE3gfD_vx3rCubqg.woff2) format('woff2');
unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'), url(https://fonts.gstatic.com/s/montserrat/v14/JTURjIg1_i6t8kCHKm45_dJE3gnD_vx3rCs.woff2) format('woff2');
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
/* cyrillic-ext */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 300;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCAIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;
}
/* cyrillic */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 300;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCkIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;
}
/* vietnamese */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 300;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCIIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;
}
/* latin-ext */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 300;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCMIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 300;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyC0IT4ttDfA.woff2) format('woff2');
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
/* cyrillic-ext */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCAIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;
}
/* cyrillic */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCkIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;
}
/* vietnamese */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCIIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;
}
/* latin-ext */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyCMIT4ttDfCmxA.woff2) format('woff2');
unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
font-family: 'Raleway';
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/s/raleway/v17/1Ptug8zYS_SKggPNyC0IT4ttDfA.woff2) format('woff2');
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
</style>
<style data-href="/assets/public_manifest-9467122247163ea3bf3012b71b5b39bf2bce26b82917aca331fb838f7f4a4b7e.css" media="all">
.ui-helper-hidden{display:none}.ui-helper-hidden-accessible{position:absolute !important;clip:rect(1px 1px 1px 1px);clip:rect(1px, 1px, 1px, 1px)}.ui-helper-reset{margin:0;padding:0;border:0;outline:0;line-height:1.3;text-decoration:none;font-size:100%;list-style:none}.ui-helper-clearfix:before,.ui-helper-clearfix:after{content:"";display:table}.ui-helper-clearfix:after{clear:both}.ui-helper-clearfix{zoom:1}.ui-helper-zfix{width:100%;height:100%;top:0;left:0;position:absolute;opacity:0;filter:Alpha(Opacity=0)}.ui-state-disabled{cursor:default !important}.ui-icon{display:block;text-indent:-99999px;overflow:hidden;background-repeat:no-repeat}.ui-widget-overlay{position:absolute;top:0;left:0;width:100%;height:100%}.ui-slider{position:relative;text-align:left}.ui-slider .ui-slider-handle{position:absolute;z-index:2;width:6.2em;height:.7em;cursor:default}.ui-slider .ui-slider-range{position:absolute;z-index:1;font-size:.7em;display:block;border:0;background-position:0 0}.ui-slider-horizontal{height:.8em}.ui-slider-horizontal .ui-slider-handle{top:0;margin-left:0}.ui-slider-horizontal .ui-slider-range{top:0;height:100%}.ui-slider-horizontal .ui-slider-range-min{left:0}.ui-slider-horizontal .ui-slider-range-max{right:0}.ui-slider-vertical{width:.8em;height:100px}.ui-slider-vertical .ui-slider-handle{left:-.3em;margin-left:0;margin-bottom:-.6em}.ui-slider-vertical .ui-slider-range{left:0;width:100%}.ui-slider-vertical .ui-slider-range-min{bottom:0}.ui-slider-vertical .ui-slider-range-max{top:0}.ui-widget{font-family:Segoe UI,Arial,sans-serif;font-size:1.1em}.ui-widget .ui-widget{font-size:1em}.ui-widget input,.ui-widget select,.ui-widget textarea,.ui-widget button{font-family:Segoe UI,Arial,sans-serif;font-size:1em}.ui-widget-content{border:1px solid #666666;background:#000000;color:#ffffff}.ui-widget-content a{color:#ffffff}.ui-widget-header{border:1px solid #333333;background:#333333;color:#ffffff;font-weight:700}.ui-widget-header a{color:#ffffff}.ui-state-default,.ui-widget-content .ui-state-default,.ui-widget-header .ui-state-default{border:1px solid #666666;background:#555555;font-weight:700;color:#eeeeee}.ui-state-default a,.ui-state-default a:link,.ui-state-default a:visited{color:#eeeeee;text-decoration:none}.ui-state-hover a,.ui-state-hover a:hover{color:#ffffff;text-decoration:none}.ui-state-active,.ui-widget-content .ui-state-active,.ui-widget-header .ui-state-active{border:1px solid #ffaf0f;background:#f58400;font-weight:700;color:#ffffff}.ui-state-active a,.ui-state-active a:link,.ui-state-active a:visited{color:#ffffff;text-decoration:none}.ui-state-highlight,.ui-widget-content .ui-state-highlight,.ui-widget-header .ui-state-highlight{border:1px solid #cccccc;background:#eeeeee;color:#2e7db2}.ui-state-highlight a,.ui-widget-content .ui-state-highlight a,.ui-widget-header .ui-state-highlight a{color:#2e7db2}.ui-state-error,.ui-widget-content .ui-state-error,.ui-widget-header .ui-state-error{border:1px solid #ffb73d;background:#ffc73d;color:#111111}.ui-state-error a,.ui-widget-content .ui-state-error a,.ui-widget-header .ui-state-error a{color:#111111}.ui-state-error-text,.ui-widget-content .ui-state-error-text,.ui-widget-header .ui-state-error-text{color:#111111}.ui-priority-primary,.ui-widget-content .ui-priority-primary,.ui-widget-header .ui-priority-primary{font-weight:700}.ui-priority-secondary,.ui-widget-content .ui-priority-secondary,.ui-widget-header .ui-priority-secondary{opacity:.7;filter:Alpha(Opacity=70);font-weight:400}.ui-state-disabled,.ui-widget-content 
.ui-state-disabled,.ui-widget-header .ui-state-disabled{opacity:.35;filter:Alpha(Opacity=35);background-image:none}.ui-icon{width:16px;height:16px}.ui-icon-carat-1-n{background-position:0 0}.ui-icon-carat-1-ne{background-position:-16px 0}.ui-icon-carat-1-e{background-position:-32px 0}.ui-icon-carat-1-se{background-position:-48px 0}.ui-icon-carat-1-s{background-position:-64px 0}.ui-icon-carat-1-sw{background-position:-80px 0}.ui-icon-carat-1-w{background-position:-96px 0}.ui-icon-carat-1-nw{background-position:-112px 0}.ui-icon-carat-2-n-s{background-position:-128px 0}.ui-icon-carat-2-e-w{background-position:-144px 0}.ui-icon-triangle-1-n{background-position:0 -16px}.ui-icon-triangle-1-ne{background-position:-16px -16px}.ui-icon-triangle-1-e{background-position:-32px -16px}.ui-icon-triangle-1-se{background-position:-48px -16px}.ui-icon-triangle-1-s{background-position:-64px -16px}.ui-icon-triangle-1-sw{background-position:-80px -16px}.ui-icon-triangle-1-w{background-position:-96px -16px}.ui-icon-triangle-1-nw{background-position:-112px -16px}.ui-icon-triangle-2-n-s{background-position:-128px -16px}.ui-icon-triangle-2-e-w{background-position:-144px -16px}.ui-icon-arrow-1-n{background-position:0 -32px}.ui-icon-arrow-1-ne{background-position:-16px -32px}.ui-icon-arrow-1-e{background-position:-32px -32px}.ui-icon-arrow-1-se{background-position:-48px -32px}.ui-icon-arrow-1-s{background-position:-64px -32px}.ui-icon-arrow-1-sw{background-position:-80px -32px}.ui-icon-arrow-1-w{background-position:-96px -32px}.ui-icon-arrow-1-nw{background-position:-112px -32px}.ui-icon-arrow-2-n-s{background-position:-128px -32px}.ui-icon-arrow-2-ne-sw{background-position:-144px -32px}.ui-icon-arrow-2-e-w{background-position:-160px -32px}.ui-icon-arrow-2-se-nw{background-position:-176px -32px}.ui-icon-arrowstop-1-n{background-position:-192px -32px}.ui-icon-arrowstop-1-e{background-position:-208px -32px}.ui-icon-arrowstop-1-s{background-position:-224px -32px}.ui-icon-arrowstop-1-w{background-position:-240px -32px}.ui-icon-arrowthick-1-n{background-position:0 -48px}.ui-icon-arrowthick-1-ne{background-position:-16px -48px}.ui-icon-arrowthick-1-e{background-position:-32px -48px}.ui-icon-arrowthick-1-se{background-position:-48px -48px}.ui-icon-arrowthick-1-s{background-position:-64px -48px}.ui-icon-arrowthick-1-sw{background-position:-80px -48px}.ui-icon-arrowthick-1-w{background-position:-96px -48px}.ui-icon-arrowthick-1-nw{background-position:-112px -48px}.ui-icon-arrowthick-2-n-s{background-position:-128px -48px}.ui-icon-arrowthick-2-ne-sw{background-position:-144px -48px}.ui-icon-arrowthick-2-e-w{background-position:-160px -48px}.ui-icon-arrowthick-2-se-nw{background-position:-176px -48px}.ui-icon-arrowthickstop-1-n{background-position:-192px -48px}.ui-icon-arrowthickstop-1-e{background-position:-208px -48px}.ui-icon-arrowthickstop-1-s{background-position:-224px -48px}.ui-icon-arrowthickstop-1-w{background-position:-240px -48px}.ui-icon-arrowreturnthick-1-w{background-position:0 -64px}.ui-icon-arrowreturnthick-1-n{background-position:-16px -64px}.ui-icon-arrowreturnthick-1-e{background-position:-32px -64px}.ui-icon-arrowreturnthick-1-s{background-position:-48px -64px}.ui-icon-arrowreturn-1-w{background-position:-64px -64px}.ui-icon-arrowreturn-1-n{background-position:-80px -64px}.ui-icon-arrowreturn-1-e{background-position:-96px -64px}.ui-icon-arrowreturn-1-s{background-position:-112px -64px}.ui-icon-arrowrefresh-1-w{background-position:-128px -64px}.ui-icon-arrowrefresh-1-n{background-position:-144px 
-64px}.ui-icon-arrowrefresh-1-e{background-position:-160px -64px}.ui-icon-arrowrefresh-1-s{background-position:-176px -64px}.ui-icon-arrow-4{background-position:0 -80px}.ui-icon-arrow-4-diag{background-position:-16px -80px}.ui-icon-extlink{background-position:-32px -80px}.ui-icon-newwin{background-position:-48px -80px}.ui-icon-refresh{background-position:-64px -80px}.ui-icon-shuffle{background-position:-80px -80px}.ui-icon-transfer-e-w{background-position:-96px -80px}.ui-icon-transferthick-e-w{background-position:-112px -80px}.ui-icon-folder-collapsed{background-position:0 -96px}.ui-icon-folder-open{background-position:-16px -96px}.ui-icon-document{background-position:-32px -96px}.ui-icon-document-b{background-position:-48px -96px}.ui-icon-note{background-position:-64px -96px}.ui-icon-mail-closed{background-position:-80px -96px}.ui-icon-mail-open{background-position:-96px -96px}.ui-icon-suitcase{background-position:-112px -96px}.ui-icon-comment{background-position:-128px -96px}.ui-icon-person{background-position:-144px -96px}.ui-icon-print{background-position:-160px -96px}.ui-icon-trash{background-position:-176px -96px}.ui-icon-locked{background-position:-192px -96px}.ui-icon-unlocked{background-position:-208px -96px}.ui-icon-bookmark{background-position:-224px -96px}.ui-icon-tag{background-position:-240px -96px}.ui-icon-home{background-position:0 -112px}.ui-icon-flag{background-position:-16px -112px}.ui-icon-calendar{background-position:-32px -112px}.ui-icon-cart{background-position:-48px -112px}.ui-icon-pencil{background-position:-64px -112px}.ui-icon-clock{background-position:-80px -112px}.ui-icon-disk{background-position:-96px -112px}.ui-icon-calculator{background-position:-112px -112px}.ui-icon-zoomin{background-position:-128px -112px}.ui-icon-zoomout{background-position:-144px -112px}.ui-icon-search{background-position:-160px -112px}.ui-icon-wrench{background-position:-176px -112px}.ui-icon-gear{background-position:-192px -112px}.ui-icon-heart{background-position:-208px -112px}.ui-icon-star{background-position:-224px -112px}.ui-icon-link{background-position:-240px -112px}.ui-icon-cancel{background-position:0 -128px}.ui-icon-plus{background-position:-16px -128px}.ui-icon-plusthick{background-position:-32px -128px}.ui-icon-minus{background-position:-48px -128px}.ui-icon-minusthick{background-position:-64px -128px}.ui-icon-close{background-position:-80px -128px}.ui-icon-closethick{background-position:-96px -128px}.ui-icon-key{background-position:-112px -128px}.ui-icon-lightbulb{background-position:-128px -128px}.ui-icon-scissors{background-position:-144px -128px}.ui-icon-clipboard{background-position:-160px -128px}.ui-icon-copy{background-position:-176px -128px}.ui-icon-contact{background-position:-192px -128px}.ui-icon-image{background-position:-208px -128px}.ui-icon-video{background-position:-224px -128px}.ui-icon-script{background-position:-240px -128px}.ui-icon-alert{background-position:0 -144px}.ui-icon-info{background-position:-16px -144px}.ui-icon-notice{background-position:-32px -144px}.ui-icon-help{background-position:-48px -144px}.ui-icon-check{background-position:-64px -144px}.ui-icon-bullet{background-position:-80px -144px}.ui-icon-radio-on{background-position:-96px -144px}.ui-icon-radio-off{background-position:-112px -144px}.ui-icon-pin-w{background-position:-128px -144px}.ui-icon-pin-s{background-position:-144px -144px}.ui-icon-play{background-position:0 -160px}.ui-icon-pause{background-position:-16px -160px}.ui-icon-seek-next{background-position:-32px 
-160px}.ui-icon-seek-prev{background-position:-48px -160px}.ui-icon-seek-end{background-position:-64px -160px}.ui-icon-seek-start{background-position:-80px -160px}.ui-icon-seek-first{background-position:-80px -160px}.ui-icon-stop{background-position:-96px -160px}.ui-icon-eject{background-position:-112px -160px}.ui-icon-volume-off{background-position:-128px -160px}.ui-icon-volume-on{background-position:-144px -160px}.ui-icon-power{background-position:0 -176px}.ui-icon-signal-diag{background-position:-16px -176px}.ui-icon-signal{background-position:-32px -176px}.ui-icon-battery-0{background-position:-48px -176px}.ui-icon-battery-1{background-position:-64px -176px}.ui-icon-battery-2{background-position:-80px -176px}.ui-icon-battery-3{background-position:-96px -176px}.ui-icon-circle-plus{background-position:0 -192px}.ui-icon-circle-minus{background-position:-16px -192px}.ui-icon-circle-close{background-position:-32px -192px}.ui-icon-circle-triangle-e{background-position:-48px -192px}.ui-icon-circle-triangle-s{background-position:-64px -192px}.ui-icon-circle-triangle-w{background-position:-80px -192px}.ui-icon-circle-triangle-n{background-position:-96px -192px}.ui-icon-circle-arrow-e{background-position:-112px -192px}.ui-icon-circle-arrow-s{background-position:-128px -192px}.ui-icon-circle-arrow-w{background-position:-144px -192px}.ui-icon-circle-arrow-n{background-position:-160px -192px}.ui-icon-circle-zoomin{background-position:-176px -192px}.ui-icon-circle-zoomout{background-position:-192px -192px}.ui-icon-circle-check{background-position:-208px -192px}.ui-icon-circlesmall-plus{background-position:0 -208px}.ui-icon-circlesmall-minus{background-position:-16px -208px}.ui-icon-circlesmall-close{background-position:-32px -208px}.ui-icon-squaresmall-plus{background-position:-48px -208px}.ui-icon-squaresmall-minus{background-position:-64px -208px}.ui-icon-squaresmall-close{background-position:-80px -208px}.ui-icon-grip-dotted-vertical{background-position:0 -224px}.ui-icon-grip-dotted-horizontal{background-position:-16px -224px}.ui-icon-grip-solid-vertical{background-position:-32px -224px}.ui-icon-grip-solid-horizontal{background-position:-48px -224px}.ui-icon-gripsmall-diagonal-se{background-position:-64px -224px}.ui-icon-grip-diagonal-se{background-position:-80px -224px}.ui-corner-all,.ui-corner-top,.ui-corner-left,.ui-corner-tl{border-top-left-radius:6px}.ui-corner-all,.ui-corner-top,.ui-corner-right,.ui-corner-tr{border-top-right-radius:6px}.ui-corner-all,.ui-corner-bottom,.ui-corner-left,.ui-corner-bl{border-bottom-left-radius:6px}.ui-corner-all,.ui-corner-bottom,.ui-corner-right,.ui-corner-br{border-bottom-right-radius:6px}#simplemodal-overlay{background-color:#000}#simplemodal-container{height:360px;width:600px;color:#fff;background-color:#000;border:0;padding:0}#simplemodal-container .simplemodal-data{padding:20px 50px}.slick-slider{position:relative;display:block;box-sizing:border-box;-moz-box-sizing:border-box;-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;touch-action:pan-y;-webkit-tap-highlight-color:transparent}.slick-list{position:relative;overflow:hidden;display:block;margin:0;padding:0}.slick-list:focus{outline:none}.slick-loading .slick-list{background:#fff url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/ajax-loader.gif") center center no-repeat}.slick-list.dragging{cursor:pointer;cursor:hand}.slick-slider .slick-list,.slick-track,.slick-slide,.slick-slide img{transform:translate3d(0, 0, 
0)}.slick-track{position:relative;left:0;top:0;display:block;zoom:1}.slick-track:before,.slick-track:after{content:"";display:table}.slick-track:after{clear:both}.slick-loading .slick-track{visibility:hidden}.slick-slide{float:left;height:100%;min-height:1px;display:none}.slick-slide img{display:block}.slick-slide.slick-loading img{display:none}.slick-slide.dragging img{pointer-events:none}.slick-initialized .slick-slide{display:block}.slick-loading .slick-slide{visibility:hidden}.slick-vertical .slick-slide{display:block;height:auto;border:1px solid transparent}@font-face{font-family:"slick";src:url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/fonts/slick.eot");src:url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/fonts/slick.eot?#iefix") format("embedded-opentype"),url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/fonts/slick.woff") format("woff"),url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/fonts/slick.ttf") format("truetype"),url("https://cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.6.0/fonts/slick.svg#slick") format("svg");font-weight:normal;font-style:normal}.slick-prev,.slick-next{position:absolute;display:block;height:20px;width:20px;line-height:0;font-size:0;cursor:pointer;background:transparent;color:transparent;top:50%;margin-top:-10px;padding:0;border:none;outline:none}.slick-prev:hover,.slick-prev:focus,.slick-next:hover,.slick-next:focus{outline:none;background:transparent;color:transparent}.slick-prev:hover:before,.slick-prev:focus:before,.slick-next:hover:before,.slick-next:focus:before{opacity:1}.slick-prev.slick-disabled:before,.slick-next.slick-disabled:before{opacity:0.25}.slick-prev:before,.slick-next:before{font-family:"slick";font-size:20px;line-height:1;color:white;opacity:0.75;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.slick-prev{left:-25px}.slick-prev:before{content:"\2190"}.slick-next{right:-25px}.slick-next:before{content:"\2192"}.slick-slider{margin-bottom:30px}.slick-dots{position:absolute;bottom:-45px;list-style:none;display:block;text-align:center;padding:0;width:100%}.slick-dots li{position:relative;display:inline-block;height:20px;width:20px;margin:0 5px;padding:0;cursor:pointer}.slick-dots li button{border:0;background:transparent;display:block;height:20px;width:20px;outline:none;line-height:0;font-size:0;color:transparent;padding:5px;cursor:pointer}.slick-dots li button:hover,.slick-dots li button:focus{outline:none}.slick-dots li button:hover:before,.slick-dots li button:focus:before{opacity:1}.slick-dots li button:before{position:absolute;top:0;left:0;content:"\2022";width:20px;height:20px;font-family:"slick";font-size:6px;line-height:20px;text-align:center;color:black;opacity:0.25;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.slick-dots li.slick-active button:before{color:black;opacity:0.75}[dir="rtl"] .slick-next{right:auto;left:-25px}[dir="rtl"] .slick-next:before{content:"\2190"}[dir="rtl"] .slick-prev{right:-25px;left:auto}[dir="rtl"] .slick-prev:before{content:"\2192"}[dir="rtl"] .slick-slide{float:right}@font-face{font-family:"foundation-icons";src:url("https://mars.nasa.gov/assets/gulp/vendor/fonts/foundation-icons.eot");src:url("https://mars.nasa.gov/assets/gulp/vendor/fonts/foundation-icons.eot?#iefix") format("embedded-opentype"),url("https://mars.nasa.gov/assets/gulp/vendor/fonts/foundation-icons.woff") format("woff"),url("https://mars.nasa.gov/assets/gulp/vendor/fonts/foundation-icons.ttf") 
format("truetype"),url("https://mars.nasa.gov/assets/gulp/vendor/fonts/foundation-icons.svg#fontcustom") format("svg");font-weight:normal;font-style:normal}.fi-address-book:before,.fi-alert:before,.fi-align-center:before,.fi-align-justify:before,.fi-align-left:before,.fi-align-right:before,.fi-anchor:before,.fi-annotate:before,.fi-archive:before,.fi-arrow-down:before,.fi-arrow-left:before,.fi-arrow-right:before,.fi-arrow-up:before,.fi-arrows-compress:before,.fi-arrows-expand:before,.fi-arrows-in:before,.fi-arrows-out:before,.fi-asl:before,.fi-asterisk:before,.fi-at-sign:before,.fi-background-color:before,.fi-battery-empty:before,.fi-battery-full:before,.fi-battery-half:before,.fi-bitcoin-circle:before,.fi-bitcoin:before,.fi-blind:before,.fi-bluetooth:before,.fi-bold:before,.fi-book-bookmark:before,.fi-book:before,.fi-bookmark:before,.fi-braille:before,.fi-burst-new:before,.fi-burst-sale:before,.fi-burst:before,.fi-calendar:before,.fi-camera:before,.fi-check:before,.fi-checkbox:before,.fi-clipboard-notes:before,.fi-clipboard-pencil:before,.fi-clipboard:before,.fi-clock:before,.fi-closed-caption:before,.fi-cloud:before,.fi-comment-minus:before,.fi-comment-quotes:before,.fi-comment-video:before,.fi-comment:before,.fi-comments:before,.fi-compass:before,.fi-contrast:before,.fi-credit-card:before,.fi-crop:before,.fi-crown:before,.fi-css3:before,.fi-database:before,.fi-die-five:before,.fi-die-four:before,.fi-die-one:before,.fi-die-six:before,.fi-die-three:before,.fi-die-two:before,.fi-dislike:before,.fi-dollar-bill:before,.fi-dollar:before,.fi-download:before,.fi-eject:before,.fi-elevator:before,.fi-euro:before,.fi-eye:before,.fi-fast-forward:before,.fi-female-symbol:before,.fi-female:before,.fi-filter:before,.fi-first-aid:before,.fi-flag:before,.fi-folder-add:before,.fi-folder-lock:before,.fi-folder:before,.fi-foot:before,.fi-foundation:before,.fi-graph-bar:before,.fi-graph-horizontal:before,.fi-graph-pie:before,.fi-graph-trend:before,.fi-guide-dog:before,.fi-hearing-aid:before,.fi-heart:before,.fi-home:before,.fi-html5:before,.fi-indent-less:before,.fi-indent-more:before,.fi-info:before,.fi-italic:before,.fi-key:before,.fi-laptop:before,.fi-layout:before,.fi-lightbulb:before,.fi-like:before,.fi-link:before,.fi-list-bullet:before,.fi-list-number:before,.fi-list-thumbnails:before,.fi-list:before,.fi-lock:before,.fi-loop:before,.fi-magnifying-glass:before,.fi-mail:before,.fi-male-female:before,.fi-male-symbol:before,.fi-male:before,.fi-map:before,.fi-marker:before,.fi-megaphone:before,.fi-microphone:before,.fi-minus-circle:before,.fi-minus:before,.fi-mobile-signal:before,.fi-mobile:before,.fi-monitor:before,.fi-mountains:before,.fi-music:before,.fi-next:before,.fi-no-dogs:before,.fi-no-smoking:before,.fi-page-add:before,.fi-page-copy:before,.fi-page-csv:before,.fi-page-delete:before,.fi-page-doc:before,.fi-page-edit:before,.fi-page-export-csv:before,.fi-page-export-doc:before,.fi-page-export-pdf:before,.fi-page-export:before,.fi-page-filled:before,.fi-page-multiple:before,.fi-page-pdf:before,.fi-page-remove:before,.fi-page-search:before,.fi-page:before,.fi-paint-bucket:before,.fi-paperclip:before,.fi-pause:before,.fi-paw:before,.fi-paypal:before,.fi-pencil:before,.fi-photo:before,.fi-play-circle:before,.fi-play-video:before,.fi-play:before,.fi-plus:before,.fi-pound:before,.fi-power:before,.fi-previous:before,.fi-price-tag:before,.fi-pricetag-multiple:before,.fi-print:before,.fi-prohibited:before,.fi-projection-screen:before,.fi-puzzle:before,.fi-quote:before,.fi-record:before,.fi-refresh:before,.fi
-results-demographics:before,.fi-results:before,.fi-rewind-ten:before,.fi-rewind:before,.fi-rss:before,.fi-safety-cone:before,.fi-save:before,.fi-share:before,.fi-sheriff-badge:before,.fi-shield:before,.fi-shopping-bag:before,.fi-shopping-cart:before,.fi-shuffle:before,.fi-skull:before,.fi-social-500px:before,.fi-social-adobe:before,.fi-social-amazon:before,.fi-social-android:before,.fi-social-apple:before,.fi-social-behance:before,.fi-social-bing:before,.fi-social-blogger:before,.fi-social-delicious:before,.fi-social-designer-news:before,.fi-social-deviant-art:before,.fi-social-digg:before,.fi-social-dribbble:before,.fi-social-drive:before,.fi-social-dropbox:before,.fi-social-evernote:before,.fi-social-facebook:before,.fi-social-flickr:before,.fi-social-forrst:before,.fi-social-foursquare:before,.fi-social-game-center:before,.fi-social-github:before,.fi-social-google-plus:before,.fi-social-hacker-news:before,.fi-social-hi5:before,.fi-social-instagram:before,.fi-social-joomla:before,.fi-social-lastfm:before,.fi-social-linkedin:before,.fi-social-medium:before,.fi-social-myspace:before,.fi-social-orkut:before,.fi-social-path:before,.fi-social-picasa:before,.fi-social-pinterest:before,.fi-social-rdio:before,.fi-social-reddit:before,.fi-social-skillshare:before,.fi-social-skype:before,.fi-social-smashing-mag:before,.fi-social-snapchat:before,.fi-social-spotify:before,.fi-social-squidoo:before,.fi-social-stack-overflow:before,.fi-social-steam:before,.fi-social-stumbleupon:before,.fi-social-treehouse:before,.fi-social-tumblr:before,.fi-social-twitter:before,.fi-social-vimeo:before,.fi-social-windows:before,.fi-social-xbox:before,.fi-social-yahoo:before,.fi-social-yelp:before,.fi-social-youtube:before,.fi-social-zerply:before,.fi-social-zurb:before,.fi-sound:before,.fi-star:before,.fi-stop:before,.fi-strikethrough:before,.fi-subscript:before,.fi-superscript:before,.fi-tablet-landscape:before,.fi-tablet-portrait:before,.fi-target-two:before,.fi-target:before,.fi-telephone-accessible:before,.fi-telephone:before,.fi-text-color:before,.fi-thumbnails:before,.fi-ticket:before,.fi-torso-business:before,.fi-torso-female:before,.fi-torso:before,.fi-torsos-all-female:before,.fi-torsos-all:before,.fi-torsos-female-male:before,.fi-torsos-male-female:before,.fi-torsos:before,.fi-trash:before,.fi-trees:before,.fi-trophy:before,.fi-underline:before,.fi-universal-access:before,.fi-unlink:before,.fi-unlock:before,.fi-upload-cloud:before,.fi-upload:before,.fi-usb:before,.fi-video:before,.fi-volume-none:before,.fi-volume-strike:before,.fi-volume:before,.fi-web:before,.fi-wheelchair:before,.fi-widget:before,.fi-wrench:before,.fi-x-circle:before,.fi-x:before,.fi-yen:before,.fi-zoom-in:before,.fi-zoom-out:before{font-family:"foundation-icons";font-style:normal;font-weight:normal;font-variant:normal;text-transform:none;line-height:1;-webkit-font-smoothing:antialiased;display:inline-block;text-decoration:inherit}.fi-address-book:before{content:"\f100"}.fi-alert:before{content:"\f101"}.fi-align-center:before{content:"\f102"}.fi-align-justify:before{content:"\f103"}.fi-align-left:before{content:"\f104"}.fi-align-right:before{content:"\f105"}.fi-anchor:before{content:"\f106"}.fi-annotate:before{content:"\f107"}.fi-archive:before{content:"\f108"}.fi-arrow-down:before{content:"\f109"}.fi-arrow-left:before{content:"\f10a"}.fi-arrow-right:before{content:"\f10b"}.fi-arrow-up:before{content:"\f10c"}.fi-arrows-compress:before{content:"\f10d"}.fi-arrows-expand:before{content:"\f10e"}.fi-arrows-in:before{content:"\f10f"}.fi-arrows-o
ut:before{content:"\f110"}.fi-asl:before{content:"\f111"}.fi-asterisk:before{content:"\f112"}.fi-at-sign:before{content:"\f113"}.fi-background-color:before{content:"\f114"}.fi-battery-empty:before{content:"\f115"}.fi-battery-full:before{content:"\f116"}.fi-battery-half:before{content:"\f117"}.fi-bitcoin-circle:before{content:"\f118"}.fi-bitcoin:before{content:"\f119"}.fi-blind:before{content:"\f11a"}.fi-bluetooth:before{content:"\f11b"}.fi-bold:before{content:"\f11c"}.fi-book-bookmark:before{content:"\f11d"}.fi-book:before{content:"\f11e"}.fi-bookmark:before{content:"\f11f"}.fi-braille:before{content:"\f120"}.fi-burst-new:before{content:"\f121"}.fi-burst-sale:before{content:"\f122"}.fi-burst:before{content:"\f123"}.fi-calendar:before{content:"\f124"}.fi-camera:before{content:"\f125"}.fi-check:before{content:"\f126"}.fi-checkbox:before{content:"\f127"}.fi-clipboard-notes:before{content:"\f128"}.fi-clipboard-pencil:before{content:"\f129"}.fi-clipboard:before{content:"\f12a"}.fi-clock:before{content:"\f12b"}.fi-closed-caption:before{content:"\f12c"}.fi-cloud:before{content:"\f12d"}.fi-comment-minus:before{content:"\f12e"}.fi-comment-quotes:before{content:"\f12f"}.fi-comment-video:before{content:"\f130"}.fi-comment:before{content:"\f131"}.fi-comments:before{content:"\f132"}.fi-compass:before{content:"\f133"}.fi-contrast:before{content:"\f134"}.fi-credit-card:before{content:"\f135"}.fi-crop:before{content:"\f136"}.fi-crown:before{content:"\f137"}.fi-css3:before{content:"\f138"}.fi-database:before{content:"\f139"}.fi-die-five:before{content:"\f13a"}.fi-die-four:before{content:"\f13b"}.fi-die-one:before{content:"\f13c"}.fi-die-six:before{content:"\f13d"}.fi-die-three:before{content:"\f13e"}.fi-die-two:before{content:"\f13f"}.fi-dislike:before{content:"\f140"}.fi-dollar-bill:before{content:"\f141"}.fi-dollar:before{content:"\f142"}.fi-download:before{content:"\f143"}.fi-eject:before{content:"\f144"}.fi-elevator:before{content:"\f145"}.fi-euro:before{content:"\f146"}.fi-eye:before{content:"\f147"}.fi-fast-forward:before{content:"\f148"}.fi-female-symbol:before{content:"\f149"}.fi-female:before{content:"\f14a"}.fi-filter:before{content:"\f14b"}.fi-first-aid:before{content:"\f14c"}.fi-flag:before{content:"\f14d"}.fi-folder-add:before{content:"\f14e"}.fi-folder-lock:before{content:"\f14f"}.fi-folder:before{content:"\f150"}.fi-foot:before{content:"\f151"}.fi-foundation:before{content:"\f152"}.fi-graph-bar:before{content:"\f153"}.fi-graph-horizontal:before{content:"\f154"}.fi-graph-pie:before{content:"\f155"}.fi-graph-trend:before{content:"\f156"}.fi-guide-dog:before{content:"\f157"}.fi-hearing-aid:before{content:"\f158"}.fi-heart:before{content:"\f159"}.fi-home:before{content:"\f15a"}.fi-html5:before{content:"\f15b"}.fi-indent-less:before{content:"\f15c"}.fi-indent-more:before{content:"\f15d"}.fi-info:before{content:"\f15e"}.fi-italic:before{content:"\f15f"}.fi-key:before{content:"\f160"}.fi-laptop:before{content:"\f161"}.fi-layout:before{content:"\f162"}.fi-lightbulb:before{content:"\f163"}.fi-like:before{content:"\f164"}.fi-link:before{content:"\f165"}.fi-list-bullet:before{content:"\f166"}.fi-list-number:before{content:"\f167"}.fi-list-thumbnails:before{content:"\f168"}.fi-list:before{content:"\f169"}.fi-lock:before{content:"\f16a"}.fi-loop:before{content:"\f16b"}.fi-magnifying-glass:before{content:"\f16c"}.fi-mail:before{content:"\f16d"}.fi-male-female:before{content:"\f16e"}.fi-male-symbol:before{content:"\f16f"}.fi-male:before{content:"\f170"}.fi-map:before{content:"\f171"}.fi-marker:before{conten
t:"\f172"}.fi-megaphone:before{content:"\f173"}.fi-microphone:before{content:"\f174"}.fi-minus-circle:before{content:"\f175"}.fi-minus:before{content:"\f176"}.fi-mobile-signal:before{content:"\f177"}.fi-mobile:before{content:"\f178"}.fi-monitor:before{content:"\f179"}.fi-mountains:before{content:"\f17a"}.fi-music:before{content:"\f17b"}.fi-next:before{content:"\f17c"}.fi-no-dogs:before{content:"\f17d"}.fi-no-smoking:before{content:"\f17e"}.fi-page-add:before{content:"\f17f"}.fi-page-copy:before{content:"\f180"}.fi-page-csv:before{content:"\f181"}.fi-page-delete:before{content:"\f182"}.fi-page-doc:before{content:"\f183"}.fi-page-edit:before{content:"\f184"}.fi-page-export-csv:before{content:"\f185"}.fi-page-export-doc:before{content:"\f186"}.fi-page-export-pdf:before{content:"\f187"}.fi-page-export:before{content:"\f188"}.fi-page-filled:before{content:"\f189"}.fi-page-multiple:before{content:"\f18a"}.fi-page-pdf:before{content:"\f18b"}.fi-page-remove:before{content:"\f18c"}.fi-page-search:before{content:"\f18d"}.fi-page:before{content:"\f18e"}.fi-paint-bucket:before{content:"\f18f"}.fi-paperclip:before{content:"\f190"}.fi-pause:before{content:"\f191"}.fi-paw:before{content:"\f192"}.fi-paypal:before{content:"\f193"}.fi-pencil:before{content:"\f194"}.fi-photo:before{content:"\f195"}.fi-play-circle:before{content:"\f196"}.fi-play-video:before{content:"\f197"}.fi-play:before{content:"\f198"}.fi-plus:before{content:"\f199"}.fi-pound:before{content:"\f19a"}.fi-power:before{content:"\f19b"}.fi-previous:before{content:"\f19c"}.fi-price-tag:before{content:"\f19d"}.fi-pricetag-multiple:before{content:"\f19e"}.fi-print:before{content:"\f19f"}.fi-prohibited:before{content:"\f1a0"}.fi-projection-screen:before{content:"\f1a1"}.fi-puzzle:before{content:"\f1a2"}.fi-quote:before{content:"\f1a3"}.fi-record:before{content:"\f1a4"}.fi-refresh:before{content:"\f1a5"}.fi-results-demographics:before{content:"\f1a6"}.fi-results:before{content:"\f1a7"}.fi-rewind-ten:before{content:"\f1a8"}.fi-rewind:before{content:"\f1a9"}.fi-rss:before{content:"\f1aa"}.fi-safety-cone:before{content:"\f1ab"}.fi-save:before{content:"\f1ac"}.fi-share:before{content:"\f1ad"}.fi-sheriff-badge:before{content:"\f1ae"}.fi-shield:before{content:"\f1af"}.fi-shopping-bag:before{content:"\f1b0"}.fi-shopping-cart:before{content:"\f1b1"}.fi-shuffle:before{content:"\f1b2"}.fi-skull:before{content:"\f1b3"}.fi-social-500px:before{content:"\f1b4"}.fi-social-adobe:before{content:"\f1b5"}.fi-social-amazon:before{content:"\f1b6"}.fi-social-android:before{content:"\f1b7"}.fi-social-apple:before{content:"\f1b8"}.fi-social-behance:before{content:"\f1b9"}.fi-social-bing:before{content:"\f1ba"}.fi-social-blogger:before{content:"\f1bb"}.fi-social-delicious:before{content:"\f1bc"}.fi-social-designer-news:before{content:"\f1bd"}.fi-social-deviant-art:before{content:"\f1be"}.fi-social-digg:before{content:"\f1bf"}.fi-social-dribbble:before{content:"\f1c0"}.fi-social-drive:before{content:"\f1c1"}.fi-social-dropbox:before{content:"\f1c2"}.fi-social-evernote:before{content:"\f1c3"}.fi-social-facebook:before{content:"\f1c4"}.fi-social-flickr:before{content:"\f1c5"}.fi-social-forrst:before{content:"\f1c6"}.fi-social-foursquare:before{content:"\f1c7"}.fi-social-game-center:before{content:"\f1c8"}.fi-social-github:before{content:"\f1c9"}.fi-social-google-plus:before{content:"\f1ca"}.fi-social-hacker-news:before{content:"\f1cb"}.fi-social-hi5:before{content:"\f1cc"}.fi-social-instagram:before{content:"\f1cd"}.fi-social-joomla:before{content:"\f1ce"}.fi-social-lastfm:bef
ore{content:"\f1cf"}.fi-social-linkedin:before{content:"\f1d0"}.fi-social-medium:before{content:"\f1d1"}.fi-social-myspace:before{content:"\f1d2"}.fi-social-orkut:before{content:"\f1d3"}.fi-social-path:before{content:"\f1d4"}.fi-social-picasa:before{content:"\f1d5"}.fi-social-pinterest:before{content:"\f1d6"}.fi-social-rdio:before{content:"\f1d7"}.fi-social-reddit:before{content:"\f1d8"}.fi-social-skillshare:before{content:"\f1d9"}.fi-social-skype:before{content:"\f1da"}.fi-social-smashing-mag:before{content:"\f1db"}.fi-social-snapchat:before{content:"\f1dc"}.fi-social-spotify:before{content:"\f1dd"}.fi-social-squidoo:before{content:"\f1de"}.fi-social-stack-overflow:before{content:"\f1df"}.fi-social-steam:before{content:"\f1e0"}.fi-social-stumbleupon:before{content:"\f1e1"}.fi-social-treehouse:before{content:"\f1e2"}.fi-social-tumblr:before{content:"\f1e3"}.fi-social-twitter:before{content:"\f1e4"}.fi-social-vimeo:before{content:"\f1e5"}.fi-social-windows:before{content:"\f1e6"}.fi-social-xbox:before{content:"\f1e7"}.fi-social-yahoo:before{content:"\f1e8"}.fi-social-yelp:before{content:"\f1e9"}.fi-social-youtube:before{content:"\f1ea"}.fi-social-zerply:before{content:"\f1eb"}.fi-social-zurb:before{content:"\f1ec"}.fi-sound:before{content:"\f1ed"}.fi-star:before{content:"\f1ee"}.fi-stop:before{content:"\f1ef"}.fi-strikethrough:before{content:"\f1f0"}.fi-subscript:before{content:"\f1f1"}.fi-superscript:before{content:"\f1f2"}.fi-tablet-landscape:before{content:"\f1f3"}.fi-tablet-portrait:before{content:"\f1f4"}.fi-target-two:before{content:"\f1f5"}.fi-target:before{content:"\f1f6"}.fi-telephone-accessible:before{content:"\f1f7"}.fi-telephone:before{content:"\f1f8"}.fi-text-color:before{content:"\f1f9"}.fi-thumbnails:before{content:"\f1fa"}.fi-ticket:before{content:"\f1fb"}.fi-torso-business:before{content:"\f1fc"}.fi-torso-female:before{content:"\f1fd"}.fi-torso:before{content:"\f1fe"}.fi-torsos-all-female:before{content:"\f1ff"}.fi-torsos-all:before{content:"\f200"}.fi-torsos-female-male:before{content:"\f201"}.fi-torsos-male-female:before{content:"\f202"}.fi-torsos:before{content:"\f203"}.fi-trash:before{content:"\f204"}.fi-trees:before{content:"\f205"}.fi-trophy:before{content:"\f206"}.fi-underline:before{content:"\f207"}.fi-universal-access:before{content:"\f208"}.fi-unlink:before{content:"\f209"}.fi-unlock:before{content:"\f20a"}.fi-upload-cloud:before{content:"\f20b"}.fi-upload:before{content:"\f20c"}.fi-usb:before{content:"\f20d"}.fi-video:before{content:"\f20e"}.fi-volume-none:before{content:"\f20f"}.fi-volume-strike:before{content:"\f210"}.fi-volume:before{content:"\f211"}.fi-web:before{content:"\f212"}.fi-wheelchair:before{content:"\f213"}.fi-widget:before{content:"\f214"}.fi-wrench:before{content:"\f215"}.fi-x-circle:before{content:"\f216"}.fi-x:before{content:"\f217"}.fi-yen:before{content:"\f218"}.fi-zoom-in:before{content:"\f219"}.fi-zoom-out:before{content:"\f21a"}/*! 
normalize.css v1.1.2 | MIT License | git.io/normalize */article,aside,details,figcaption,figure,footer,header,hgroup,main,nav,section,summary{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}audio:not([controls]){display:none;height:0}[hidden]{display:none}html{font-size:100%;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}html,button,input,select,textarea{font-family:sans-serif}body{margin:0}a:focus{outline:thin dotted}a:active,a:hover{outline:0}h1{font-size:2em;margin:0.67em 0}h2{font-size:1.5em;margin:0.83em 0}h3{font-size:1.17em;margin:1em 0}h4{font-size:1em;margin:1.33em 0}h5{font-size:0.83em;margin:1.67em 0}h6{font-size:0.67em;margin:2.33em 0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:bold}blockquote{margin:1em 40px}dfn{font-style:italic}hr{box-sizing:content-box;height:0}mark{background:#ff0;color:#000}p,pre{margin:1em 0}code,kbd,pre,samp{font-family:monospace, serif;_font-family:'courier new', monospace;font-size:1em}pre{white-space:pre;white-space:pre-wrap;word-wrap:break-word}q{quotes:none}q:before,q:after{content:'';content:none}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-0.5em}sub{bottom:-0.25em}dl,menu,ol,ul{margin:1em 0}dd{margin:0 0 0 40px}menu,ol,ul{padding:0 0 0 40px}nav ul,nav ol{list-style:none;list-style-image:none}img{border:0;-ms-interpolation-mode:bicubic}svg:not(:root){overflow:hidden}figure{margin:0}form{margin:0}fieldset{border:1px solid #c0c0c0;margin:0 2px;padding:0.35em 0.625em 0.75em}legend{border:0;padding:0;white-space:normal;*margin-left:-7px}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,select{text-transform:none}button,html input[type="button"],input[type="reset"],input[type="submit"]{-webkit-appearance:button;cursor:pointer;*overflow:visible}button[disabled],html input[disabled]{cursor:default}input[type="checkbox"],input[type="radio"]{box-sizing:border-box;padding:0;*height:13px;*width:13px}input[type="search"]{-webkit-appearance:textfield}input[type="search"]::-webkit-search-cancel-button,input[type="search"]::-webkit-search-decoration{-webkit-appearance:none}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}textarea{overflow:auto;vertical-align:top}table{border-collapse:collapse;border-spacing:0}html,button,input,select,textarea{color:#222}body{font-size:1em;line-height:1.4}::-moz-selection{background:#b3d4fc;text-shadow:none}::selection{background:#b3d4fc;text-shadow:none}hr{display:block;height:1px;border:0;border-top:1px solid #ccc;margin:1em 0;padding:0}audio,canvas,img,video{vertical-align:middle}fieldset{border:0;margin:0;padding:0}textarea{resize:vertical}.browsehappy{margin:0.2em 0;background:#ccc;color:#000;padding:0.2em 0}.ir{background-color:transparent;border:0;overflow:hidden;*text-indent:-9999px}.ir:before{content:"";display:block;width:0;height:150%}.hidden{display:none !important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.clearfix:before,.meganav:before,.meganav_columnar_section:before,.nav_area:before,.body_form fieldset.radio_buttons .option:before,.upper_footer:before,#site_footer .sitemap_directory:before,.main_area_sitemap 
.sitemap_directory:before,article:before,.grid_gallery.list_view li.slide:before,.main_carousel .slick-nav:before,.main_carousel.module .slick-slider .content_body:before,.advanced_search .filter_bar .search_row:before,.content_page #primary_column:before,#secondary_column aside.list_view_module li:before,.wysiwyg_content .related_content_module ul:before,#secondary_column .related_content_module ul:before,.wysiwyg_content .related_content_module li:before,#secondary_column .related_content_module li:before,blockquote:before,.faq_section ul.q_and_a .text.answer:before,.double_teaser .teaser_container:before,ul.item_list:before,ul.item_list>li:before,ul.item_list .list_content:before,.raw_image_gallery:before,form.pagination_form:before,.pagination_options:before,.faceted_search.list_view li.slide:before,.clearfix:after,.meganav:after,.meganav_columnar_section:after,.nav_area:after,.body_form fieldset.radio_buttons .option:after,.upper_footer:after,#site_footer .sitemap_directory:after,.main_area_sitemap .sitemap_directory:after,article:after,.grid_gallery.list_view li.slide:after,.main_carousel .slick-nav:after,.main_carousel.module .slick-slider .content_body:after,.advanced_search .filter_bar .search_row:after,.content_page #primary_column:after,#secondary_column aside.list_view_module li:after,.wysiwyg_content .related_content_module ul:after,#secondary_column .related_content_module ul:after,.wysiwyg_content .related_content_module li:after,#secondary_column .related_content_module li:after,blockquote:after,.faq_section ul.q_and_a .text.answer:after,.double_teaser .teaser_container:after,ul.item_list:after,ul.item_list>li:after,ul.item_list .list_content:after,.raw_image_gallery:after,form.pagination_form:after,.pagination_options:after,.faceted_search.list_view li.slide:after{content:" ";display:table}.clearfix:after,.meganav:after,.meganav_columnar_section:after,.nav_area:after,.body_form fieldset.radio_buttons .option:after,.upper_footer:after,#site_footer .sitemap_directory:after,.main_area_sitemap .sitemap_directory:after,article:after,.grid_gallery.list_view li.slide:after,.main_carousel .slick-nav:after,.main_carousel.module .slick-slider .content_body:after,.advanced_search .filter_bar .search_row:after,.content_page #primary_column:after,#secondary_column aside.list_view_module li:after,.wysiwyg_content .related_content_module ul:after,#secondary_column .related_content_module ul:after,.wysiwyg_content .related_content_module li:after,#secondary_column .related_content_module li:after,blockquote:after,.faq_section ul.q_and_a .text.answer:after,.double_teaser .teaser_container:after,ul.item_list:after,ul.item_list>li:after,ul.item_list .list_content:after,.raw_image_gallery:after,form.pagination_form:after,.pagination_options:after,.faceted_search.list_view li.slide:after{clear:both}.clearfix,.meganav,.meganav_columnar_section,.nav_area,.body_form fieldset.radio_buttons .option,.upper_footer,#site_footer .sitemap_directory,.main_area_sitemap .sitemap_directory,article,.grid_gallery.list_view li.slide,.main_carousel .slick-nav,.main_carousel.module .slick-slider .content_body,.advanced_search .filter_bar .search_row,.content_page #primary_column,#secondary_column aside.list_view_module li,.wysiwyg_content .related_content_module ul,#secondary_column .related_content_module ul,.wysiwyg_content .related_content_module li,#secondary_column .related_content_module li,blockquote,.faq_section ul.q_and_a .text.answer,.double_teaser 
";width:100%;height:1px;clear:both;background:#BEBEBE;background:linear-gradient(to right, rgba(190,190,190,0), #BEBEBE, rgba(190,190,190,0));background:-webkit-linear-gradient(to right, rgba(190,190,190,0), #BEBEBE, rgba(190,190,190,0));margin:2em 0}@media (min-width: 769px), print{.gradient_line_extra_margin{margin:3em 0}}.insight_page #at-share-dock,.insight_page .addthis-smartlayers-mobile{display:none;width:0;height:0}td.gssb_a img{width:auto}.module_title,.media_feature_title,.sitemap_title,.nav_title,.article_title,.sidebar_title,#secondary_column .related_content_module .module_title,.right_col .related_content_module .module_title,.rollover_title{letter-spacing:-.02em}.module_title{letter-spacing:-.02em}.rollover_title{font-size:2.34em;margin-bottom:0em}@media (min-width: 600px), print{.rollover_title{font-size:2.7em;margin-bottom:0em}}@media (min-width: 769px), print{.rollover_title{font-size:3.06em;margin-bottom:0em}}@media (min-width: 1024px), print{.rollover_title{font-size:3.24em;margin-bottom:0em}}@media (min-width: 1200px){.rollover_title{font-size:3.42em;margin-bottom:0em}}.content_title{letter-spacing:0;font-weight:600}.module_title{font-size:1.69em;margin-bottom:.35em;text-align:center;font-weight:600;margin-top:0}@media (min-width: 600px), print{.module_title{font-size:1.95em;margin-bottom:.63em}}@media (min-width: 769px), print{.module_title{font-size:2.21em;margin-bottom:.91em}}@media (min-width: 1024px), print{.module_title{font-size:2.34em;margin-bottom:1.015em}}@media (min-width: 1200px){.module_title{font-size:2.47em;margin-bottom:1.12em}}@media (min-width: 600px), print{.grid_gallery .module_title{text-align:left;width:80%}}.module_title_small,.double_teaser .module_title{font-size:1.4em}@media (min-width: 600px), print{.module_title_small,.double_teaser .module_title{font-size:1.8em;margin-bottom:.85em}}.filter_bar .module_title_small,.filter_bar .double_teaser .module_title{text-align:left;width:90%}@media (min-width: 600px), print{.filter_bar .module_title_small,.filter_bar .double_teaser .module_title{text-align:center}}.category_title,.homepage_carousel .floating_text_area .category_title{font-size:.9em;font-weight:500;color:#f08d77;text-transform:uppercase;margin-bottom:6px;margin-top:0}.multimedia_teaser .category_title,.multimedia_teaser .homepage_carousel .floating_text_area .category_title,.homepage_carousel .floating_text_area .multimedia_teaser .category_title{font-size:.8em}.primary_media_feature .media_feature_title{font-size:1.43em;margin-bottom:0em;font-weight:400;color:white}@media (min-width: 600px), print{.primary_media_feature .media_feature_title{font-size:1.65em;margin-bottom:0em}}@media (min-width: 769px), print{.primary_media_feature .media_feature_title{font-size:1.87em;margin-bottom:0em}}@media (min-width: 1024px), print{.primary_media_feature .media_feature_title{font-size:1.98em;margin-bottom:0em}}@media (min-width: 1200px){.primary_media_feature .media_feature_title{font-size:2.09em;margin-bottom:0em}}.image_of_the_day .media_feature_title{font-size:1.43em;margin-bottom:0em;font-weight:600;color:white}@media (min-width: 600px), print{.image_of_the_day .media_feature_title{font-size:1.65em;margin-bottom:0em}}@media (min-width: 769px), print{.image_of_the_day .media_feature_title{font-size:1.87em;margin-bottom:0em}}@media (min-width: 1024px), print{.image_of_the_day .media_feature_title{font-size:1.98em;margin-bottom:0em}}@media (min-width: 1200px){.image_of_the_day 
.media_feature_title{font-size:2.09em;margin-bottom:0em}}.multimedia_module_gallery .media_feature_title{font-size:1.43em;margin-bottom:0em;color:white;font-weight:600}@media (min-width: 600px), print{.multimedia_module_gallery .media_feature_title{font-size:1.65em;margin-bottom:0em}}@media (min-width: 769px), print{.multimedia_module_gallery .media_feature_title{font-size:1.87em;margin-bottom:0em}}@media (min-width: 1024px), print{.multimedia_module_gallery .media_feature_title{font-size:1.98em;margin-bottom:0em}}@media (min-width: 1200px){.multimedia_module_gallery .media_feature_title{font-size:2.09em;margin-bottom:0em}}.article_title{font-size:1.82em;margin-bottom:0em;font-weight:600;margin-top:0}@media (min-width: 600px), print{.article_title{font-size:2.1em;margin-bottom:0em}}@media (min-width: 769px), print{.article_title{font-size:2.38em;margin-bottom:0em}}@media (min-width: 1024px), print{.article_title{font-size:2.52em;margin-bottom:0em}}@media (min-width: 1200px){.article_title{font-size:2.66em;margin-bottom:0em}}.magic_shell_title,#iframe_overlay .magic_shell_title{font-family:WhitneyCondensed-Bold,Helvetica,Arial,sans-serif;background-color:#000;font-size:2.6em;font-weight:normal;padding:.9em .5em;text-align:center;line-height:.8}@media (min-width: 600px), print{.magic_shell_title,#iframe_overlay .magic_shell_title{padding:.9em .5em;text-align:left;font-size:2.6em}}.magic_shell_title .parent_title,#iframe_overlay .magic_shell_title .parent_title{display:block}.magic_shell_title .parent_title a,#iframe_overlay .magic_shell_title .parent_title a{color:#f08d77;transition:color 400ms}.magic_shell_title .parent_title a:hover,#iframe_overlay .magic_shell_title .parent_title a:hover{text-decoration:none;color:white}@media (min-width: 600px), print{.magic_shell_title .parent_title,#iframe_overlay .magic_shell_title .parent_title{display:inline;margin-right:0.1em}}.magic_shell_title .article_title,#iframe_overlay .magic_shell_title .article_title{color:#FFF;font-weight:normal}.magic_shell_title .article_title,#iframe_overlay .magic_shell_title .article_title,.magic_shell_title .parent_title,#iframe_overlay .magic_shell_title .parent_title{font-size:.7em;text-transform:uppercase;letter-spacing:normal}.sidebar_title,#secondary_column .related_content_module .module_title,.right_col .related_content_module .module_title{font-size:1.55em;margin-bottom:0.6em;font-weight:700;margin-left:-1px;margin-top:0}.links_module a{font-size:1em;cursor:pointer}.content_page .bread_and_nav_container{margin-top:0;margin-bottom:1.1rem}.content_page .bread_and_nav_container .gradient_line,.content_page .bread_and_nav_container .related.module .gradient_line_module_top,.related.module .content_page .bread_and_nav_container .gradient_line_module_top{display:none}.bread_and_nav_container .gradient_line,.bread_and_nav_container .related.module .gradient_line_module_top,.related.module .bread_and_nav_container .gradient_line_module_top{margin-left:auto;margin-right:auto;content:" ";width:100%;height:1px;clear:both;background:#8c8c8c;background:linear-gradient(to right, rgba(140,140,140,0), #8c8c8c, rgba(140,140,140,0));background:-webkit-linear-gradient(to right, rgba(140,140,140,0), #8c8c8c, rgba(140,140,140,0));max-width:435px;margin:2rem auto 2.3rem}.breadcrumbs{color:#257cdf;margin:0 0 1.2rem 0}.breadcrumbs .crumb{text-transform:uppercase;font-size:.9rem;font-weight:400}.breadcrumbs .crumb:before{content:"\0203A";font-size:1rem;margin:0 .3rem}.breadcrumbs 
.crumb:first-of-type:before{content:""}.breadcrumbs.centered{text-align:center}.module{padding:2.5em 0 2.2em;position:relative}@media (min-width: 769px), print{.module{padding:4.8em 0 5em}}.grid_layout{max-width:100%;margin-left:auto;margin-right:auto;width:95%}.grid_layout:after{content:" ";display:block;clear:both}@media (min-width: 600px), print{.grid_layout{max-width:100%;margin-left:auto;margin-right:auto;width:95%}.grid_layout:after{content:" ";display:block;clear:both}}@media (min-width: 769px), print{.grid_layout{max-width:100%;margin-left:auto;margin-right:auto;width:90%}.grid_layout:after{content:" ";display:block;clear:both}}@media (min-width: 1024px), print{.grid_layout{max-width:1200px;width:97%}.content_page .grid_layout{width:90%}}.grid_list_page.module .grid_layout .module:first-of-type,.full_width .grid_layout .module:first-of-type{padding-top:0}.grid_list_page.module .grid_layout .grid_layout,.full_width .grid_layout .grid_layout{width:100%;max-width:none}.grid_layout .grid_layout{width:100%;max-width:none}@media (max-width: 480px){.suggested_features .grid_layout,.news_teaser .grid_layout,.carousel_teaser .grid_layout{width:100%}.suggested_features .grid_layout header,.news_teaser .grid_layout header,.carousel_teaser .grid_layout header{margin-left:auto;margin-right:auto;width:95%}.suggested_features .grid_layout footer,.news_teaser .grid_layout footer,.carousel_teaser .grid_layout footer{margin-left:auto;margin-right:auto;width:95%}}.gradient_container_top,.gradient_container_bottom,.white_gradient_container_bottom{height:200px;max-height:45%;width:100%;position:absolute;z-index:1}.homepage_carousel .gradient_container_top,.homepage_carousel .gradient_container_bottom,.homepage_carousel .white_gradient_container_bottom{z-index:7}.gradient_container_left{height:100%;width:70%;position:absolute;z-index:1}.gradient_container_top{background:-owg-linear-gradient(rgba(0,0,0,0.6), transparent);background:linear-gradient(rgba(0,0,0,0.6), transparent);pointer-events:none;top:0}.gradient_container_left{background:-owg-linear-gradient(to right, rgba(0,0,0,0.6), transparent);background:linear-gradient(to right, rgba(0,0,0,0.6), transparent);left:0;top:0}.gradient_container_bottom{background:-owg-linear-gradient(transparent, rgba(0,0,0,0.9));background:linear-gradient(transparent, rgba(0,0,0,0.9));pointer-events:none;bottom:0}.white_gradient_container_bottom{background:url("https://mars.nasa.gov/assets/white_gradient.png") repeat-x bottom left;bottom:0;height:100px;pointer-events:none}.gradient_bottom_grid{background-image:linear-gradient(to bottom, transparent 0%, #000 30%, #000 100%)}.content_page_template .grid_gallery.module{padding:2rem 0 2rem}.grid_gallery .gallery_header{margin-bottom:2em;z-index:1}@media (min-width: 769px), print{.grid_gallery .gallery_header{margin-bottom:3em}}.grid_gallery .gallery_header .module_title{margin-bottom:0.5em;text-align:left;margin-top:0}.grid_gallery .list_date{font-size:.9em;margin-bottom:.4em;color:#5A5A5A}.grid_gallery.grid_view{background:white}.grid_gallery.grid_view .content_title{letter-spacing:-.03em;display:none}.grid_gallery.grid_view .image_and_description_container{min-height:0}.grid_gallery.grid_view .article_teaser_body{display:none}.grid_gallery.grid_view .list_date{display:none}.grid_gallery.grid_view .list_image{width:100%;float:none;margin:0}.grid_gallery.grid_view .bottom_gradient{color:#222;display:block;position:relative;margin-top:0.3rem;padding-bottom:0.4rem;text-align:left;min-height:52px}@media (min-width: 769px), 
print{.grid_gallery.grid_view .bottom_gradient{margin-top:.5rem;min-height:85px}}.grid_gallery.grid_view .bottom_gradient div{text-align:left}.grid_gallery.grid_view .bottom_gradient h3{margin-top:0;font-weight:500;font-size:1em}.grid_gallery.grid_view li.slide{margin-bottom:.84034%;width:49.57983%;float:left}.grid_gallery.grid_view li.slide:nth-child(2n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery.grid_view li.slide:nth-child(2n+2){margin-left:50.42017%;margin-right:-100%;clear:none}@media (min-width: 600px), print{.grid_gallery.grid_view li.slide{margin-bottom:.84034%;width:32.77311%;float:left}.grid_gallery.grid_view li.slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery.grid_view li.slide:nth-child(3n+2){margin-left:33.61345%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(3n+3){margin-left:67.22689%;margin-right:-100%;clear:none}}@media (min-width: 769px), print{.grid_gallery.grid_view li.slide{width:24.36975%;float:left}.grid_gallery.grid_view li.slide:nth-child(4n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery.grid_view li.slide:nth-child(4n+2){margin-left:25.21008%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(4n+3){margin-left:50.42017%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(4n+4){margin-left:75.63025%;margin-right:-100%;clear:none}}@media (min-width: 1200px){.grid_gallery.grid_view li.slide{width:19.32773%;float:left}.grid_gallery.grid_view li.slide:nth-child(5n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery.grid_view li.slide:nth-child(5n+2){margin-left:20.16807%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(5n+3){margin-left:40.33613%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(5n+4){margin-left:60.5042%;margin-right:-100%;clear:none}.grid_gallery.grid_view li.slide:nth-child(5n+5){margin-left:80.67227%;margin-right:-100%;clear:none}}.grid_gallery.grid_view li.slide a{text-decoration:none}.content_page_template .grid_gallery.grid_view .slide{margin-bottom:.84034%;width:49.57983%;float:left}.content_page_template .grid_gallery.grid_view .slide:nth-child(2n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.content_page_template .grid_gallery.grid_view .slide:nth-child(2n+2){margin-left:50.42017%;margin-right:-100%;clear:none}@media (min-width: 769px), print{.content_page_template .grid_gallery.grid_view .slide{margin-bottom:.84034%;width:32.77311%;float:left}.content_page_template .grid_gallery.grid_view .slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.content_page_template .grid_gallery.grid_view .slide:nth-child(3n+2){margin-left:33.61345%;margin-right:-100%;clear:none}.content_page_template .grid_gallery.grid_view .slide:nth-child(3n+3){margin-left:67.22689%;margin-right:-100%;clear:none}}@media (min-width: 1024px), print{.content_page_template .grid_gallery.grid_view .slide{width:24.36975%;float:left}.content_page_template .grid_gallery.grid_view .slide:nth-child(4n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.content_page_template .grid_gallery.grid_view .slide:nth-child(4n+2){margin-left:25.21008%;margin-right:-100%;clear:none}.content_page_template .grid_gallery.grid_view .slide:nth-child(4n+3){margin-left:50.42017%;margin-right:-100%;clear:none}.content_page_template .grid_gallery.grid_view 
.slide:nth-child(4n+4){margin-left:75.63025%;margin-right:-100%;clear:none}}.grid_gallery.list_view .list_image{float:right;margin-left:4%;margin-bottom:.5em;width:32%}@media (min-width: 600px), print{.grid_gallery.list_view .list_image{margin-left:0;margin-bottom:0;width:23.07692%;float:left;margin-right:2.5641%}}@media (min-width: 769px), print{.grid_gallery.list_view .list_image{width:23.72881%;float:left;margin-right:1.69492%}}@media (min-width: 1024px), print{.grid_gallery.list_view .list_image{width:23.72881%;float:left;margin-right:1.69492%}}.grid_gallery.list_view .list_text{width:auto}@media (min-width: 600px), print{.grid_gallery.list_view .list_text{width:74.35897%;float:right;margin-right:0}}@media (min-width: 769px), print{.grid_gallery.list_view .list_text{width:74.57627%;float:right;margin-right:0}}@media (min-width: 1024px), print{.grid_gallery.list_view .list_text{width:66.10169%;float:left;margin-right:1.69492%}}.grid_gallery.list_view .content_title a{text-decoration:none;cursor:pointer;color:#222}.grid_gallery.list_view .content_title a:hover{text-decoration:underline}.grid_gallery.list_view .content_title{display:block;font-size:1.17em;margin-bottom:.1em;margin-bottom:.2em;font-weight:700;color:#222;letter-spacing:-.035em}@media (min-width: 600px), print{.grid_gallery.list_view .content_title{font-size:1.35em;margin-bottom:.18em}}@media (min-width: 769px), print{.grid_gallery.list_view .content_title{font-size:1.53em;margin-bottom:.26em}}@media (min-width: 1024px), print{.grid_gallery.list_view .content_title{font-size:1.62em;margin-bottom:.29em}}@media (min-width: 1200px){.grid_gallery.list_view .content_title{font-size:1.71em;margin-bottom:.32em}}.grid_gallery.list_view .bottom_gradient{display:none}@media (min-width: 1024px), print{.grid_gallery.list_view .article_teaser_body{font-size:1.1em}}.grid_gallery.list_view li.slide:first-child{border-top:1px solid #CCC}.grid_gallery.list_view li.slide{border-bottom:1px solid #CCC;padding:1.2em 0}.grid_gallery.list_view li.slide a{text-decoration:none;cursor:pointer}.view_selectors{position:relative;margin:0 auto;text-align:center;width:106px;text-align:right}@media (min-width: 769px), print{.view_selectors{position:absolute;right:0;top:0;height:100%}}.view_selectors .nav_item{display:inline-block;position:relative;background-repeat:no-repeat;width:50px;height:50px;cursor:pointer;background-image:url("https://mars.nasa.gov/assets/[email protected]");background-size:125px;background-color:#eef2f6;border-radius:50%;-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none}.view_selectors .nav_item.list_icon{background-position:-12px -62px}.no-touchevents .view_selectors .nav_item.list_icon:hover{background-position:-12px -12px}.list_view .view_selectors .nav_item.list_icon{background-position:-12px -12px}.view_selectors .nav_item.grid_icon{background-position:-62px -62px}.no-touchevents .view_selectors .nav_item.grid_icon:hover{background-position:-62px -12px}.grid_view .view_selectors .nav_item.grid_icon{background-position:-62px -12px}.grid_gallery#more_section .module_title{text-align:center;width:100%}.grid_gallery#more_section li.slide{margin-bottom:1.69492%;width:49.15254%;float:left}.grid_gallery#more_section li.slide:nth-child(2n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery#more_section li.slide:nth-child(2n+2){margin-left:50.84746%;margin-right:-100%;clear:none}@media (min-width: 1200px){.grid_gallery#more_section 
li.slide{width:32.20339%;float:left}.grid_gallery#more_section li.slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.grid_gallery#more_section li.slide:nth-child(3n+2){margin-left:33.89831%;margin-right:-100%;clear:none}.grid_gallery#more_section li.slide:nth-child(3n+3){margin-left:67.79661%;margin-right:-100%;clear:none}}.grid_gallery#more_section li.slide .image_and_description_container{position:relative}.grid_gallery#more_section li.slide a.slide_title{padding-top:.6em;display:block;color:#222;font-weight:400}.grid_gallery#more_section li.slide:hover a.slide_title{color:#366599}.special_list .event_list_by_location .date_section:first-child .date_title{border-top:1px solid #ccc}.special_list .event_list_by_location .date_section:last-child .country_section:last-child ul.item_list.list_view li.slide:last-child{border-bottom:0}.special_list .event_list_by_location .date_section .date_title{font-weight:bold;font-size:1.3em;margin:1em 0;padding-top:1em}.special_list .event_list_by_location .date_section .country_section{margin-top:1em}.special_list .event_list_by_location .date_section .country_section .country_title{font-size:1.7em;font-weight:400}.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view{margin-bottom:0}.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view .list_title,.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view p span{font-weight:700}.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view a.list_link{font-weight:500}@media (min-width: 600px), print{.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view .list_text.no_float{float:none}}.special_list .event_list_by_location .date_section .country_section ul.item_list.list_view li.slide:first-child{border-top:0}.special_list .event_list_by_location .category{background-color:#DDD;width:auto;display:inline-block;padding:.1em .3em;margin-bottom:.6em;font-weight:700;background-color:#DCEDF3;color:#1F414B;padding:.1em .5em}.special_list .event_list_by_location .date{margin-bottom:.5em}.wysiwyg_content ul,ol{margin-left:1.6em}.feature_pages .wysiwyg_content ol,.feature_pages .wysiwyg_content ul{list-style-position:inside}#secondary_column ul,ol{margin-left:1.2em}.wysiwyg_content ul,#secondary_column ul{margin-left:1.6em;list-style-type:disc;list-style-position:outside}.wysiwyg_content ul ul,#secondary_column ul ul{list-style-type:circle;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ul ul ul,#secondary_column ul ul ul{list-style-type:square;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ul ul ul ul,#secondary_column ul ul ul ul{list-style-type:disc;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ol,#secondary_column ol{margin-left:1.6em;list-style-type:decimal;list-style-position:outside}.wysiwyg_content ol ol,#secondary_column ol ol{list-style-type:decimal;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ol ol ol,#secondary_column ol ol ol{list-style-type:decimal;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ol ol ol ol,#secondary_column ol ol ol ol{list-style-type:decimal;margin-left:1.6em;margin-top:.5em}.wysiwyg_content ol,.wysiwyg_content ul,#secondary_column ol,#secondary_column ul{margin-bottom:2em}.wysiwyg_content ol:last-child,.wysiwyg_content ul:last-child,#secondary_column ol:last-child,#secondary_column ul:last-child{margin-bottom:0}.wysiwyg_content ol li,.wysiwyg_content ul 
li,#secondary_column ol li,#secondary_column ul li{margin-bottom:.5em}.wysiwyg_content .item_list_module,.wysiwyg_content .item_list,.wysiwyg_content .list_sublist,.wysiwyg_content .footnotes ul,.wysiwyg_content .sidebar_gallery,.wysiwyg_content .related_items,.wysiwyg_content .sitemap_directory ul,.wysiwyg_content .list_view_module ul,.wysiwyg_content .faq_topics ul,.wysiwyg_content .item_grid,.wysiwyg_content .related_content_module ul,.wysiwyg_content .sig_events_module ul,.wysiwyg_content ul.detailed_def_nav,#secondary_column .item_list_module,#secondary_column .item_list,#secondary_column .list_sublist,#secondary_column .footnotes ul,#secondary_column .sidebar_gallery,#secondary_column .related_items,#secondary_column .sitemap_directory ul,#secondary_column .list_view_module ul,#secondary_column .faq_topics ul,#secondary_column .item_grid,#secondary_column .related_content_module ul,#secondary_column .sig_events_module ul,#secondary_column ul.detailed_def_nav{margin-left:0;list-style-type:none;list-style-position:inside}.wysiwyg_content .item_list_module li,.wysiwyg_content .item_list li,.wysiwyg_content .list_sublist li,.wysiwyg_content .footnotes ul li,.wysiwyg_content .sidebar_gallery li,.wysiwyg_content .related_items li,.wysiwyg_content .sitemap_directory ul li,.wysiwyg_content .list_view_module ul li,.wysiwyg_content .faq_topics ul li,.wysiwyg_content .item_grid li,.wysiwyg_content .related_content_module ul li,.wysiwyg_content .sig_events_module ul li,.wysiwyg_content ul.detailed_def_nav li,#secondary_column .item_list_module li,#secondary_column .item_list li,#secondary_column .list_sublist li,#secondary_column .footnotes ul li,#secondary_column .sidebar_gallery li,#secondary_column .related_items li,#secondary_column .sitemap_directory ul li,#secondary_column .list_view_module ul li,#secondary_column .faq_topics ul li,#secondary_column .item_grid li,#secondary_column .related_content_module ul li,#secondary_column .sig_events_module ul li,#secondary_column ul.detailed_def_nav li{margin-bottom:0}.wysiwyg_content .faq_topics ul li a,#secondary_column .faq_topics ul li a{font-weight:500;line-height:1.8em}.module header{margin-bottom:1em;position:relative}.module footer{text-align:center;position:relative}.module footer a.detail_link{text-transform:uppercase;font-weight:400}a.detail_link:not(.msl .module footer){font-size:.9em}.module .module_title{font-weight:300;color:#6d3007}.wysiwyg_content>.module:first-child{padding-top:0}.insight_page .module .module_title{color:#000}.multimedia_teaser{-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none;overflow:hidden}#secondary_column aside .multimedia_teaser{position:relative}#secondary_column aside .multimedia_teaser .text{position:absolute;width:100%;text-align:center;padding:0 1.4em 2em;bottom:0}#secondary_column aside .multimedia_teaser .text .category_title,#secondary_column aside .multimedia_teaser .text .media_feature_title{color:white}.multimedia_teaser{-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none;overflow:hidden}.multimedia_teaser .util-carousel{margin-bottom:2em;width:190%}@media (min-width: 480px){.multimedia_teaser .util-carousel{width:90%}}@media (min-width: 769px), print{.multimedia_teaser 
.util-carousel{margin-bottom:3em}}.suggested_features.module{-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none;background-color:#eef2f6}.related.module{-webkit-touch-callout:none;-webkit-user-select:none;-moz-user-select:-moz-none;-ms-user-select:none;user-select:none;padding-top:1em}.related.module .module_title{text-align:left;font-size:2em}.related.module .gradient_line_module_top{margin:0 0 2em}.carousel_teaser.related .module_title{text-align:center}@media (min-width: 600px), print{.carousel_teaser.related .module_title{text-align:left;width:88%;margin-left:auto;margin-right:auto}}@media (min-width: 769px), print{.carousel_teaser.related .module_title{width:88.5%}}@media (min-width: 1024px), print{.carousel_teaser.related .module_title{width:89%}}section.site_teaser .img_col{width:100%;margin-bottom:1.5em}@media (min-width: 600px), print{section.site_teaser .img_col{width:40.78947%;float:left;margin-right:5.26316%;margin-bottom:0}}section.site_teaser .text_col{width:100%}@media (min-width: 600px), print{section.site_teaser .text_col{width:53.94737%;float:left;margin-right:5.26316%;float:right;margin-right:0}}section.site_teaser .text_col .category_title{font-size:0.9em}section.site_teaser .text_col p{margin:1em 0 1.7em}section.site_teaser .site_teaser_caption{margin:.5em 1em 0 0;text-align:right;font-size:.8em}section.site_teaser footer{text-align:center}@media (min-width: 600px), print{section.site_teaser footer{text-align:left}}section.site_teaser .button{padding:0.8em 1.2em}section.more_bar{text-align:center;background-color:#4d91a6;color:black;height:36px;cursor:pointer;position:relative}section.more_bar .title,section.more_bar .arrow_down{display:inline-block;vertical-align:middle;margin-top:6px}section.more_bar .arrow_down{padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -50px -125px;background-size:300px}section.more_bar .arrow_down:hover,section.more_bar .arrow_down.active{background:url("https://mars.nasa.gov/assets/[email protected]") -50px -125px;background-size:300px}.insight_page section.more_bar{color:#7BBBFF;background-color:#2F6197;font-weight:500}.insight_page section.more_bar .arrow_down{background:none}.insight_page section.more_bar .icon{display:inline-block;width:11px;height:11px;border-radius:50%;vertical-align:middle}.insight_page section.more_bar .icon svg{width:100%;height:100%;fill:#7BBBFF;display:block}.latest_update_module .latest_update_header{display:none}.content_page .latest_update_module{margin:1rem 0}.content_page .latest_update_module .latest_update_header{display:block}.content_page .latest_update_module .latest_update_header h2{margin:1.5rem 0 1rem}.content_page .latest_update_module .latest_update_header h4.subhead{margin:0;font-size:1.3rem}.content_page .latest_update_module .list_text .list_title,.content_page .latest_update_module .list_text .desc_title{display:none}.content_page .latest_update_module .list_image{width:41%;float:right;margin:0 0 0.9rem 1.2rem}@media (min-width: 600px), print{.content_page .latest_update_module .list_image{width:27%}}.content_page .latest_update_module .list_text{float:none;width:auto}.content_page .latest_update_module .description p:first-of-type{margin-top:0}.content_page .latest_update_module ul.item_list{margin-bottom:0}.content_page .latest_update_module ul.item_list li:last-child{margin-bottom:0}.content_page .latest_update_module ul.item_list li:first-child 
.list_content{padding:0}.content_page .latest_update_module .latest_article_link{margin-top:0}.inline_dashboard_item{display:inline}.inline_dashboard_item div{display:inline-block}.my_map_container{width:100%}.my_map_container iframe{width:100%}.content_page_template .featured_map.module{padding:0;margin:3em 0}.featured_map .featured_map_title{margin:.8rem 0 0}.openseadragon{background-color:black}.openseadragon div[title^='Zoom'],.openseadragon div[title^='Go'],.openseadragon div[title^='Toggle']{margin:6px 0 0 6px !important;cursor:pointer}.feature_pages .deep_zoom_module{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .deep_zoom_module{max-width:600px}}.feature_pages .deep_zoom_module.full-bleed,.feature_pages .deep_zoom_module.full_width,.feature_pages .deep_zoom_module.wide,.feature_pages .deep_zoom_module.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .deep_zoom_module.full-bleed,.feature_pages .deep_zoom_module.full_width,.feature_pages .deep_zoom_module.wide,.feature_pages .deep_zoom_module.parallax{margin-top:5em;margin-bottom:5em}}.feature_pages .deep_zoom_module.column-width{max-width:94%;margin-top:3em;margin-bottom:3em;clear:both}@media (min-width: 600px), print{.feature_pages .deep_zoom_module.column-width{max-width:600px}}.feature_pages .deep_zoom_module.full-bleed{width:100%;max-width:none}.feature_pages .deep_zoom_module.full-bleed figcaption{margin:.8em .8em 0 .8em}.feature_pages .deep_zoom_module.full_width{clear:both}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.full_width{width:94%;max-width:600px;margin-left:auto;margin-right:auto}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.feature_pages .deep_zoom_module.full_width{width:80%}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.feature_pages .deep_zoom_module.full_width{width:55%}}.feature_pages .deep_zoom_module.wide{width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.wide{width:95%}}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.column-width{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages .deep_zoom_module.column-width{max-width:calc(600px + 10%)}}@media (min-width: 1200px){.feature_pages .deep_zoom_module.column-width{max-width:calc(600px + 15%)}}.feature_pages .deep_zoom_module.left,.feature_pages .deep_zoom_module.right{max-width:94%}@media (min-width: 600px), print{.feature_pages .deep_zoom_module.left,.feature_pages .deep_zoom_module.right{width:50%;max-width:50%}}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.left,.feature_pages .deep_zoom_module.right{width:27%;max-width:27%}}@media (min-width: 1700px){.feature_pages .deep_zoom_module.left,.feature_pages .deep_zoom_module.right{width:25%;max-width:25%}}@media (min-width: 600px), print{.feature_pages .deep_zoom_module.left{float:left;margin:1em 2.5em 1.5em 0;margin-left:3%}}@media (min-width: 1200px){.feature_pages .deep_zoom_module.left{margin-left:15%}}@media (min-width: 1700px){.feature_pages .deep_zoom_module.left{margin-left:20%}}@media (min-width: 480px){.feature_pages .deep_zoom_module.right{float:right;margin:1em 0 1.5em 2.5em;margin-right:3%}}@media (min-width: 1200px){.feature_pages .deep_zoom_module.right{margin-right:15%}}@media (min-width: 1700px){.feature_pages .deep_zoom_module.right{margin-right:20%}}.feature_pages 
.deep_zoom_module.parallax_module{position:relative;overflow:hidden;z-index:10;padding-bottom:0;width:100%;max-width:none}.feature_pages .deep_zoom_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.parallax_module .caption{font-size:.88em}}.feature_pages .deep_zoom_module.parallax_module img{height:auto !important}.feature_pages .deep_zoom_module.parallax_module .window{width:100%;height:auto;position:absolute;overflow:hidden;padding:2em}.feature_pages .deep_zoom_module.parallax_module .window.mobile{height:auto;min-height:100%}.feature_pages .deep_zoom_module.parallax_module .window .featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}@media (min-width: 769px), print{.feature_pages .deep_zoom_module.parallax_module .window .featured_image{position:absolute}}.deep_zoom_module .deepzoom_caption{margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.deep_zoom_module .deepzoom_caption{font-size:.88em}}.content_page .deep_zoom_module{margin-bottom:2em}.megasection_nav_present.msl section.more_bar{background-color:#e34e41}.megasection_nav_present.msl section.more_bar .title{font-weight:500}.megasection_nav_present.msl section.more_bar .arrow_down .icon.svg_icon_container{width:10px}.megasection_nav_present.msl section.suggested_features{background:#FFF6F5}.megasection_nav_present.msl section.suggested_features h1.module_title{font-weight:500}.megasection_nav_present.msl section.suggested_features button{background-color:#e34e41}.megasection_nav_present.msl section.suggested_features button:hover{background-color:rgba(227,78,65,0.7)}.homepage_carousel .floating_text_area{width:100%;padding:1.4em;bottom:120px;text-align:center;margin-left:auto;margin-right:auto;color:white;background-color:rgba(0,0,0,0.4)}@media (min-width: 769px), print{.homepage_carousel .floating_text_area{bottom:calc(125px + 3em);background-color:transparent}}@media only screen and (max-height: 600px) and (orientation: landscape){.homepage_carousel .floating_text_area{padding:.9em;bottom:111px}}.homepage_carousel .floating_text_area .description{display:block;max-height:130px;overflow-y:auto;padding:0 1.4em;color:#ffffff;font-weight:300}.homepage_carousel .floating_text_area .description a{font-weight:500;color:#69B9FF}.homepage_carousel .floating_text_area .description a.multi_links{display:block}@media (max-height: 600px) and (max-width: 769px) and (orientation: landscape){.homepage_carousel .floating_text_area .description a.multi_links{display:inline-block}.homepage_carousel .floating_text_area .description a.multi_links+.multi_links{margin-left:0.9rem}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area .description{display:block;line-height:1.4em;padding:0;max-height:none;overflow:hidden}}.no-touchevents .homepage_carousel .floating_text_area .description{display:none}@media (min-width: 769px), print{.no-touchevents .homepage_carousel .floating_text_area .description{display:block !important}}.homepage_carousel .floating_text_area .description .detail_link{display:inline-block;color:#69B9FF;text-transform:none}.homepage_carousel .floating_text_area .description .detail_link:hover{text-decoration:none;color:#ffffff}@media (min-width: 769px), print{.homepage_carousel .floating_text_area .description .detail_link{display:none}}.homepage_carousel .floating_text_area footer{margin:0}@media (min-width: 769px), print{.homepage_carousel .floating_text_area 
footer{margin:1.6em 0 0}}.homepage_carousel .floating_text_area .category_title{margin-top:0}.homepage_carousel .floating_text_area .media_feature_title{color:white;margin-top:0;margin-bottom:.4em;font-size:1.6em;width:70%;margin-left:auto;margin-right:auto;position:relative;font-weight:400}.homepage_carousel .floating_text_area .media_feature_title a{color:white;text-decoration:none}@media (min-width: 600px), print{.homepage_carousel .floating_text_area .media_feature_title{font-size:2em;width:80%}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area .media_feature_title{font-size:1.5em;margin-bottom:.4em;width:100%}}@media (min-width: 1024px), print{.homepage_carousel .floating_text_area .media_feature_title{font-size:1.6em}}@media (min-width: 1200px){.homepage_carousel .floating_text_area .media_feature_title{font-size:1.8em}}@media (min-width: 1700px){.homepage_carousel .floating_text_area .media_feature_title{font-size:1.9em}}.homepage_carousel .floating_text_area .media_feature_title span.arrow{background:url("https://mars.nasa.gov/assets/[email protected]") center no-repeat;position:absolute;right:-25%;margin-top:-0.2em;background-size:12px;height:44px;width:44px;bottom:-9px}@media (min-width: 600px), print{.homepage_carousel .floating_text_area .media_feature_title span.arrow{margin-top:0;right:-10%}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area .media_feature_title span.arrow{display:none}}@media (orientation: landscape){.homepage_carousel .floating_text_area .media_feature_title span.arrow{display:none}}.homepage_carousel .floating_text_area .button,.homepage_carousel .floating_text_area .button:hover{display:none;text-transform:none;font-size:16px;font-weight:400;border-radius:3px;padding:9px 16px}@media (min-width: 769px), print{.homepage_carousel .floating_text_area .button,.homepage_carousel .floating_text_area .button:hover{display:inline-block !important;color:white !important}}.no-touchevents .homepage_carousel .floating_text_area .button{transition:background 300ms}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable .media_feature_title{font-size:1.5em}}@media (min-width: 1024px), print{.homepage_carousel .floating_text_area.expandable .media_feature_title{font-size:1.7em}}@media (min-width: 1200px){.homepage_carousel .floating_text_area.expandable .media_feature_title{font-size:2.0em}}@media (min-width: 1700px){.homepage_carousel .floating_text_area.expandable .media_feature_title{font-size:2.2em}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area{text-align:left;padding:1.4em;margin:0}.homepage_carousel .floating_text_area.box,.homepage_carousel .floating_text_area.no-box{position:absolute;width:400px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 1024px), print and (min-width: 769px), print{.homepage_carousel .floating_text_area.box,.homepage_carousel .floating_text_area.no-box{width:480px}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.homepage_carousel .floating_text_area.box,.homepage_carousel .floating_text_area.no-box{width:530px}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.box.left,.homepage_carousel .floating_text_area.no-box.left{left:8%}}@media (min-width: 769px) and (min-width: 1700px), print and (min-width: 1700px){.homepage_carousel .floating_text_area.box.left,.homepage_carousel .floating_text_area.no-box.left{left:4%}}@media (min-width: 769px), print{.homepage_carousel 
.floating_text_area.box.right,.homepage_carousel .floating_text_area.no-box.right{right:8%}}@media (min-width: 769px) and (min-width: 1700px), print and (min-width: 1700px){.homepage_carousel .floating_text_area.box.right,.homepage_carousel .floating_text_area.no-box.right{right:4%}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.box{background-color:rgba(0,0,0,0.6)}.homepage_carousel .floating_text_area.no-box .media_feature_title{font-size:2.7em;font-weight:400}.homepage_carousel .floating_text_area.no-box .description{font-size:1.5em}.homepage_carousel .floating_text_area.centered{bottom:calc(58px + 3em);text-align:center;left:0;right:0;margin:auto;width:98%;max-width:900px}.homepage_carousel .floating_text_area.centered .media_feature_title{font-size:2.5em;font-weight:500}}@media (min-width: 769px) and (min-width: 600px), print and (min-width: 600px), print and (min-width: 769px), print{.homepage_carousel .floating_text_area.centered .media_feature_title{font-size:2.9em}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable,.homepage_carousel .floating_text_area.expandable_light{transition:background-color .5s ease-out;width:40%;top:auto}.homepage_carousel .floating_text_area.expandable footer,.homepage_carousel .floating_text_area.expandable_light footer{margin-bottom:1em}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable,.homepage_carousel .floating_text_area.expandable_light{width:415px}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.homepage_carousel .floating_text_area.expandable,.homepage_carousel .floating_text_area.expandable_light{width:480px}}@media (min-width: 769px) and (min-width: 1700px), print and (min-width: 1700px){.homepage_carousel .floating_text_area.expandable,.homepage_carousel .floating_text_area.expandable_light{width:600px}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable.left,.homepage_carousel .floating_text_area.expandable_light.left{left:8%}}@media (min-width: 769px) and (min-width: 1700px), print and (min-width: 1700px){.homepage_carousel .floating_text_area.expandable.left,.homepage_carousel .floating_text_area.expandable_light.left{left:4%}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable.right,.homepage_carousel .floating_text_area.expandable_light.right{right:8%}}@media (min-width: 769px) and (min-width: 1700px), print and (min-width: 1700px){.homepage_carousel .floating_text_area.expandable.right,.homepage_carousel .floating_text_area.expandable_light.right{right:4%}}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.expandable .description,.homepage_carousel .floating_text_area.expandable_light .description{max-height:0;overflow:hidden;transition:all .7s}.homepage_carousel .floating_text_area.expandable .media_feature_title:after,.homepage_carousel .floating_text_area.expandable_light .media_feature_title:after{content:url("https://mars.nasa.gov/assets/arrow_down_prompt.png");transition:opacity .25s;position:relative;top:-4px;left:10px;opacity:1}.homepage_carousel .floating_text_area.expandable .media_feature_title span.arrow,.homepage_carousel .floating_text_area.expandable_light .media_feature_title span.arrow{display:none}.homepage_carousel .floating_text_area.expandable:hover:before,.homepage_carousel 
.floating_text_area.expandable_light:hover:before{opacity:0}.homepage_carousel .floating_text_area.expandable:hover .description,.homepage_carousel .floating_text_area.expandable_light:hover .description{max-height:400px}.homepage_carousel .floating_text_area.expandable:hover .media_feature_title:after,.homepage_carousel .floating_text_area.expandable_light:hover .media_feature_title:after{opacity:0}.homepage_carousel .floating_text_area.expandable{background-color:rgba(0,0,0,0.6)}.homepage_carousel .floating_text_area.expandable_light{background-color:rgba(255,255,255,0.9)}.homepage_carousel .floating_text_area.expandable_light .media_feature_title,.homepage_carousel .floating_text_area.expandable_light .description{color:#452520}.homepage_carousel .floating_text_area.expandable_light .category_title{color:#d63e1c}.homepage_carousel .floating_text_area.expandable_light .media_feature_title:after{content:url("https://mars.nasa.gov/assets/arrow_down_light.png")}}.homepage_carousel .floating_text_area.open span.arrow{transform:rotate(180deg)}@media (min-width: 769px), print{.homepage_carousel .floating_text_area.open .description{display:block}}.primary_media_feature .floating_text_area{bottom:2em}.banner_header_overlay{bottom:1em}.primary_media_feature .floating_text_area,.banner_header_overlay{position:absolute;z-index:12;color:white;width:100%;text-align:center;padding:0 1%}.primary_media_feature .floating_text_area .category_title,.banner_header_overlay .category_title,.banner_header_overlay .homepage_carousel .floating_text_area .category_title,.homepage_carousel .floating_text_area .banner_header_overlay .category_title{color:white;margin-bottom:0.7em;margin-top:0}.primary_media_feature .floating_text_area .description,.banner_header_overlay .description{margin:-.5em auto 1em}@media (min-width: 769px), print{.primary_media_feature .floating_text_area .description,.banner_header_overlay .description{width:500px;margin-bottom:1.5em}}@media (min-width: 1024px), print{.primary_media_feature .floating_text_area .description,.banner_header_overlay .description{width:550px}}.primary_media_feature .floating_text_area .media_feature_title,.banner_header_overlay .media_feature_title{color:white;margin-bottom:.4em;font-size:1.93em}@media (min-width: 600px), print{.primary_media_feature .floating_text_area .media_feature_title,.banner_header_overlay .media_feature_title{font-size:2.8em}}.primary_media_feature .floating_text_area .media_feature_title a,.banner_header_overlay .media_feature_title a{color:white;text-decoration:none}.custom_banner_container{height:190px;width:100%;background-size:cover;background-position:center}@media only screen and (orientation: landscape){.custom_banner_container{height:260px}}@media (min-width: 600px), print{.custom_banner_container{height:420px}}@media only screen and (min-width: 600px) and (orientation: landscape){.custom_banner_container{height:350px}}@media (min-width: 769px), print{.custom_banner_container{height:400px}}@media only screen and (min-width: 769px) and (orientation: landscape){.custom_banner_container{height:400px}}@media (min-width: 1024px), print{.custom_banner_container{height:440px}}@media (min-width: 1200px){.custom_banner_container{height:550px}}@media (min-width: 1700px){.custom_banner_container{height:660px}}.custom_banner_container .banner_header_overlay{position:absolute;width:100%;bottom:0;z-index:2}.custom_banner_container .article_title{margin-bottom:.5em;text-align:center;color:#FFF}.custom_banner_container 
.image_details .buttons .addthis_toolbox{border-radius:4px;overflow:hidden}#fancybox_info .image_details .buttons .addthis_toolbox img,#fancybox_video .image_details .buttons .addthis_toolbox img{height:37px !important;width:auto !important}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox img,#fancybox_video .image_details .buttons .addthis_toolbox img{height:38px !important}}#fancybox_info .image_details .buttons .close_button,#fancybox_video .image_details .buttons .close_button{margin-left:12px}#fancybox_info .image_details .buttons a.button,#fancybox_video .image_details .buttons a.button{padding-left:16px;padding-right:16px}#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button{float:left;margin-left:0;margin-right:12px}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button{float:right;margin-left:12px;margin-right:0}}@media (min-width: 1024px), print{#fancybox_info .image_details .buttons .addthis_toolbox,#fancybox_info .image_details .buttons a.button,#fancybox_info .image_details .buttons .close_button,#fancybox_video .image_details .buttons .addthis_toolbox,#fancybox_video .image_details .buttons a.button,#fancybox_video .image_details .buttons .close_button{margin-bottom:12px}}#fancybox_info .close_button,#fancybox_video .close_button{padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px 0px;background-size:300px;z-index:8060;position:relative;display:block;float:right}#fancybox_info .close_button:hover,#fancybox_info .close_button.active,#fancybox_video .close_button:hover,#fancybox_video .close_button.active{background:url("https://mars.nasa.gov/assets/[email protected]") -25px 0px;background-size:300px}figure{margin-bottom:1em;max-width:100%}@media (min-width: 769px), print{figure{margin-bottom:2em}}figure figcaption,figure figcaption p{margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{figure figcaption,figure figcaption p{font-size:.88em}}.explore_overlay_page figcaption,.explore_overlay_page figcaption p{color:#b0b4b9}@media (max-width: 480px){figure.lede.full_width{width:100%}figure.lede.full_width figcaption{margin-left:auto;margin-right:auto}}#secondary_column aside figure{margin-bottom:1em}#secondary_column aside figure figcaption{margin-bottom:0}.inline_caption{margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.inline_caption{font-size:.88em}}.content_page #page_header{margin-bottom:1.5em}@media (min-width: 769px), print{.content_page #page_header{margin-bottom:2em}}.content_page #page_header .author{margin:.5em 0 1.8em}.content_page #page_header .crumb+.crumb:before{font-size:1.3em;color:#257cdf;content:'\a0\203a\a0'}.content_page.event_page h1,.content_page.event_page h2,.content_page.event_page h3,.content_page.event_page h4,.content_page.event_page h5{margin:0}.content_page .release_date{font-size:1em;color:#222;text-transform:none}.content_page .category_title,.content_page .homepage_carousel .floating_text_area .category_title,.homepage_carousel .floating_text_area .content_page .category_title{color:#222}.content_page .category_title a,.content_page 
.homepage_carousel .floating_text_area .category_title a,.homepage_carousel .floating_text_area .content_page .category_title a{color:#257cdf}.content_page .audio_player{margin-top:2rem;margin-bottom:2rem}.content_page .main_feature .master-slider,.content_page .jpl_carousel .master-slider{width:100%;height:300px}@media (min-width: 600px), print{.content_page .main_feature .master-slider,.content_page .jpl_carousel .master-slider{height:400px}}.content_page .main_feature .master-slider .gradient_container_bottom,.content_page .jpl_carousel .master-slider .gradient_container_bottom{height:80px}.content_page .main_feature .master-slider .ms-nav-next,.content_page .main_feature .master-slider .ms-nav-prev,.content_page .jpl_carousel .master-slider .ms-nav-next,.content_page .jpl_carousel .master-slider .ms-nav-prev{display:none}@media (min-width: 769px), print{.content_page .main_feature .master-slider .ms-nav-next,.content_page .main_feature .master-slider .ms-nav-prev,.content_page .jpl_carousel .master-slider .ms-nav-next,.content_page .jpl_carousel .master-slider .ms-nav-prev{display:block}}.content_page .main_feature .master-slider .ms-bullets,.content_page .jpl_carousel .master-slider .ms-bullets{bottom:30px}.content_page .main_feature .master-slider .ms-bullets-count,.content_page .jpl_carousel .master-slider .ms-bullets-count{right:-50%;position:absolute}.content_page .main_feature .master-slider .ms-bullet,.content_page .jpl_carousel .master-slider .ms-bullet{background-color:white;background-image:none;border-radius:50% 50% 50% 50%;height:10px;width:10px;opacity:0.5;margin:0 10px}.content_page .main_feature .master-slider .ms-bullet:hover,.content_page .main_feature .master-slider .ms-bullet.ms-bullet-selected,.content_page .jpl_carousel .master-slider .ms-bullet:hover,.content_page .jpl_carousel .master-slider .ms-bullet.ms-bullet-selected{opacity:1.0}.content_page #primary_column{margin-bottom:5.26316%}@media (min-width: 600px), print{.content_page #primary_column{width:61.53846%;float:left;margin-right:2.5641%;margin-bottom:0}}@media (min-width: 769px), print{.content_page #primary_column{width:64.40678%;float:left;margin-right:1.69492%}}@media (min-width: 1024px), print{.content_page #primary_column{width:61.86441%;float:left;margin-right:1.69492%}}@media (min-width: 1200px){.content_page #primary_column{width:59.32203%;float:left;margin-right:1.69492%}}@media (min-width: 600px), print{.content_page #secondary_column{width:35.89744%;float:right;margin-right:0}}@media (min-width: 769px), print{.content_page #secondary_column{width:32.20339%;float:right;margin-right:0}}.content_page.full_width #primary_column,.content_page.full_width #secondary_column{width:100%}.content_page.feature{padding:2em 0 5.3em}.content_page.feature #secondary_column{display:none}.content_page.feature #primary_column{width:64.40678%;float:left;margin-right:1.69492%;margin:auto;padding:1em 0 5.3em;float:none}.content_page a[name]:not([href]){top:-58px px;visibility:hidden}@media (min-width: 769px), print{.content_page a[name]:not([href]){top:-80px}}@media (min-width: 1024px), print{.content_page a[name]:not([href]){top:-84px}}@media (min-width: 1200px){.content_page a[name]:not([href]){top:-92px}}@media (min-width: 1700px){.content_page a[name]:not([href]){top:-98px}}.content_page #on_this_page_column{width:18%;float:left;margin-right:4%;display:none;z-index:1}@media (min-width: 769px), print{.content_page #on_this_page_column{display:block}}.content_page .mb_section_anchor,.content_page 
.mb_anchor{top:-58px px;visibility:hidden}@media (min-width: 769px), print{.content_page .mb_section_anchor,.content_page .mb_anchor{top:-80px}}@media (min-width: 1024px), print{.content_page .mb_section_anchor,.content_page .mb_anchor{top:-84px}}@media (min-width: 1200px){.content_page .mb_section_anchor,.content_page .mb_anchor{top:-92px}}#secondary_column>:first-child{margin-top:0}#secondary_column{word-wrap:break-word}#secondary_column aside{margin-bottom:7.14286%}#secondary_column aside:last-child{margin-bottom:0}#secondary_column aside.boxed{border:1px solid #C1C1C1;padding:5.26316%}#secondary_column aside.none{border:0;padding:0}#secondary_column aside>:first-child{margin-top:0}#secondary_column aside>:last-child{margin-bottom:0}#secondary_column aside.links_module li{margin-bottom:.5em}#secondary_column aside.downloads_module .download{margin-bottom:1em}#secondary_column aside.downloads_module .download:last-of-type{margin-bottom:0}#secondary_column aside.downloads_module .button{margin-top:1em}#secondary_column aside.downloads_module .photojournal{margin:2.2em 0 1.5em 0;font-weight:500}#secondary_column aside.list_view_module a{text-decoration:none}#secondary_column aside.list_view_module ul{margin-bottom:1.5em}#secondary_column aside.list_view_module li{padding:.6em 0}#secondary_column aside.list_view_module li:last-child{padding-bottom:0}#secondary_column aside.list_view_module .list_image{float:right;margin-left:4%;margin-bottom:.5em;width:32%}@media (min-width: 600px), print{#secondary_column aside.list_view_module .list_image{margin-left:0;margin-bottom:0;float:left;width:31.03448%;float:left;margin-right:3.44828%}}@media (min-width: 600px), print{#secondary_column aside.list_view_module .list_text{width:65.51724%;float:right;margin-right:0}}#secondary_column aside.list_view_module .list_title{letter-spacing:-.01em;font-weight:700}#secondary_column aside.list_view_module .list_title:hover{color:#222}#secondary_column aside.sig_events_module h4{margin-bottom:1em}#secondary_column aside.sig_events_module h4:last-child{margin-bottom:0}#secondary_column aside.sig_events_module ul{margin-bottom:0}#secondary_column aside.sig_events_module ul li{margin-bottom:.5em}#secondary_column .inline_image{margin-bottom:7.14286%}#secondary_column .inline_image .inline_caption{display:block;margin-top:.8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{#secondary_column .inline_image .inline_caption{font-size:.88em}}#secondary_column .related_content_module{margin:0 0 7.14286% 0;padding:5.26316%;width:100%;border:1px solid #bebebe}#secondary_column .related_content_module li{width:100%;border-bottom:1px solid #bebebe}#secondary_column .related_content_module li:last-child{border-bottom:none;padding-bottom:0}#secondary_column .related_content_module li:first-child{border-top:none}#secondary_column .related_content_module>:last-child{margin-bottom:0}.insight_page a{color:#4A97E1}.event_category{background-color:#DDD;width:auto;display:inline-block;padding:.1em .3em;margin-bottom:.6em;font-weight:700;margin-top:.6em}.on_this_page_nav h4{color:#999;font-size:.74rem;text-transform:uppercase;margin-bottom:.7rem}.on_this_page_nav li{margin-bottom:0.4rem}.on_this_page_nav li a{cursor:pointer;color:#212121;font-weight:500;font-size:.9rem;display:inline-block;line-height:1.2}@media (min-width: 1200px){.on_this_page_nav li 
a{font-size:.97rem}}img.raw_thumbnail{width:auto}a.main_image_enlarge,a.inline_image_enlarge,.image_enlarge{display:block;position:relative;height:100%}a.main_image_enlarge .enlarge_icon,a.inline_image_enlarge .enlarge_icon,.image_enlarge .enlarge_icon{position:absolute;border-radius:6px;border:1px solid rgba(200,200,200,0.8);left:15px;bottom:15px;width:40px;height:40px;background-color:rgba(0,0,0,0.5);background-image:url("https://mars.nasa.gov/assets/zoom_icon.png");background-size:50%;background-repeat:no-repeat;background-position:50%;opacity:0;transition:opacity 0.2s ease-in}a.main_image_enlarge:hover .enlarge_icon,a.inline_image_enlarge:hover .enlarge_icon,.image_enlarge:hover .enlarge_icon{opacity:0.8}body #fancybox-lock{z-index:200000}.article_nav{display:none}@media (min-width: 1024px), print{.article_nav{display:block;position:relative;z-index:11}.article_nav .article_nav_block{position:fixed;height:86px;display:inline-block;top:42.5%}.article_nav .article_nav_block .link_box{width:40px;background-color:#e4e9ef;display:inline;height:100%}.article_nav .article_nav_block .article_details{display:inline;width:250px;background-color:#FFF;text-decoration:none;color:#000;padding:10px;background-color:#e4e9ef}.article_nav .article_nav_block .article_details .img{margin-bottom:6px}.article_nav .article_nav_block .article_details .title{font-weight:700;font-size:.9em}.article_nav .article_nav_block.prev{left:0}.article_nav .article_nav_block.prev .link_box{float:left}.article_nav .article_nav_block.prev .article_details{float:left;display:none}.article_nav .article_nav_block.next{right:0}.article_nav .article_nav_block.next .link_box{float:right}.article_nav .article_nav_block.next .article_details{display:none;float:right}.no-touchevents .article_nav .article_nav_block:hover .article_details{display:block}}.feature_pages{padding:1em 0 3.8em}.feature_pages #page_header{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages #page_header{width:80%}}@media (min-width: 1200px){.feature_pages #page_header{width:55%}}.feature_pages #on_this_page_column{display:none}@media (min-width: 1024px), print{.feature_pages #on_this_page_column{display:block;width:16%;margin-left:8px;position:absolute;z-index:1}}@media (min-width: 1200px){.feature_pages #on_this_page_column{margin-left:4%}}.feature_pages #on_this_page_column .on_this_page_nav{display:none}.feature_pages #primary_column{width:100%;margin:0}.feature_pages .wysiwyg_content>*{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages .wysiwyg_content>*{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content>*{width:55%}}.feature_pages .wysiwyg_content p{font-size:18px;line-height:28px}@media (min-width: 769px), print{.feature_pages .wysiwyg_content p{font-size:19px;line-height:30px}}.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module),.feature_pages .wysiwyg_content>ol{list-style-position:outside;padding-left:1em}.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module) ul,.feature_pages .wysiwyg_content>ul:not(.item_list_module):not(.item_grid_module) ol,.feature_pages .wysiwyg_content>ol ul,.feature_pages .wysiwyg_content>ol ol{list-style-position:outside}.top_feature_area{text-align:center;position:relative}.top_feature_area .header_overlay{position:absolute;width:100%;padding:0 1%;top:46%;color:white;transform:translateY(-50%)}@media (min-width: 769px), 
print{.top_feature_area .header_overlay{padding:0 4%}}.top_feature_area .header_overlay .article_title{font-size:1.2em;margin-bottom:0}@media (min-width: 480px){.top_feature_area .header_overlay .article_title{font-size:1.6em}}@media (min-width: 600px), print{.top_feature_area .header_overlay .article_title{font-size:1.9em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .article_title{font-size:2.2em}}@media (min-width: 1024px), print{.top_feature_area .header_overlay .article_title{font-size:2.8em}}@media (min-width: 1200px){.top_feature_area .header_overlay .article_title{font-size:3.2em}}@media (min-width: 1700px){.top_feature_area .header_overlay .article_title{font-size:3.4em}}.top_feature_area .header_overlay .sub_title{font-size:1.2em}@media (min-width: 480px){.top_feature_area .header_overlay .sub_title{font-size:1.5em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .sub_title{font-size:1.9em}}.top_feature_area .header_overlay .author{padding:0.2em 0.5em 0.5em 0.6em;background-color:rgba(0,0,0,0.5);margin:0.2em auto 1em;display:inline-block}@media (min-width: 480px){.top_feature_area .header_overlay .author{margin-top:0.4em}}@media (min-width: 600px), print{.top_feature_area .header_overlay .author{margin-top:0.7em}}@media (min-width: 769px), print{.top_feature_area .header_overlay .author{padding:0.3em 0.5em 0.5em 0.7em;max-width:360px;margin-top:1em}}@media (min-width: 1024px), print{.top_feature_area .header_overlay .author{padding:0.4em 0.6em 0.6em 0.8em;max-width:400px;margin-top:1.5em}}@media (min-width: 1200px){.top_feature_area .header_overlay .author{margin-top:1.8em}}.top_feature_area .header_overlay .author p{color:white;margin:0;font-size:0.8em}@media (min-width: 769px), print{.top_feature_area .header_overlay .author p{font-size:0.95em}}.top_feature_area .article_title{margin-bottom:0.9em}.top_feature_area .category_title,.top_feature_area .homepage_carousel .floating_text_area .category_title,.homepage_carousel .floating_text_area .top_feature_area .category_title{color:#707070;margin-bottom:1em}.top_feature_area a.category_title,.top_feature_area .homepage_carousel .floating_text_area a.category_title,.homepage_carousel .floating_text_area .top_feature_area a.category_title{color:#257cdf}.top_feature_area .feature_header:first-child{width:94%;max-width:600px;margin-left:auto;margin-right:auto;padding:3em 0 1.7em}@media (min-width: 769px), print{.top_feature_area .feature_header:first-child{width:80%}}@media (min-width: 1200px){.top_feature_area .feature_header:first-child{width:55%}}@media (min-width: 480px){.top_feature_area .feature_header:first-child{padding:4em 0 2em}}.top_feature_area .feature_header:first-child:after{content:"";display:block;height:1px;width:56%;border-bottom:5px solid;max-width:200px;margin:1.7em auto 0.3em}.top_feature_area .feature_header.no_main_image{padding-bottom:.9em}.top_feature_area .release_date{text-transform:none;color:#222;margin-left:0.1em}.top_feature_area .header_overlay+.feature_header{width:94%;max-width:600px;margin-left:auto;margin-right:auto;text-align:left}@media (min-width: 769px), print{.top_feature_area .header_overlay+.feature_header{width:80%}}@media (min-width: 1200px){.top_feature_area .header_overlay+.feature_header{width:55%}}.top_feature_area figure.lede.full_width figcaption{margin:.8em;text-align:left;font-size:1em}span.mb_anchor_title{display:block;position:relative;height:0;visibility:hidden}span.mb_anchor_title a.mb_anchor{position:absolute}@media not print{p 
span.mb_anchor_title:first-child:last-child{margin-top:-1em}}.feature_pages .wysiwyg_content .mb_expand{width:100%;max-width:none}.feature_pages .wysiwyg_content .mb_expand .expandable_element_link{width:94%;max-width:600px;margin-left:auto;margin-right:auto;display:block}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .mb_expand .expandable_element_link{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .mb_expand .expandable_element_link{width:55%}}.feature_pages .wysiwyg_content .mb_expand .expandable_element{display:none;max-width:none;width:100%}.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:94%;max-width:600px;margin-left:auto;margin-right:auto}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:80%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .mb_expand .expandable_element>*{width:55%}}.countdown .unit{font-weight:300;display:inline-block;position:relative;padding:0 9px 0 0;vertical-align:middle;text-align:center}.countdown .unit+.unit:before{content:" : ";position:absolute;left:-7px;top:-3px}.countdown .unit:first-of-type{padding-left:0}.countdown .unit:last-of-type{padding-right:0}.countdown .unit span{font-weight:600;padding:0 1em;clear:both;display:block;font-size:11px;text-transform:uppercase;text-align:center;margin-bottom:.3rem}.countdown .completed{font-size:1.8em;font-weight:300;margin-top:3px}.feature_pages .countdown,.content_page .countdown{border-top:1px solid #E8E8E8;border-bottom:1px solid #E8E8E8;text-align:center;padding:0.7em 0 0.9em;margin-top:2.7em;margin-bottom:2.7em}.feature_pages .countdown>div,.content_page .countdown>div{display:inline-block;vertical-align:middle}.feature_pages #primary_column .countdown_header,.content_page #primary_column .countdown_header{margin-right:.8em;margin-left:.8em}.feature_pages .countdown_title,.content_page .countdown_title{width:100%}@media (min-width: 600px), print{.feature_pages .countdown_title,.content_page .countdown_title{width:auto;margin-right:.8em}}#secondary_column .countdown{white-space:normal;display:block;margin-bottom:7.14286%;text-align:left}#secondary_column .countdown .countdown_header{display:block}#secondary_column .countdown .countdown_title{margin-right:0;width:100%;text-align:left}#secondary_column .countdown .countdown_end{text-align:left}#explore_overlay .countdown{border-top-color:#212121;border-bottom-color:#212121}.curtain_module{margin:3em 0;clear:both}.curtain_module .curtain_caption_container{background-color:#eee;padding:1em}.curtain_module .curtain_title{margin:0 0 .5em 0}.curtain_module .curtain_subtitle{color:#777777;margin:0 0 0.5em 0}.feature_pages .curtain_module{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .curtain_module{max-width:600px}}.feature_pages .curtain_module.full-bleed,.feature_pages .curtain_module.full_width,.feature_pages .curtain_module.wide,.feature_pages .curtain_module.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .curtain_module.full-bleed,.feature_pages .curtain_module.full_width,.feature_pages .curtain_module.wide,.feature_pages .curtain_module.parallax{margin-top:5em;margin-bottom:5em}}.feature_pages .curtain_module.column-width{max-width:94%;margin-top:3em;margin-bottom:3em;clear:both}@media (min-width: 600px), print{.feature_pages .curtain_module.column-width{max-width:600px}}.feature_pages .curtain_module.full-bleed{width:100%;max-width:none}.feature_pages 
.curtain_module.full-bleed figcaption{margin:.8em .8em 0 .8em}.feature_pages .curtain_module.full_width{clear:both}@media (min-width: 769px), print{.feature_pages .curtain_module.full_width{width:94%;max-width:600px;margin-left:auto;margin-right:auto}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.feature_pages .curtain_module.full_width{width:80%}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.feature_pages .curtain_module.full_width{width:55%}}.feature_pages .curtain_module.wide{width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages .curtain_module.wide{width:95%}}@media (min-width: 769px), print{.feature_pages .curtain_module.column-width{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages .curtain_module.column-width{max-width:calc(600px + 10%)}}@media (min-width: 1200px){.feature_pages .curtain_module.column-width{max-width:calc(600px + 15%)}}.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{max-width:94%}@media (min-width: 600px), print{.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:50%;max-width:50%}}@media (min-width: 769px), print{.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:27%;max-width:27%}}@media (min-width: 1700px){.feature_pages .curtain_module.left,.feature_pages .curtain_module.right{width:25%;max-width:25%}}@media (min-width: 600px), print{.feature_pages .curtain_module.left{float:left;margin:1em 2.5em 1.5em 0;margin-left:3%}}@media (min-width: 1200px){.feature_pages .curtain_module.left{margin-left:15%}}@media (min-width: 1700px){.feature_pages .curtain_module.left{margin-left:20%}}@media (min-width: 480px){.feature_pages .curtain_module.right{float:right;margin:1em 0 1.5em 2.5em;margin-right:3%}}@media (min-width: 1200px){.feature_pages .curtain_module.right{margin-right:15%}}@media (min-width: 1700px){.feature_pages .curtain_module.right{margin-right:20%}}.feature_pages .curtain_module.parallax_module{position:relative;overflow:hidden;z-index:10;padding-bottom:0;width:100%;max-width:none}.feature_pages .curtain_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.feature_pages .curtain_module.parallax_module .caption{font-size:.88em}}.feature_pages .curtain_module.parallax_module img{height:auto !important}.feature_pages .curtain_module.parallax_module .window{width:100%;height:auto;position:absolute;overflow:hidden;padding:2em}.feature_pages .curtain_module.parallax_module .window.mobile{height:auto;min-height:100%}.feature_pages .curtain_module.parallax_module .window .featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}@media (min-width: 769px), print{.feature_pages .curtain_module.parallax_module .window .featured_image{position:absolute}}.explore_overlay_page .curtain_module .curtain_caption_container{background-color:#232323}.wysiwyg_content .related_content_module,#secondary_column .related_content_module{font-weight:700}.wysiwyg_content .related_content_module ul,#secondary_column .related_content_module ul{margin:0}.wysiwyg_content .related_content_module li,#secondary_column .related_content_module li{padding:1em 0;border-bottom:1px solid #E5E5E5}.wysiwyg_content .related_content_module li:first-child,#secondary_column .related_content_module li:first-child{border-top:1px solid #E5E5E5}.wysiwyg_content .related_content_module 
.module_title,#secondary_column .related_content_module .module_title{font-size:1.2em;text-align:left;margin-top:0}.wysiwyg_content .related_content_module .list_image,#secondary_column .related_content_module .list_image{width:25%;display:inline-block}.wysiwyg_content .related_content_module .list_image+.list_text,#secondary_column .related_content_module .list_image+.list_text{width:67%;position:relative;display:inline-block;margin-left:4%;vertical-align:middle}.wysiwyg_content .related_content_module{max-width:100%;margin-top:1.4em;margin-bottom:1.4em}@media (min-width: 769px), print{.wysiwyg_content .related_content_module{margin-top:2em;margin-bottom:2em}}.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{float:none}@media (min-width: 480px){.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{max-width:50%}}@media (min-width: 1200px){.wysiwyg_content .related_content_module.left,.wysiwyg_content .related_content_module.right{max-width:40%}}@media (min-width: 480px){.wysiwyg_content .related_content_module.left{float:left;margin:1em 2.5em 1.5em 0}}@media (min-width: 480px){.wysiwyg_content .related_content_module.right{float:right;margin:1em 0 1.5em 2.5em}}.wysiwyg_content .related_content_module.full-bleed,.wysiwyg_content .related_content_module.full_width,.wysiwyg_content .related_content_module.wide,.wysiwyg_content .related_content_module.parallax,.wysiwyg_content .related_content_module.column-width{clear:both}.wysiwyg_content .related_content_module.parallax_module{width:100%;position:relative}.wysiwyg_content .related_content_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.wysiwyg_content .related_content_module.parallax_module .caption{font-size:.88em}}.explore_overlay_page .wysiwyg_content .related_content_module.parallax_module .caption{color:#b0b4b9}.wysiwyg_content .related_content_module .sidebar_title,.wysiwyg_content #secondary_column .related_content_module .module_title,#secondary_column .wysiwyg_content .related_content_module .module_title,.wysiwyg_content .right_col .related_content_module .module_title,.right_col .wysiwyg_content .related_content_module .module_title{margin-top:0;font-size:1.5em}.wysiwyg_content .related_content_module.full_width{border:1px solid #D2D2D2;padding:5.26316%}@media (min-width: 600px), print{.wysiwyg_content .related_content_module.full_width{padding:2em}}.wysiwyg_content .related_content_module.full_width li{width:100%}.wysiwyg_content .related_content_module.full_width li:first-child{border-top:none}.wysiwyg_content .related_content_module.full_width li:last-child{border-bottom:transparent 0;padding-bottom:0}.wysiwyg_content .related_content_module.full_width .module_title{margin-top:0;font-size:1.5em}.feature_pages .wysiwyg_content .related_content_module{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module{max-width:600px}}.feature_pages .wysiwyg_content .related_content_module.full-bleed,.feature_pages .wysiwyg_content .related_content_module.full_width,.feature_pages .wysiwyg_content .related_content_module.wide,.feature_pages .wysiwyg_content .related_content_module.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.full-bleed,.feature_pages .wysiwyg_content .related_content_module.full_width,.feature_pages .wysiwyg_content 
.related_content_module.wide,.feature_pages .wysiwyg_content .related_content_module.parallax{margin-top:5em;margin-bottom:5em}}.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:94%;margin-top:3em;margin-bottom:3em;clear:both}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:600px}}.feature_pages .wysiwyg_content .related_content_module.full-bleed{width:100%;max-width:none}.feature_pages .wysiwyg_content .related_content_module.full-bleed figcaption{margin:.8em .8em 0 .8em}.feature_pages .wysiwyg_content .related_content_module.full_width{clear:both}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.full_width{width:94%;max-width:600px;margin-left:auto;margin-right:auto}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px), print and (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.full_width{width:80%}}@media (min-width: 769px) and (min-width: 1200px), print and (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.full_width{width:55%}}.feature_pages .wysiwyg_content .related_content_module.wide{width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.wide{width:95%}}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 10%)}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.column-width{max-width:calc(600px + 15%)}}.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{max-width:94%}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:50%;max-width:50%}}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:27%;max-width:27%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content .related_content_module.left,.feature_pages .wysiwyg_content .related_content_module.right{width:25%;max-width:25%}}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .related_content_module.left{float:left;margin:1em 2.5em 1.5em 0;margin-left:3%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.left{margin-left:15%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content .related_content_module.left{margin-left:20%}}@media (min-width: 480px){.feature_pages .wysiwyg_content .related_content_module.right{float:right;margin:1em 0 1.5em 2.5em;margin-right:3%}}@media (min-width: 1200px){.feature_pages .wysiwyg_content .related_content_module.right{margin-right:15%}}@media (min-width: 1700px){.feature_pages .wysiwyg_content .related_content_module.right{margin-right:20%}}.feature_pages .wysiwyg_content .related_content_module.parallax_module{position:relative;overflow:hidden;z-index:10;padding-bottom:0;width:100%;max-width:none}.feature_pages .wysiwyg_content .related_content_module.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.parallax_module 
.caption{font-size:.88em}}.feature_pages .wysiwyg_content .related_content_module.parallax_module img{height:auto !important}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window{width:100%;height:auto;position:absolute;overflow:hidden;padding:2em}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window.mobile{height:auto;min-height:100%}.feature_pages .wysiwyg_content .related_content_module.parallax_module .window .featured_image{z-index:9;top:0;left:0;height:100%;overflow:hidden}@media (min-width: 769px), print{.feature_pages .wysiwyg_content .related_content_module.parallax_module .window .featured_image{position:absolute}}.feature_pages .wysiwyg_content .related_content_module .module_title{margin-bottom:0.8em}.vital_signs .related_content_module{font-weight:normal;margin-bottom:1em}.vital_signs .related_content_module li{border:none;padding:0.4em 0;font-size:0.9em}.vital_signs .related_content_module .module_title{font-size:1.2em;margin-bottom:.4em}.vital_signs .related_content_module .list_image{display:none}.vital_signs .related_content_module .list_text{width:100%;float:none;margin-left:10px}.vital_signs .related_content_module .list_text:before{content:"›";color:#42a0f2;margin-left:-10px}.explore_overlay_page .related_content_module.full_width{border-color:#353535}.explore_overlay_page .related_content_module ul li{border-bottom:1px solid #212121}.explore_overlay_page .related_content_module ul li:first-child{border-top:1px solid #212121}.carousel_module_container{width:100%;max-width:100%;margin-top:1.4em;margin-bottom:1.4em;clear:both}@media (min-width: 769px), print{.carousel_module_container{margin-top:2em;margin-bottom:2em}}.carousel_module_container.left,.carousel_module_container.right{float:none}@media (min-width: 480px){.carousel_module_container.left,.carousel_module_container.right{max-width:50%}}@media (min-width: 1200px){.carousel_module_container.left,.carousel_module_container.right{max-width:40%}}@media (min-width: 480px){.carousel_module_container.left{float:left;margin:1em 2.5em 1.5em 0}}@media (min-width: 480px){.carousel_module_container.right{float:right;margin:1em 0 1.5em 2.5em}}.carousel_module_container.full-bleed,.carousel_module_container.full_width,.carousel_module_container.wide,.carousel_module_container.parallax,.carousel_module_container.column-width{clear:both}.carousel_module_container.parallax_module{width:100%;position:relative}.carousel_module_container.parallax_module .caption{margin:.8em .8em 0 .8em;font-size:.8em;color:#5a6470}@media (min-width: 769px), print{.carousel_module_container.parallax_module .caption{font-size:.88em}}.explore_overlay_page .carousel_module_container.parallax_module .caption{color:#b0b4b9}.carousel_module_container:last-child{margin-bottom:0}.carousel_module_container .over_carousel_header .module_title{margin-top:0}.carousel_module_container .carousel_module{height:275px;width:100%}.carousel_module_container .carousel_module .gradient_container_top{display:none}.carousel_module_container .carousel_module.medium_mid{height:500px}.carousel_module_container .carousel_module.medium_large,.carousel_module_container .carousel_module.large{height:600px}.carousel_module_container .carousel_module.xlarge{height:700px}.carousel_module_container .carousel_module.xxlarge{height:750px}.carousel_module_container .carousel_module .master-slider{width:100%;height:100%}.carousel_module_container .carousel_module .floating_text_area header{position:relative;margin-bottom:0;padding:0 
22px}.carousel_module_container .carousel_module .floating_text_area header:after{background:url("https://mars.nasa.gov/assets/[email protected]") no-repeat;content:"";position:absolute;right:0;top:.9em;background-size:12px;height:7.5px;width:12px}.carousel_module_container .carousel_module .media_feature_title{color:white;font-size:1.4em;margin:0;padding-right:24px}.carousel_module_container .carousel_module .media_feature_title a{color:white;text-decoration:none}.carousel_module_container .carousel_module .media_feature_title:hover{cursor:pointer}.carousel_module_container .carousel_module .media_feature_title:after{background:url("https://mars.nasa.gov/assets/[email protected]") no-repeat;content:"";margin-left:14px;background-size:12px;height:7.5px;width:12px;display:inline-block;margin-right:-24px}.carousel_module_container .carousel_module .media_feature_title.hide_expand_arrow{cursor:default}.carousel_module_container .carousel_module .media_feature_title.hide_expand_arrow:after{display:none}.carousel_module_container .carousel_module .subtitle{font-size:1.1em;margin:0.4em 0 .4em}.carousel_module_container .carousel_module .description{display:block;max-height:130px;overflow-y:auto}.carousel_module_container .carousel_module .description::-webkit-scrollbar{width:6px}.carousel_module_container .carousel_module .description::-webkit-scrollbar-thumb{background-color:rgba(107,107,107,0.6)}.carousel_module_container .carousel_module .description::-webkit-scrollbar-track{background-color:rgba(157,157,157,0.4)}.carousel_module_container .carousel_module .description a{color:#69B9FF}.carousel_module_container .carousel_module .description p{color:inherit}.carousel_module_container .carousel_module .description .more_link{margin-top:0.4rem;margin-bottom:.5rem;display:block}.carousel_module_container .carousel_module.medium_large .description,.carousel_module_container .carousel_module.large .description,.carousel_module_container .carousel_module.xlarge .description,.carousel_module_container .carousel_module.xxlarge .description{max-height:none;overflow:hidden;margin-bottom:0;display:none}.carousel_module_container .carousel_module.medium_large .floating_text_area.open header,.carousel_module_container .carousel_module.large .floating_text_area.open header,.carousel_module_container .carousel_module.xlarge .floating_text_area.open header,.carousel_module_container .carousel_module.xxlarge .floating_text_area.open header{margin-bottom:.7em}.carousel_module_container .carousel_module.medium_large .floating_text_area.open header .media_feature_title:after,.carousel_module_container .carousel_module.large .floating_text_area.open header .media_feature_title:after,.carousel_module_container .carousel_module.xlarge .floating_text_area.open header .media_feature_title:after,.carousel_module_container .carousel_module.xxlarge .floating_text_area.open header .media_feature_title:after{transform:rotate(180deg)}.carousel_module_container .carousel_module.small .floating_text_area,.carousel_module_container .carousel_module.medium .floating_text_area,.carousel_module_container .carousel_module.medium_mid .floating_text_area{background:linear-gradient(transparent, rgba(0,0,0,0.6));background-size:100%;background-repeat:no-repeat;background-position:bottom}.carousel_module_container .carousel_module.small header,.carousel_module_container .carousel_module.medium header,.carousel_module_container .carousel_module.medium_mid header{cursor:pointer;margin-bottom:.4em}.carousel_module_container 
.carousel_module.small .description,.carousel_module_container .carousel_module.medium .description,.carousel_module_container .carousel_module.medium_mid .description{display:none;cursor:pointer}.carousel_module_container .carousel_module.small .media_feature_title:after,.carousel_module_container .carousel_module.medium .media_feature_title:after,.carousel_module_container .carousel_module.medium_mid .media_feature_title:after{display:none}.carousel_module_container .carousel_module.small .gradient_container_bottom,.carousel_module_container .carousel_module.medium .gradient_container_bottom,.carousel_module_container .carousel_module.medium_mid .gradient_container_bottom{display:none}.carousel_module_container .carousel_module .floating_text_area{width:100%;padding:2em 1.4em 2.8em;bottom:0;text-align:center;margin-left:auto;margin-right:auto;color:white}.carousel_module_container .carousel_module.medium_large .floating_text_area,.carousel_module_container .carousel_module.large .floating_text_area,.carousel_module_container .carousel_module.xlarge .floating_text_area,.carousel_module_container .carousel_module.xxlarge .floating_text_area{padding:1.4em;text-align:left;bottom:5em;background-color:black;width:auto;max-width:500px}.carousel_module_container .carousel_module.medium_large .floating_text_area.left,.carousel_module_container .carousel_module.medium_large .floating_text_area.bottom_left,.carousel_module_container .carousel_module.large .floating_text_area.left,.carousel_module_container .carousel_module.large .floating_text_area.bottom_left,.carousel_module_container .carousel_module.xlarge .floating_text_area.left,.carousel_module_container .carousel_module.xlarge .floating_text_area.bottom_left,.carousel_module_container .carousel_module.xxlarge .floating_text_area.left,.carousel_module_container .carousel_module.xxlarge .floating_text_area.bottom_left{left:5%}.carousel_module_container .carousel_module.medium_large .floating_text_area.right,.carousel_module_container .carousel_module.medium_large .floating_text_area.bottom_right,.carousel_module_container .carousel_module.large .floating_text_area.right,.carousel_module_container .carousel_module.large .floating_text_area.bottom_right,.carousel_module_container .carousel_module.xlarge .floating_text_area.right,.carousel_module_container .carousel_module.xlarge .floating_text_area.bottom_right,.carousel_module_container .carousel_module.xxlarge .floating_text_area.right,.carousel_module_container .carousel_module.xxlarge .floating_text_area.bottom_right{right:5%}.carousel_module_container .carousel_module.medium_large .floating_text_area header,.carousel_module_container .carousel_module.large .floating_text_area header,.carousel_module_container .carousel_module.xlarge .floating_text_area header,.carousel_module_container .carousel_module.xxlarge .floating_text_area header{padding:0}.carousel_module_container .carousel_module.medium_large .floating_text_area header:after,.carousel_module_container .carousel_module.large .floating_text_area header:after,.carousel_module_container .carousel_module.xlarge .floating_text_area header:after,.carousel_module_container .carousel_module.xxlarge .floating_text_area header:after{content:none}.carousel_module_container .carousel_module .floating_text_area.open header:after{transform:rotate(180deg)}.carousel_module_container .carousel_module .ms-nav-prev,.carousel_module_container .carousel_module .ms-nav-next{margin-top:-40px}.carousel_module_container .carousel_module 
.ms-slide-bgvideocont{background-color:#000}.carousel_module_container .carousel_module .ms-slide-bgvideocont video{max-width:none}.carousel_module_container .carousel_module .ms-nav-next,.carousel_module_container .carousel_module .ms-nav-prev{display:none}.carousel_module_container .carousel_module.medium_mid .ms-nav-next,.carousel_module_container .carousel_module.medium_mid .ms-nav-prev,.carousel_module_container .carousel_module.medium_large .ms-nav-next,.carousel_module_container .carousel_module.medium_large .ms-nav-prev,.carousel_module_container .carousel_module.large .ms-nav-next,.carousel_module_container .carousel_module.large .ms-nav-prev,.carousel_module_container .carousel_module.xlarge .ms-nav-next,.carousel_module_container .carousel_module.xlarge .ms-nav-prev,.carousel_module_container .carousel_module.xxlarge .ms-nav-next,.carousel_module_container .carousel_module.xxlarge .ms-nav-prev{display:block}.no-touchevents .carousel_module_container .carousel_module.medium .ms-nav-next,.no-touchevents .carousel_module_container .carousel_module.medium .ms-nav-prev,.no-touchevents .carousel_module_container .carousel_module.small .ms-nav-next,.no-touchevents .carousel_module_container .carousel_module.small .ms-nav-prev{display:block}.carousel_module_container .carousel_module .ms-nav-prev,.carousel_module_container .carousel_module .ms-nav-next{width:40px;height:80px;margin-top:-60px}.carousel_module_container .carousel_module .ms-nav-prev{background:url("https://mars.nasa.gov/assets/arrow_left_darktheme.png");background-size:40px 95px;background-color:rgba(32,32,32,0.9);background-position:0;left:0;border-top-right-radius:6px;border-bottom-right-radius:6px}.carousel_module_container .carousel_module .ms-nav-next{background:url("https://mars.nasa.gov/assets/arrow_right_darktheme.png");background-size:40px 95px;background-color:rgba(32,32,32,0.9);background-position:0;right:0;border-top-left-radius:6px;border-bottom-left-radius:6px}.carousel_module_container .carousel_module .ms-bullets{left:0;right:0;margin:0 auto;bottom:1.2em;z-index:10}.carousel_module_container .carousel_module.medium_mid .ms-bullets{bottom:1.5em}.carousel_module_container .carousel_module.medium_large .ms-bullets,.carousel_module_container .carousel_module.large .ms-bullets,.carousel_module_container .carousel_module.xlarge .ms-bullets,.carousel_module_container .carousel_module.xxlarge .ms-bullets{bottom:2.2em}.carousel_module_container .carousel_module .ms-bullet{background-color:white;background-image:none;border-radius:50% 50% 50% 50%;height:8px;width:8px;opacity:0.5;margin:0 10px}.carousel_module_container .carousel_module .ms-bullet:hover{opacity:1.0}.carousel_module_container .carousel_module .ms-bullet-selected{opacity:1.0}.carousel_module_container .carousel_module .ms-slide-layers{left:0 !important}.carousel_module_container .carousel_module .ms-container,.carousel_module_container .carousel_module .ms-slide-layers{max-width:none !important}.feature_pages .wysiwyg_content .carousel_module_container{width:94%;max-width:100%;margin:3em auto;float:none}@media (min-width: 600px), print{.feature_pages .wysiwyg_content .carousel_module_container{max-width:600px}}.feature_pages .wysiwyg_content .carousel_module_container.full-bleed,.feature_pages .wysiwyg_content .carousel_module_container.full_width,.feature_pages .wysiwyg_content .carousel_module_container.wide,.feature_pages .wysiwyg_content .carousel_module_container.parallax{clear:both}@media (min-width: 600px), print{.feature_pages .wysiwyg_content 
li{position:relative;font-size:.85em;margin-bottom:1em}.wysiwyg_content .footnotes li .footnote h2{margin:0;color:#222;font-size:1.2em}.wysiwyg_content .footnotes li p{margin:0.4em 0 0.6em}.wysiwyg_content .footnote{font-size:.8em}#secondary_column .footnote{font-size:.8em}.primary_media_feature{margin-bottom:0}@media (min-width: 769px), print{.primary_media_feature{padding:0}}.primary_media_feature.single{position:relative;margin-bottom:0;overflow:hidden}.primary_media_feature.single .feature_container{height:300px;background-size:cover;position:relative;z-index:3;background-position:center}@media (min-width: 769px), print{.primary_media_feature.single .feature_container{height:700px}}.primary_media_feature.single.video .play{display:none;position:absolute;top:47%;left:47%;top:calc(50%- 30px);left:calc(50%- 30px);top:-webkit-calc(50% - 30px);left:-webkit-calc(50% - 30px);width:60px;height:60px;padding-top:0;cursor:pointer;background:url("https://mars.nasa.gov/assets/play-button.png") 0 0 no-repeat;z-index:10}.primary_media_feature.single.video .player{width:100%;height:100%;position:absolute;top:0;left:0;z-index:2}.primary_media_feature.single .video_header_overlay{position:absolute;bottom:2em;margin:0 auto;left:0;right:0;width:auto;text-align:center;color:white;z-index:5}.primary_media_feature.single .video_header_overlay .media_feature_title{font-size:3em}.custom_banner_container{position:relative}.faq_section h2{margin-top:0}.faq_section ul.q_and_a{margin-bottom:1em}.faq_section ul.q_and_a .question{margin-bottom:1em}.faq_section ul.q_and_a .question:last-child{margin-bottom:0.6em}.faq_section ul.q_and_a .title_container{cursor:pointer}.faq_section ul.q_and_a .title{font-weight:600;font-size:1.1em}.faq_section ul.q_and_a .text.answer{visibility:hidden;position:absolute;left:-9999px}.faq_section ul.q_and_a .text.answer.open{visibility:visible;position:relative;left:0}.faq_section hr:last-child{display:none}.fullscreen_element{position:absolute;top:7px;right:7px;cursor:pointer;background-color:rgba(0,0,0,0.5);width:50px;height:50px;border-radius:5px;z-index:10}@media (min-width: 769px), print{.fullscreen_element{top:20px;right:20px}}.fullscreen_element .fullscreen-icon{height:25px;width:25px;background:url("https://mars.nasa.gov/assets/[email protected]") 1px -25px;background-size:25px;margin:13px 0 0 13px}.fullscreen_element:hover .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px 0px;background-size:25px}.fullscreen_element.fullscreen-mode .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px -74px;background-size:25px}.fullscreen_element.fullscreen-mode:hover .fullscreen-icon{background:url("https://mars.nasa.gov/assets/[email protected]") 1px -49px;background-size:25px}#timeline-embed:-webkit-full-screen{height:100%;width:100%;min-height:none;max-height:none}#timeline-embed:-ms-fullscreen{height:100%;width:100%;min-height:none;max-height:none}#timeline-embed:fullscreen{height:100%;width:100%;min-height:none;max-height:none}@media (min-width: 1024px), print{.double_teaser{padding-left:12%;padding-right:12%}}.double_teaser .column{width:100%}@media (min-width: 600px), print{.double_teaser .column{box-sizing:border-box;width:45.83333%;float:left;padding-left:1.66667%;padding-right:1.66667%}}.double_teaser .column+.column{margin-top:2em}@media (min-width: 600px), print{.double_teaser .column+.column{margin-top:0;width:43.96552%;float:right;margin-right:0}}.double_teaser header{margin-bottom:1.3em}.double_teaser 
footer{text-align:left}.double_teaser .module_title_small{font-size:1.5em}@media (min-width: 600px), print{.double_teaser .module_title_small{font-size:1.9em}}.double_teaser .img_area{margin-bottom:1em;float:none;width:100%}.double_teaser .item_list{margin-bottom:1em}.double_teaser .item_list li{border-bottom:1px solid #BEBEBE;padding:3.44828% 0}.double_teaser .item_list li:first-child{padding-top:0}.double_teaser .item_list .list_image{width:39.65517%;float:left;margin-right:3.44828%;margin-left:0}@media (min-width: 769px), print{.double_teaser .item_list .list_image{width:31.03448%;float:left;margin-right:3.44828%}}.double_teaser .item_list .list_text{width:56.89655%;float:right;margin-right:0}@media (min-width: 769px), print{.double_teaser .item_list .list_text{width:65.51724%;float:right;margin-right:0}}.double_teaser .item_list .list_text .date{color:#707070;font-size:0.8em;font-weight:500;margin-bottom:0.3em}.double_teaser .item_list .list_text .date span:before{content:" \2022 "}.double_teaser .item_list .list_text .title{font-size:1em;font-weight:500}.insight_page .double_teaser .teaser_container{padding:0 1em}@media (min-width: 1024px), print{.insight_page .double_teaser .teaser_container{padding:0}}.insight_page .double_teaser .teaser_container .module_title{color:#222;text-align:left}.insight_page .double_teaser .teaser_container .upcoming_events footer ul{margin-bottom:1em;margin-left:0}.megasection_nav_present.msl section.module.double_teaser{background:#edecec}.megasection_nav_present.msl section.module.double_teaser h2,.megasection_nav_present.msl section.module.double_teaser a{font-weight:500;text-align:left}.triple_teaser{background-color:white;z-index:11}.triple_teaser .column{width:100%}@media (min-width: 769px), print{.triple_teaser .column{width:31.03448%;float:left}.triple_teaser .column:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.triple_teaser .column:nth-child(3n+2){margin-left:34.48276%;margin-right:-100%;clear:none}.triple_teaser .column:nth-child(3n+3){margin-left:68.96552%;margin-right:-100%;clear:none}}@media (min-width: 1024px), print{.triple_teaser .column{width:28.57143%;float:left}.triple_teaser .column:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.triple_teaser .column:nth-child(3n+2){margin-left:35.71429%;margin-right:-100%;clear:none}.triple_teaser .column:nth-child(3n+3){margin-left:71.42857%;margin-right:-100%;clear:none}}.triple_teaser .column:last-child{margin-bottom:1em}.triple_teaser .column+.column{margin-top:3em}@media (min-width: 769px), print{.triple_teaser .column+.column{margin-top:0}}.triple_teaser header{margin-bottom:1.3em}.triple_teaser .module_title{text-align:left}@media (min-width: 600px), print{.triple_teaser .module_title{font-size:2em}}@media (min-width: 600px), print{.triple_teaser footer{text-align:left}}.triple_teaser footer .detail_link{float:left;clear:both;text-align:left;white-space:nowrap}.triple_teaser .img_area{margin-bottom:1em;float:none;width:100%}.triple_teaser .item_list{margin-bottom:1em}.triple_teaser .item_list li{border-bottom:1px solid #BEBEBE;padding:3.44828% 0}.triple_teaser .item_list li:first-child{padding-top:0}.triple_teaser .item_list li:last-child{border-bottom:none}.triple_teaser .item_list .list_image{width:39.65517%;float:left;margin-right:3.44828%;margin-left:0}@media (min-width: 600px), print{.triple_teaser .item_list .list_image{width:31.03448%;float:left;margin-right:3.44828%}}.triple_teaser .item_list 
.list_text{width:56.89655%;float:right;margin-right:0}@media (min-width: 600px), print{.triple_teaser .item_list .list_text{width:65.51724%;float:right;margin-right:0}}.triple_teaser .item_list .list_text .date{color:#707070;font-size:0.8em;font-weight:500;margin-bottom:0.3em}.triple_teaser .item_list .list_text .date span:before{content:" \2022 "}.triple_teaser .item_list .list_text .title{font-size:1em;font-weight:500}.triple_teaser .upcoming_events .item_list li{padding:3.44828% 0}.triple_teaser .upcoming_events .item_list li:first-child{padding-top:0}.triple_teaser .upcoming_events .item_list .list_text .date{margin-bottom:.7em}.triple_teaser .follow_teaser .text_area{font-weight:300;font-size:1rem}.triple_teaser .follow_teaser .text_area footer{margin-top:2em}ul.item_list{margin-bottom:2em}ul.item_list .list_title{font-size:1.3em;font-weight:700;margin-bottom:.5em}ul.item_list .list_title a{color:#222}ul.item_list .text_only .list_text{width:100%;padding:0}ul.item_list>li hr{margin:0}ul.item_list .list_image{width:37.5%;float:right;margin-right:0;margin-left:4.16667%;margin-bottom:.5em}@media (min-width: 600px), print{ul.item_list .list_image{margin-left:0;margin-bottom:0;width:35.89744%;float:left;margin-right:2.5641%}}@media (min-width: 769px), print{ul.item_list .list_image{width:22.41379%;float:left;margin-right:3.44828%}}@media (min-width: 1024px), print{ul.item_list .list_image{width:31.03448%;float:left;margin-right:3.44828%}}@media (min-width: 600px), print{ul.item_list .list_text{width:61.53846%;float:right;margin-right:0}}@media (min-width: 769px), print{ul.item_list .list_text{width:74.13793%;float:right;margin-right:0}}@media (min-width: 1024px), print{ul.item_list .list_text{width:65.51724%;float:right;margin-right:0}}ul.item_list .list_text h2,ul.item_list .list_text h3,ul.item_list .list_text h4{margin-top:0}ul.item_list .list_content{padding:1em 0}ul.item_list .list_description{margin-top:0}ul.item_list .description .long{display:none}ul.item_list .description .long p:first-of-type{margin-top:0}ul.people.item_list li.person{padding:4.16667% 0}ul.people.item_list li.person:first-child{padding-top:0}ul.people.item_list .person_header{margin-bottom:1.2em}ul.people.item_list .list_title.list_name{padding-top:7%}@media (min-width: 600px), print{ul.people.item_list .list_title.list_name{padding:0}}ul.people.item_list .person_title{font-weight:300}ul.people.item_list .description{clear:both}@media (min-width: 600px), print{ul.people.item_list .description{clear:none}}ul.people.item_list .person+.person{border-top:1px solid #BEBEBE}ul.item_list.text_item_list .list_text{width:100%}ul.item_list.text_item_list .list_text .date{margin-bottom:.3em}ul.item_list.text_item_list a{color:#257cdf}ul.item_list.text_item_list a:hover{text-decoration:underline}ul.item_list.text_item_list .publication_authors{margin-bottom:.4em}ul.item_list.text_item_list .citation{font-size:.85em;margin-bottom:.4em;font-weight:300}ul.item_list.text_item_list .publication_title{font-size:1.1em;font-weight:700;margin-bottom:.4em}ul.item_list.text_item_list .publication_title a{color:#222}ul.item_list.text_item_list .list_title a{color:#222}.explore_overlay_page .feature_pages .wysiwyg_content .item_list_module{margin-left:auto}.feature_pages ul.item_list{margin-left:auto;margin-right:auto;margin:3em auto;width:98%;max-width:none}@media (min-width: 769px), print{.feature_pages ul.item_list{max-width:calc(600px + 6%)}}@media (min-width: 1024px), print{.feature_pages ul.item_list{max-width:calc(600px + 
10%)}}@media (min-width: 1200px){.feature_pages ul.item_list{max-width:calc(600px + 15%)}}@media (min-width: 1200px){.secondary_nav_desktop{overflow-x:auto}}@media (min-width: 1200px){.custom_banner_container .secondary_nav_desktop{overflow-x:visible}}@media (min-width: 1200px){.custom_banner_container .fixed_secondary_nav{overflow-x:auto}}nav.secondary_nav{font-weight:400}nav.secondary_nav .grid_layout{width:100%;padding-left:10px;padding-right:10px;max-width:none}@media (min-width: 600px), print{nav.secondary_nav .grid_layout{padding-left:17px;padding-right:17px}}nav.secondary_nav.secondary_nav_mobile{display:block;width:100%}nav.secondary_nav.secondary_nav_mobile select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0;background-color:#AFB3B9;width:100%;max-width:none;border-radius:0}nav.secondary_nav.secondary_nav_mobile select::-ms-expand{display:none}nav.secondary_nav.secondary_nav_mobile select option{padding:0.5em 1em}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_mobile{display:none}}nav.secondary_nav.secondary_nav_desktop{display:none}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop{padding:0.8em 0 0.9em;display:block;margin:0;background-color:#eee;text-align:center}}nav.secondary_nav.secondary_nav_desktop .section_title{display:none}nav.secondary_nav.secondary_nav_desktop .section_title a{padding-left:0;text-decoration:none}nav.secondary_nav.secondary_nav_desktop li{display:inline-block;position:relative;margin-bottom:0}nav.secondary_nav.secondary_nav_desktop a{color:#777;font-size:1em;font-weight:600;display:block;padding:.3em .3em}@media (min-width: 769px), print{nav.secondary_nav.secondary_nav_desktop a{padding:.3em .6em}}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop a{padding:.3em .9em}}@media (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop a{font-size:1.1em}}.custom_banner_container nav.secondary_nav.secondary_nav_desktop a{color:white}nav.secondary_nav.secondary_nav_desktop ul{white-space:nowrap}nav.secondary_nav.secondary_nav_desktop li.current a,nav.secondary_nav.secondary_nav_desktop li:hover a{text-decoration:none;color:#222}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .grid_layout{display:flex;flex-wrap:nowrap;justify-content:space-between}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{position:fixed;width:100%;top:0;left:0;z-index:100;box-shadow:0 4px 4px -2px rgba(0,0,0,0.15)}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav.secondary_nav_desktop{padding:1em 0 0.8em;white-space:nowrap}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title{display:inline-block;margin-top:3px;margin-right:1.6em;font-size:1.2em;flex-shrink:0}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title a{padding:0;color:#2B2B2B}}@media (min-width: 1200px) and (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title{margin-top:6px}}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav ul{display:inline-block;text-align:right;width:100%}}@media (min-width: 1200px) and (min-width: 1024px), print and 
(min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .5em;font-size:0.9em}}@media (min-width: 1200px) and (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .6em;font-size:0.95em}}@media (min-width: 1200px) and (min-width: 1700px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{padding:.3em .8em;font-size:1em}}@media (min-width: 1200px){nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current a,nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav a:hover,nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav .section_title a:hover{text-decoration:none;color:#2B2B2B}nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li:last-of-type a{padding-right:0}}.custom_banner_container nav.secondary_nav.secondary_nav_desktop,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop{text-align:center;margin:0;background-color:transparent}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li{margin-bottom:6px}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li a,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li a{color:white}.custom_banner_container nav.secondary_nav.secondary_nav_desktop li.current:after,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop li.current:after{bottom:-60px;left:50%;border:solid transparent;content:" ";height:0;width:0;position:absolute;pointer-events:none;border-top-color:black;border-width:20px;margin-left:-20px}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{background-color:#e4e7ec}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after{content:none}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a{color:#fff}.custom_banner_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a:hover,.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li a:hover{color:#2B2B2B}.custom_banner_container nav.secondary_nav.secondary_nav_mobile,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile{text-align:center;padding:0 2.5%}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;margin:0.9em 0 1.1em}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select::-ms-expand,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select::-ms-expand{display:none}.custom_banner_container nav.secondary_nav.secondary_nav_mobile select option,.homepage_feature_container nav.secondary_nav.secondary_nav_mobile select option{padding:0.5em 
1em}.homepage_feature_container nav.secondary_nav{position:absolute;bottom:0;z-index:2}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop{width:100%}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav{bottom:auto}.homepage_feature_container nav.secondary_nav.secondary_nav_desktop.fixed_secondary_nav li.current:after{content:none}#explore_overlay nav.secondary_nav{display:none}.megasection_nav_present.msl nav.secondary_nav.secondary_nav_desktop,.megasection_nav_present.msl nav.secondary_nav.secondary_nav_mobile select{background-color:#e34e41;border:0}.megasection_nav_present.msl nav.secondary_nav.secondary_nav_desktop li.current a,.megasection_nav_present.msl nav.secondary_nav.secondary_nav_mobile select li.current a{color:#2B2B2B}.megasection_nav_present.msl nav.secondary_nav.secondary_nav_desktop li a,.megasection_nav_present.msl nav.secondary_nav.secondary_nav_mobile select li a{color:white}.megasection_nav_present.msl nav.secondary_nav.secondary_nav_desktop li a:hover,.megasection_nav_present.msl nav.secondary_nav.secondary_nav_mobile select li a:hover{color:#2B2B2B}body#facts nav.secondary_nav_mobile select{color:initial;background:#AFB3B9 url(http://localhost:3000/assets/[email protected]) no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px}nav.tertiary_nav{position:relative;margin-top:0.8em;font-weight:400;display:block;text-align:center}@media (min-width: 600px), print{nav.tertiary_nav{text-align:left}}nav.tertiary_nav select{position:relative;padding:.5em 2em .5em 1em;font-size:16px;border:0;height:40px;vertical-align:middle;color:white;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;background:#3b788b url("https://mars.nasa.gov/assets/[email protected]") no-repeat 95% 10px;background-position:right .8em top 10px;background-size:9px;font-weight:700;cursor:pointer;width:100%;border-radius:5px;max-width:304px;font-weight:400;color:black;margin:0 auto;border:1px solid #B3B3B3;border-radius:0;background:#fff url("https://mars.nasa.gov/assets/triangle_down.svg") no-repeat 94% 15px}nav.tertiary_nav select::-ms-expand{display:none}nav.tertiary_nav select option{padding:0.5em 1em}nav.tertiary_nav ul li{position:relative;display:inline-block;vertical-align:middle}nav.tertiary_nav ul li:hover a,nav.tertiary_nav ul li.current a{color:black}nav.tertiary_nav ul li+li:before{content:" | ";padding:0 .5em;vertical-align:middle;font-weight:100;color:#BEBEBE}nav.tertiary_nav ul a{display:inline-block;vertical-align:middle;color:#909090}nav.tertiary_nav ul a:hover{text-decoration:none}@media (min-width: 769px), print{nav.tertiary_nav ul a{font-size:1.2em}}@media (min-width: 1024px), print{nav.tertiary_nav ul a{font-size:1.3em}}@media (min-width: 1200px){nav.tertiary_nav ul a{font-size:1.4em}}.tertiary_nav.feature_tertiary_nav{text-align:center}.tertiary_nav.feature_tertiary_nav ul li{position:relative;display:inline-block;vertical-align:middle}.tertiary_nav.feature_tertiary_nav ul li:hover a,.tertiary_nav.feature_tertiary_nav ul li.current a{color:black}.tertiary_nav.feature_tertiary_nav ul li+li:before{content:" ";padding:0 .5em;vertical-align:middle;font-weight:100;color:#BEBEBE}@media (min-width: 769px), print{.tertiary_nav.feature_tertiary_nav ul a{font-size:1.1em}}@media (min-width: 1024px), print{.tertiary_nav.feature_tertiary_nav ul a{font-size:1.2em}}@media (min-width: 1200px){.tertiary_nav.feature_tertiary_nav ul 
a{font-size:1.3em}}.condense_control{color:#257cdf}.condense_control:hover{text-decoration:underline}.condense_control:before{content:' › ';white-space:nowrap}section.intro{display:block;background-color:#000;position:absolute;top:0;left:0;z-index:99;width:100%;overflow:hidden;height:100vh}section.intro .brand_area{background:url("https://mars.nasa.gov/assets/[email protected]") no-repeat;background-size:100%;z-index:301;position:absolute;top:2%;left:4%;width:60%;height:80%;max-width:500px}@media (min-width: 769px), print{section.intro .brand_area{width:45%;left:2%}}section.intro img{-o-object-fit:cover;object-fit:cover;height:100%;width:100%}@media (min-width: 480px){section.intro img{margin-top:0}}body.intro_screen_visible .vital_signs_menu,body.intro_screen_visible .more_bar{z-index:100}body.intro_screen_visible .vital_signs_menu .overlay_icon{display:none}body.intro_screen_visible section.more_bar .title,body.intro_screen_visible section.more_bar .arrow_down{display:none}body.intro_screen_visible section.more_bar:after{content:"loading...";display:inline-block;padding:0.6em 0;font-size:.9em}html.explore_overlay_open,body.explore_overlay_open{overflow:hidden;width:100%;height:100%;position:fixed;-ms-overflow-style:-ms-autohiding-scrollbar}#explore_overlay{position:fixed;top:0;left:0;height:100vh;width:100%;z-index:1000001;overflow-x:hidden;visibility:hidden;opacity:0;padding:0 0 6px;background-color:rgba(0,0,0,0.9)}.overlay_loaded #explore_overlay{background-color:#000}#explore_overlay.visible{visibility:visible}#explore_overlay .content{position:relative;height:100%;visibility:hidden;opacity:0}#explore_overlay .content.visible{visibility:visible}#explore_overlay .content>iframe{top:0;left:0;position:absolute}#explore_overlay .loading{position:absolute;left:50%;top:42vh;transform:translateX(-50%);width:auto;text-align:center;display:none}#explore_overlay .loading img{width:44px;height:44px}#explore_overlay .loading p{font-family:Whitney, Helvetica, Arial, sans-serif;position:relative;color:#76aee6;font-size:14px;letter-spacing:0.1em}#explore_overlay .loading .spinner div{background:#ccc !important}#explore_overlay .background_area{-webkit-tap-highlight-color:transparent;-webkit-tap-highlight-color:transparent;width:100%;height:100%;position:absolute;top:0;left:0;cursor:pointer;z-index:-1}#explore_overlay.lightbox_overlay{background-color:rgba(0,0,0,0.75);text-align:center;padding:0}#explore_overlay.lightbox_overlay .content{position:fixed;background-color:#1f1f1f;width:95%;height:92% !important;max-width:1400px;margin:2em auto 0;border-radius:4px;left:auto;right:calc(50vw - 47.5%)}@media only screen and (min-width: 1480px){#explore_overlay.lightbox_overlay .content{right:calc(50vw - 697px)}}@media only screen and (max-device-width: 1024px) and (-webkit-min-device-pixel-ratio: 1){#explore_overlay.lightbox_overlay .content{overflow-y:scroll;-webkit-overflow-scrolling:touch}}.overlay_close_button{position:absolute;top:1em;right:1.1em;z-index:1000003;width:40px;height:40px;position:fixed;background:#000;text-decoration:none;text-align:center;line-height:1em;transition:.3s opacity;visibility:hidden;opacity:0}.overlay_close_button.visible{visibility:visible}.no-touchevents .overlay_close_button:hover{opacity:1}.overlay_close_button .close_icon{display:block;height:100%;position:relative}.overlay_close_button .close_icon:before{transform:rotate(-45deg);content:'';position:absolute;height:1px;width:100%;top:calc(50% - .5px);left:0;background:#fff}.overlay_close_button 
.close_icon:after{transform:rotate(45deg);content:'';position:absolute;height:1px;width:100%;top:calc(50% - .5px);left:0;background:#fff}@media (min-width: 769px), print{.overlay_close_button{width:60px;height:60px;top:1.1em;right:1.1em}}@media (min-width: 1700px){.overlay_close_button{width:70px;height:70px;top:1.2em;right:1.2em}}.overlay_close_button.lightbox_overlay{background-color:#1f1f1f;top:2.5em;right:calc(50vw - 45.5%)}@media (min-width: 600px), print{.overlay_close_button.lightbox_overlay{right:calc(50vw - 45%)}}@media (min-width: 769px), print{.overlay_close_button.lightbox_overlay{top:2.6em;width:50px;height:50px}}@media (min-width: 1024px), print{.overlay_close_button.lightbox_overlay{right:calc(50vw - 45.5%)}}@media only screen and (min-width: 1480px){.overlay_close_button.lightbox_overlay{right:calc(50vw - 680px)}}@media (min-width: 1700px){.overlay_close_button.lightbox_overlay{top:2.7em;width:60px;height:60px}}#iframe_overlay,#iframe_overlay body{height:100%;overflow-y:auto;-webkit-overflow-scrolling:touch;font-weight:400}#iframe_overlay{width:1px;min-width:100%;word-wrap:break-word;color:#e4e3e3}#iframe_overlay p,#iframe_overlay .release_date{color:#e4e3e3}#iframe_overlay hr{border-color:#3c3c3c}#iframe_overlay a{color:#98c7fc}#iframe_overlay .header_mask{display:none}#iframe_overlay .explore_overlay_page{padding-bottom:4em}#iframe_overlay .done_btn{text-align:center}#iframe_overlay .done_btn button{color:#6bbed8;margin:2em 0;padding:.3em .7em .4em;background:none;cursor:pointer;letter-spacing:1px;font-weight:300;outline:none;position:relative;font-size:1.8em;border:1px solid #6bbed8;transition:color 200ms, border-color 200ms}#iframe_overlay .done_btn button::after{content:'Close'}#iframe_overlay .done_btn button:hover{color:#82ddf9;border-color:#82ddf9}#iframe_overlay .left_col,#iframe_overlay .right_col{position:relative;float:left}#iframe_overlay .left_col{width:100%}@media (min-width: 769px), print{#iframe_overlay .left_col{width:65%;border-right:1px solid #BEBEBE;padding-right:1em}}@media (min-width: 1200px){#iframe_overlay .left_col{padding-right:3em}}#iframe_overlay .right_col{width:100%}#iframe_overlay .right_col p{color:#868686}#iframe_overlay .right_col p b{color:#222}@media (min-width: 769px), print{#iframe_overlay .right_col{width:35%;padding-left:1em;left:-1px}}@media (min-width: 1200px){#iframe_overlay .right_col{padding-left:3em}}#iframe_overlay .suggested_features{display:none}#iframe_overlay #secondary_column aside.boxed{border-color:#5a5a5a}#iframe_overlay #secondary_column .related_content_module{border-color:#5a5a5a}#iframe_overlay #secondary_column .related_content_module li{border-color:#3c3c3c;padding:.8em 0}#iframe_overlay table th{background-color:#1f1f1f}#iframe_overlay table td{background-color:#000}#iframe_overlay table.mb_table td{background-color:transparent}#iframe_overlay table.mb_table th{background-color:#1f1f1f;color:#e4e3e3}#iframe_overlay table.mb_table tr:nth-child(even){background-color:#1f1f1f}#iframe_overlay table.mb_table tr:nth-child(odd){background-color:#000}#iframe_overlay table.mb_table td{border:1px solid #505050}#iframe_overlay table.mb_table td:first-child{border-left:transparent}#iframe_overlay table.mb_table td:last-child{border-right:transparent}#iframe_overlay .table_wrapper>div::-webkit-scrollbar-track{box-shadow:0 0 2px rgba(0,0,0,0.15) inset;background:#f0f0f0}#iframe_overlay .table_wrapper>div::-webkit-scrollbar-thumb{background:#ccc}#iframe_overlay .table_wrapper.has-scroll:after{box-shadow:-5px 0 10px 
rgba(0,0,0,0.25)}#iframe_overlay .article_nav{display:none}.info_tabs_module{position:relative;color:black;background:url("https://mars.nasa.gov/assets/mars_landscape.jpg") center top no-repeat;background-size:cover}@media (min-width: 769px), print{.info_tabs_module{height:620px;background-position:center bottom}}.info_tabs_module div[data-react-class="InfoTabs"]{height:100%}.info_tabs_module .grid_layout{padding-bottom:10em}@media (min-width: 769px), print{.info_tabs_module .grid_layout{padding-bottom:0}}.info_tabs_module .gradient_container_bottom{display:none}.info_tabs_module .info_tabs{padding:2.7em 0 5em;height:100%}@media (min-width: 769px), print{.info_tabs_module .info_tabs{padding:5.3em 0 5em}}.info_tabs_module .col1,.info_tabs_module .col2{width:100%}@media (min-width: 769px), print{.info_tabs_module .col1,.info_tabs_module .col2{width:48%}}.info_tabs_module .col2{display:none;float:right;margin-top:2rem}@media (min-width: 769px), print{.info_tabs_module .col2{display:block;margin-top:0}}.info_tabs_module .col1{float:left}.info_tabs_module .info_tabs_header{width:100%;margin-bottom:1.8em;display:inline-block;text-align:center}@media (min-width: 769px), print{.info_tabs_module .info_tabs_header{text-align:left;margin-bottom:3em}}.info_tabs_module .info_tabs_header h2{font-size:1.69em;margin-bottom:0em;margin-top:1.2em;font-weight:300}@media (min-width: 600px), print{.info_tabs_module .info_tabs_header h2{font-size:1.95em;margin-bottom:0em}}@media (min-width: 769px), print{.info_tabs_module .info_tabs_header h2{font-size:2.21em;margin-bottom:0em}}@media (min-width: 1024px), print{.info_tabs_module .info_tabs_header h2{font-size:2.34em;margin-bottom:0em}}@media (min-width: 1200px){.info_tabs_module .info_tabs_header h2{font-size:2.47em;margin-bottom:0em}}@media (min-width: 769px), print{.info_tabs_module .info_tabs_header h2{margin-top:0}}.info_tabs_module .info_tabs_links{padding-left:2rem;position:relative;z-index:2;font-size:1.3rem;font-weight:300;width:90%}@media (min-width: 769px), print{.info_tabs_module .info_tabs_links{width:auto}}.info_tabs_module .info_tabs_links li{cursor:pointer;position:relative;margin-bottom:0.3em}.info_tabs_module .info_tabs_links .tab_title{margin-bottom:0.2em;letter-spacing:-0.02em}.info_tabs_module .info_tabs_links .info_tabs_link:before{content:'';width:0;height:0;border-top:7px solid transparent !important;border-bottom:7px solid transparent !important;border-left:11px solid #943b2b;display:inline-block;transform:none;position:absolute;left:-20px;top:7px;opacity:0;transition:all 200ms}.no-touchevents .info_tabs_module .info_tabs_links .info_tabs_link:not(.active):hover:before,.info_tabs_module .info_tabs_links .active:before{opacity:1;left:-28px}.no-touchevents .info_tabs_module .info_tabs_links .info_tabs_link:not(.active):hover:before{opacity:.7}.info_tabs_module .info_tabs_detail,.info_tabs_module .active .mobile_tab_detail{float:right;max-height:320px;overflow-y:auto;padding-right:1em;position:relative;z-index:2;font-weight:300;width:100%;-webkit-overflow-scrolling:touch}.info_tabs_module .info_tabs_detail::-webkit-scrollbar,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar{width:5px}.info_tabs_module .info_tabs_detail::-webkit-scrollbar-thumb,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar-thumb{background-color:rgba(107,107,107,0.6)}.info_tabs_module .info_tabs_detail::-webkit-scrollbar-track,.info_tabs_module .active .mobile_tab_detail::-webkit-scrollbar-track{background-color:rgba(157,157,157,0.4)}@media 
(min-width: 769px), print{.info_tabs_module .info_tabs_detail,.info_tabs_module .active .mobile_tab_detail{padding-right:3em;top:0.8em}}.info_tabs_module .info_tabs_detail .info_tabs_title,.info_tabs_module .active .mobile_tab_detail .info_tabs_title{display:none}@media (min-width: 769px), print{.info_tabs_module .info_tabs_detail .info_tabs_title,.info_tabs_module .active .mobile_tab_detail .info_tabs_title{display:block;font-size:1.1em;margin-bottom:1em;margin-top:0;text-transform:uppercase}}.info_tabs_module .mobile_tab_detail{display:none}.info_tabs_module .active .mobile_tab_detail{font-size:0.95rem;float:none;display:block}@media (min-width: 769px), print{.info_tabs_module .active .mobile_tab_detail{display:none}}.info_tabs_module .active .tab_title{margin-bottom:.5em}@media (min-width: 769px), print{.info_tabs_module .active .tab_title{margin-bottom:.2em}}.info_tabs_module .info_tabs_content *:first-child,.info_tabs_module .active .mobile_tab_detail *:first-child{margin-top:0}.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .info_tabs_content>p,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4,.info_tabs_module .active .mobile_tab_detail>p{margin:1em 0}.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{font-size:1.1em;margin-top:2em}@media (min-width: 769px), print{.info_tabs_module .info_tabs_content>h2,.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h2,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{margin-top:0}}.info_tabs_module .info_tabs_content>h3,.info_tabs_module .info_tabs_content>h4,.info_tabs_module .active .mobile_tab_detail>h3,.info_tabs_module .active .mobile_tab_detail>h4{margin-top:1.5em}.info_tabs_module .info_tabs_content>p:last-child{margin-bottom:1em}.info_tabs_module .info_tabs_content ol{margin-bottom:0}.info_tabs_module .active .mobile_tab_detail>p:last-child{margin-bottom:0}.info_tabs_module .less_option,.info_tabs_module .more_option{display:inline-block;font-size:.95rem;margin-bottom:0.5em}@media (min-width: 769px), print{.info_tabs_module .less_option,.info_tabs_module .more_option{display:none}}.info_tabs_module .less_option:after{content:"- less";display:block}.info_tabs_module .more_option:after{content:"+ more";display:block}.info_tabs_module footer{width:90%;max-width:1330px;position:absolute;bottom:50px;right:0;left:0;margin:auto;text-align:right}.info_tabs_module .more_link{font-size:1rem;font-weight:500;text-transform:uppercase;color:white;cursor:pointer}span.no_wrap{display:inline;white-space:nowrap}.info_tip{white-space:normal;display:inline-block;vertical-align:middle;padding:14px 3px 0 3px;margin-top:-10px}.info_tip:hover svg path{fill:black}.homepage_dashboard_modal .info_tip svg circle{fill:#d9cbbe}.homepage_dashboard_modal .info_tip svg .inner_icon{fill:black}#iframe_overlay .info_tip svg circle{fill:#d9cbbe}#iframe_overlay .info_tip svg .inner_icon{fill:black}#primary_column .info_tip{margin-top:-17px}.info_tip .info_icon{display:inline;text-align:center;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email 
protected]") -100px 0;background-size:300px;background:none}.info_tip .info_text{display:none}.info_tip .info_icon:before{display:none;content:'';position:absolute;width:15px;height:15px;background:white;border-right:1px solid #c9c9c9;border-bottom:1px solid #c9c9c9;z-index:1;transform:rotate(45deg);margin-top:-24px;margin-left:2px}.info_tip .info_text{display:none;position:absolute;color:#222;font-size:.9em;background-color:white;padding:18px;border:1px solid #c9c9c9;border-radius:4px;width:calc(100% + 12px);left:-6px;transform:translateY(-100%);margin-top:-41px}.homepage_dashboard_modal .info_tip .info_text{left:0}@media (min-width: 1200px){.info_tip .info_text{font-size:.8em}}.info_tip.open .info_icon:before{display:block}.info_tip.open .info_text{display:block}.homepage_dashboard_modal .info_tip .info_icon:before{background-color:#221307;border-right:1px solid #5f4326;border-bottom:1px solid #5f4326}.homepage_dashboard_modal .info_tip .info_text{color:#beb0a4;background-color:#221307;border:1px solid #5f4326}@media (min-width: 769px){.parallax_categorized_teaser .bubble_container{max-height:788px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .bubble_container .oculus{transform:scale(0.8)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:214px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .bubble_container .oculus{transform:scale(0.9)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:119px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:70px;left:408px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){left:0;top:269px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:347px;left:290px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .bubble_container .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:179px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:104px;left:495px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:282px;left:0}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:390px;left:324px}}@media (min-width: 769px) and (min-width: 1700px){.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(1){top:0;left:229px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(2){top:134px;left:565px}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(3){top:272px;left:0}.parallax_categorized_teaser .bubble_container .oculus:nth-of-type(4){top:420px;left:354px}}@media (min-width: 769px){.parallax_categorized_teaser .bubble_container{height:75vh}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.two .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:-30px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), 
print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.two .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:30px;left:70px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:30px;left:378px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.two .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(1){top:40px;left:70px}.parallax_categorized_teaser .definition_teasers.two .oculus:nth-of-type(2){top:40px;left:422px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.three .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:-30px;left:100px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:185px;left:-20px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:185px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.three .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:0;left:204px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:280px;left:30px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:280px;left:378px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.three .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(1){top:20px;left:226px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(2){top:310px;left:30px}.parallax_categorized_teaser .definition_teasers.three .oculus:nth-of-type(3){top:310px;left:432px}}@media (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four{max-height:788px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:214px;left:224px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.four .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:119px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:70px;left:408px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){left:0;top:269px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:347px;left:290px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.four .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:179px}.parallax_categorized_teaser .definition_teasers.four 
.oculus:nth-of-type(2){top:104px;left:495px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:282px;left:0}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:390px;left:324px}}@media (min-width: 769px) and (min-width: 1700px){.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(1){top:0;left:229px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(2){top:134px;left:565px}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(3){top:272px;left:0}.parallax_categorized_teaser .definition_teasers.four .oculus:nth-of-type(4){top:420px;left:354px}}@media (min-width: 769px) and (min-width: 769px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.five .oculus{transform:scale(0.8)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:-30px;left:-20px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:-30px;left:224px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:214px;left:-20px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(4){top:214px;left:224px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:432px;left:102px}}@media (min-width: 769px) and (min-width: 1024px), print and (min-width: 769px){.parallax_categorized_teaser .definition_teasers.five .oculus{transform:scale(0.9)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:0;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:0;left:408px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:200px;left:204px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(4){top:400px;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:400px;left:408px}}@media (min-width: 769px) and (min-width: 1200px){.parallax_categorized_teaser .definition_teasers.five .oculus{position:absolute;transform:scale(1)}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(1){top:0;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(2){top:0;left:452px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(3){top:200px;left:226px}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(4){top:400px;left:0}.parallax_categorized_teaser .definition_teasers.five .oculus:nth-of-type(5){top:400px;left:452px}}.parallax_categorized_teaser{height:auto;background:url("https://mars.nasa.gov/assets/red_planet_bg.jpg") center no-repeat;background-size:cover;color:#eeaaa1;overflow:hidden;position:relative;padding-top:3em}@media (min-width: 769px){.parallax_categorized_teaser{height:calc(100vh - 74px);padding-top:4em;min-height:860px}}@media (min-width: 1200px){.parallax_categorized_teaser{padding-top:5em}}@media (min-width: 769px){.parallax_categorized_teaser .module_content{height:calc(100% - 36px)}}.parallax_categorized_teaser .module_content.fixed{position:fixed;top:82px;left:0}.parallax_categorized_teaser .module_title{font-size:2em;font-weight:200;text-align:center;margin-bottom:0.85em}@media (min-width: 769px){.parallax_categorized_teaser .module_title{text-align:left;margin-bottom:1em}}@media (min-width: 1200px){.parallax_categorized_teaser .module_title{font-size:2.2em}}.parallax_categorized_teaser .mobile_only{display:block}@media (min-width: 
1.4em;background-repeat:no-repeat;background-position:right;position:absolute;top:0;left:0;width:100%;height:100%}}.pagination_options{padding:0 0 1em 0}.no_results .pagination_options{display:none}.pagination_options .options{display:inline-block;float:right;width:100%;text-align:center}@media (min-width: 1024px), print{.pagination_options .options{width:auto}}.pagination_options .options_field[name="sort"]{margin-right:1.6em}.pagination_status{float:left;margin-bottom:1.2em;text-align:left;width:calc(100% - 176px)}@media (min-width: 1024px), print{.pagination_status{width:100%}.search_results .pagination_status{width:auto}}.pagination_nav{display:block;float:right;text-align:center;margin-top:-4px;margin-bottom:.7em}@media (min-width: 1024px), print{.pagination_nav{float:left;clear:both;margin-bottom:0}.search_results .pagination_nav{clear:none;float:right;margin-bottom:.7em}}.pagination_nav span.total_pages{padding:0 0 0 8px;display:inline-block}.pagination_nav .page_selector{display:inline-block;vertical-align:super;margin:0 4px 0 6px;font-size:1em}.pagination_nav .page_selector input[type=number]{-moz-appearance:textfield}.pagination_nav .page_selector input[type=number]::-webkit-inner-spin-button,.pagination_nav .page_selector input[type=number]::-webkit-outer-spin-button{-webkit-appearance:none;margin:0}.pagination_nav .page_selector input{width:2.8em;margin-right:2px;text-align:center;border:2px solid #C1C1C1}.pagination_nav .prev,.pagination_nav .next{display:inline-block;cursor:pointer;padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px -100px;background-size:300px;opacity:0.6}.pagination_nav .prev:hover,.pagination_nav .prev.active,.pagination_nav .next:hover,.pagination_nav .next.active{background:url("https://mars.nasa.gov/assets/[email protected]") -25px -100px;background-size:300px}.pagination_nav .prev:hover,.pagination_nav .next:hover{opacity:1}.pagination_nav .prev.disabled,.pagination_nav .next.disabled{opacity:.1;cursor:default}.pagination_nav .prev.disabled a,.pagination_nav .next.disabled a{cursor:default}.pagination_nav .prev a,.pagination_nav .next a{width:100%;height:100%;display:block}.pagination_nav .next{transform:rotate(180deg)}.pagination_nav input:-webkit-autofill{-webkit-box-shadow:0 0 0px 1000px white inset;-webkit-text-fill-color:#222}.page_option_wrapper{display:inline-block;margin-bottom:.6em}@media (min-width: 480px){.page_option_wrapper .options_field{margin-left:.5em}}.page_option_wrapper label{font-size:16px}@media (max-width: 480px){.page_option_wrapper label:after{content:":"}}.page_option_wrapper select{vertical-align:top}@media (max-width: 480px){.page_option_wrapper select{border:none;-webkit-appearance:none;-o-appearance:none;-moz-appearance:none;color:#257cdf;padding-left:0.3em;background-color:transparent}}.page_option_wrapper.sort_by{text-align:left;float:left}@media (min-width: 480px){.page_option_wrapper.sort_by{width:auto}}.page_option_wrapper.sort_by select{max-width:200px}@media (min-width: 480px){.page_option_wrapper.sort_by select{width:auto;min-width:130px}}.page_option_wrapper.per_page{text-align:right;margin-left:2%;float:right}@media (min-width: 480px){.page_option_wrapper.per_page{width:auto;margin-left:1em}}@media (min-width: 480px){.page_option_wrapper.per_page select{min-width:48px}}.nonessential{display:none}@media (min-width: 769px), print{.nonessential{display:inline}}footer.search_footer .pagination_status{display:none}footer.search_footer 
.pagination_nav{float:left}footer.search_footer .scroll_to_top{float:right;text-align:right;font-weight:700}/*! jQuery UI - v1.11.2 - 2015-02-04
* http://jqueryui.com
* Includes: core.css, draggable.css, sortable.css, datepicker.css
* Copyright 2015 jQuery Foundation and other contributors; Licensed MIT */.ui-helper-hidden{display:none}.ui-helper-hidden-accessible{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.ui-helper-reset{margin:0;padding:0;border:0;outline:0;line-height:1.3;text-decoration:none;font-size:100%;list-style:none}.ui-helper-clearfix:before,.ui-helper-clearfix:after{content:"";display:table;border-collapse:collapse}.ui-helper-clearfix:after{clear:both}.ui-helper-clearfix{min-height:0}.ui-helper-zfix{width:100%;height:100%;top:0;left:0;position:absolute;opacity:0;filter:Alpha(Opacity=0)}.ui-front{z-index:100}.ui-state-disabled{cursor:default !important}.ui-icon{display:block;text-indent:-99999px;overflow:hidden;background-repeat:no-repeat}.ui-widget-overlay{position:fixed;top:0;left:0;width:100%;height:100%}.ui-draggable-handle{touch-action:none}.ui-sortable-handle{touch-action:none}.ui-datepicker{width:17em;padding:.2em .2em 0;display:none}.ui-datepicker .ui-datepicker-header{position:relative;padding:.2em 0}.ui-datepicker .ui-datepicker-prev,.ui-datepicker .ui-datepicker-next{position:absolute;top:2px;width:1.8em;height:1.8em}.ui-datepicker .ui-datepicker-prev-hover,.ui-datepicker .ui-datepicker-next-hover{top:1px}.ui-datepicker .ui-datepicker-prev{left:2px}.ui-datepicker .ui-datepicker-next{right:2px}.ui-datepicker .ui-datepicker-prev-hover{left:1px}.ui-datepicker .ui-datepicker-next-hover{right:1px}.ui-datepicker .ui-datepicker-prev span,.ui-datepicker .ui-datepicker-next span{display:block;position:absolute;left:50%;margin-left:-8px;top:50%;margin-top:-8px}.ui-datepicker .ui-datepicker-title{margin:0 2.3em;line-height:1.8em;text-align:center}.ui-datepicker .ui-datepicker-title select{font-size:1em;margin:1px 0}.ui-datepicker select.ui-datepicker-month,.ui-datepicker select.ui-datepicker-year{width:45%}.ui-datepicker table{width:100%;font-size:.9em;border-collapse:collapse;margin:0 0 .4em}.ui-datepicker th{padding:.7em .3em;text-align:center;font-weight:bold;border:0}.ui-datepicker td{border:0;padding:1px}.ui-datepicker td span,.ui-datepicker td a{display:block;padding:.2em;text-align:right;text-decoration:none}.ui-datepicker .ui-datepicker-buttonpane{background-image:none;margin:.7em 0 0 0;padding:0 .2em;border-left:0;border-right:0;border-bottom:0}.ui-datepicker .ui-datepicker-buttonpane button{float:right;margin:.5em .2em .4em;cursor:pointer;padding:.2em .6em .3em .6em;width:auto;overflow:visible}.ui-datepicker .ui-datepicker-buttonpane button.ui-datepicker-current{float:left}.ui-datepicker.ui-datepicker-multi{width:auto}.ui-datepicker-multi .ui-datepicker-group{float:left}.ui-datepicker-multi .ui-datepicker-group table{width:95%;margin:0 auto .4em}.ui-datepicker-multi-2 .ui-datepicker-group{width:50%}.ui-datepicker-multi-3 .ui-datepicker-group{width:33.3%}.ui-datepicker-multi-4 .ui-datepicker-group{width:25%}.ui-datepicker-multi .ui-datepicker-group-last .ui-datepicker-header,.ui-datepicker-multi .ui-datepicker-group-middle .ui-datepicker-header{border-left-width:0}.ui-datepicker-multi .ui-datepicker-buttonpane{clear:left}.ui-datepicker-row-break{clear:both;width:100%;font-size:0}.ui-datepicker-rtl{direction:rtl}.ui-datepicker-rtl .ui-datepicker-prev{right:2px;left:auto}.ui-datepicker-rtl .ui-datepicker-next{left:2px;right:auto}.ui-datepicker-rtl .ui-datepicker-prev:hover{right:1px;left:auto}.ui-datepicker-rtl .ui-datepicker-next:hover{left:1px;right:auto}.ui-datepicker-rtl .ui-datepicker-buttonpane{clear:right}.ui-datepicker-rtl 
.ui-datepicker-buttonpane button{float:left}.ui-datepicker-rtl .ui-datepicker-buttonpane button.ui-datepicker-current,.ui-datepicker-rtl .ui-datepicker-group{float:right}.ui-datepicker-rtl .ui-datepicker-group-last .ui-datepicker-header,.ui-datepicker-rtl .ui-datepicker-group-middle .ui-datepicker-header{border-right-width:0;border-left-width:1px}/*!
* jQuery UI CSS Framework 1.11.2
* http://jqueryui.com
*
* Copyright 2014 jQuery Foundation and other contributors
* Released under the MIT license.
* http://jquery.org/license
*
* http://api.jqueryui.com/category/theming/
*
* To view and modify this theme, visit http://jqueryui.com/themeroller/?ffDefault=Segoe%20UI%2CArial%2Csans-serif&fwDefault=bold&fsDefault=1.1em&cornerRadius=6px&bgColorHeader=333333&bgTextureHeader=gloss_wave&bgImgOpacityHeader=25&borderColorHeader=333333&fcHeader=ffffff&iconColorHeader=ffffff&bgColorContent=000000&bgTextureContent=inset_soft&bgImgOpacityContent=25&borderColorContent=666666&fcContent=ffffff&iconColorContent=cccccc&bgColorDefault=555555&bgTextureDefault=glass&bgImgOpacityDefault=20&borderColorDefault=666666&fcDefault=eeeeee&iconColorDefault=cccccc&bgColorHover=0078a3&bgTextureHover=glass&bgImgOpacityHover=40&borderColorHover=59b4d4&fcHover=ffffff&iconColorHover=ffffff&bgColorActive=f58400&bgTextureActive=inset_soft&bgImgOpacityActive=30&borderColorActive=ffaf0f&fcActive=ffffff&iconColorActive=222222&bgColorHighlight=eeeeee&bgTextureHighlight=highlight_soft&bgImgOpacityHighlight=80&borderColorHighlight=cccccc&fcHighlight=2e7db2&iconColorHighlight=4b8e0b&bgColorError=ffc73d&bgTextureError=glass&bgImgOpacityError=40&borderColorError=ffb73d&fcError=111111&iconColorError=a83300&bgColorOverlay=5c5c5c&bgTextureOverlay=flat&bgImgOpacityOverlay=50&opacityOverlay=80&bgColorShadow=cccccc&bgTextureShadow=flat&bgImgOpacityShadow=30&opacityShadow=60&thicknessShadow=7px&offsetTopShadow=-7px&offsetLeftShadow=-7px&cornerRadiusShadow=8px
*/.ui-widget{font-family:Segoe UI,Arial,sans-serif;font-size:1.1em}.ui-widget .ui-widget{font-size:1em}.ui-widget input,.ui-widget select,.ui-widget textarea,.ui-widget button{font-family:Segoe UI,Arial,sans-serif;font-size:1em}.ui-widget-content{border:1px solid #666666;background:#000 url("https://mars.nasa.gov/assets/images/ui-bg_inset-soft_25_000000_1x100.png") 50% bottom repeat-x;color:#ffffff}.ui-widget-content a{color:#ffffff}.ui-widget-header{border:1px solid #333333;background:#333 url("https://mars.nasa.gov/assets/images/ui-bg_gloss-wave_25_333333_500x100.png") 50% 50% repeat-x;color:#ffffff;font-weight:bold}.ui-widget-header a{color:#ffffff}.ui-state-default,.ui-widget-content .ui-state-default,.ui-widget-header .ui-state-default{border:1px solid #666666;background:#555 url("https://mars.nasa.gov/assets/images/ui-bg_glass_20_555555_1x400.png") 50% 50% repeat-x;font-weight:bold;color:#eeeeee}.ui-state-default a,.ui-state-default a:link,.ui-state-default a:visited{color:#eeeeee;text-decoration:none}.ui-state-hover,.ui-widget-content .ui-state-hover,.ui-widget-header .ui-state-hover,.ui-state-focus,.ui-widget-content .ui-state-focus,.ui-widget-header .ui-state-focus{border:1px solid #59b4d4;background:#0078a3 url("https://mars.nasa.gov/assets/images/ui-bg_glass_40_0078a3_1x400.png") 50% 50% repeat-x;font-weight:bold;color:#ffffff}.ui-state-hover a,.ui-state-hover a:hover,.ui-state-hover a:link,.ui-state-hover a:visited,.ui-state-focus a,.ui-state-focus a:hover,.ui-state-focus a:link,.ui-state-focus a:visited{color:#ffffff;text-decoration:none}.ui-state-active,.ui-widget-content .ui-state-active,.ui-widget-header .ui-state-active{border:1px solid #ffaf0f;background:#f58400 url("https://mars.nasa.gov/assets/images/ui-bg_inset-soft_30_f58400_1x100.png") 50% 50% repeat-x;font-weight:bold;color:#ffffff}.ui-state-active a,.ui-state-active a:link,.ui-state-active a:visited{color:#ffffff;text-decoration:none}.ui-state-highlight,.ui-widget-content .ui-state-highlight,.ui-widget-header .ui-state-highlight{border:1px solid #cccccc;background:#eee url("https://mars.nasa.gov/assets/images/ui-bg_highlight-soft_80_eeeeee_1x100.png") 50% top repeat-x;color:#2e7db2}.ui-state-highlight a,.ui-widget-content .ui-state-highlight a,.ui-widget-header .ui-state-highlight a{color:#2e7db2}.ui-state-error,.ui-widget-content .ui-state-error,.ui-widget-header .ui-state-error{border:1px solid #ffb73d;background:#ffc73d url("https://mars.nasa.gov/assets/images/ui-bg_glass_40_ffc73d_1x400.png") 50% 50% repeat-x;color:#111111}.ui-state-error a,.ui-widget-content .ui-state-error a,.ui-widget-header .ui-state-error a{color:#111111}.ui-state-error-text,.ui-widget-content .ui-state-error-text,.ui-widget-header .ui-state-error-text{color:#111111}.ui-priority-primary,.ui-widget-content .ui-priority-primary,.ui-widget-header .ui-priority-primary{font-weight:bold}.ui-priority-secondary,.ui-widget-content .ui-priority-secondary,.ui-widget-header .ui-priority-secondary{opacity:.7;filter:Alpha(Opacity=70);font-weight:normal}.ui-state-disabled,.ui-widget-content .ui-state-disabled,.ui-widget-header .ui-state-disabled{opacity:.35;filter:Alpha(Opacity=35);background-image:none}.ui-state-disabled .ui-icon{filter:Alpha(Opacity=35)}.ui-icon{width:16px;height:16px}.ui-icon-blank{background-position:16px 16px}.ui-icon-carat-1-n{background-position:0 0}.ui-icon-carat-1-ne{background-position:-16px 0}.ui-icon-carat-1-e{background-position:-32px 0}.ui-icon-carat-1-se{background-position:-48px 
0}.ui-icon-carat-1-s{background-position:-64px 0}.ui-icon-carat-1-sw{background-position:-80px 0}.ui-icon-carat-1-w{background-position:-96px 0}.ui-icon-carat-1-nw{background-position:-112px 0}.ui-icon-carat-2-n-s{background-position:-128px 0}.ui-icon-carat-2-e-w{background-position:-144px 0}.ui-icon-triangle-1-n{background-position:0 -16px}.ui-icon-triangle-1-ne{background-position:-16px -16px}.ui-icon-triangle-1-e{background-position:-32px -16px}.ui-icon-triangle-1-se{background-position:-48px -16px}.ui-icon-triangle-1-s{background-position:-64px -16px}.ui-icon-triangle-1-sw{background-position:-80px -16px}.ui-icon-triangle-1-w{background-position:-96px -16px}.ui-icon-triangle-1-nw{background-position:-112px -16px}.ui-icon-triangle-2-n-s{background-position:-128px -16px}.ui-icon-triangle-2-e-w{background-position:-144px -16px}.ui-icon-arrow-1-n{background-position:0 -32px}.ui-icon-arrow-1-ne{background-position:-16px -32px}.ui-icon-arrow-1-e{background-position:-32px -32px}.ui-icon-arrow-1-se{background-position:-48px -32px}.ui-icon-arrow-1-s{background-position:-64px -32px}.ui-icon-arrow-1-sw{background-position:-80px -32px}.ui-icon-arrow-1-w{background-position:-96px -32px}.ui-icon-arrow-1-nw{background-position:-112px -32px}.ui-icon-arrow-2-n-s{background-position:-128px -32px}.ui-icon-arrow-2-ne-sw{background-position:-144px -32px}.ui-icon-arrow-2-e-w{background-position:-160px -32px}.ui-icon-arrow-2-se-nw{background-position:-176px -32px}.ui-icon-arrowstop-1-n{background-position:-192px -32px}.ui-icon-arrowstop-1-e{background-position:-208px -32px}.ui-icon-arrowstop-1-s{background-position:-224px -32px}.ui-icon-arrowstop-1-w{background-position:-240px -32px}.ui-icon-arrowthick-1-n{background-position:0 -48px}.ui-icon-arrowthick-1-ne{background-position:-16px -48px}.ui-icon-arrowthick-1-e{background-position:-32px -48px}.ui-icon-arrowthick-1-se{background-position:-48px -48px}.ui-icon-arrowthick-1-s{background-position:-64px -48px}.ui-icon-arrowthick-1-sw{background-position:-80px -48px}.ui-icon-arrowthick-1-w{background-position:-96px -48px}.ui-icon-arrowthick-1-nw{background-position:-112px -48px}.ui-icon-arrowthick-2-n-s{background-position:-128px -48px}.ui-icon-arrowthick-2-ne-sw{background-position:-144px -48px}.ui-icon-arrowthick-2-e-w{background-position:-160px -48px}.ui-icon-arrowthick-2-se-nw{background-position:-176px -48px}.ui-icon-arrowthickstop-1-n{background-position:-192px -48px}.ui-icon-arrowthickstop-1-e{background-position:-208px -48px}.ui-icon-arrowthickstop-1-s{background-position:-224px -48px}.ui-icon-arrowthickstop-1-w{background-position:-240px -48px}.ui-icon-arrowreturnthick-1-w{background-position:0 -64px}.ui-icon-arrowreturnthick-1-n{background-position:-16px -64px}.ui-icon-arrowreturnthick-1-e{background-position:-32px -64px}.ui-icon-arrowreturnthick-1-s{background-position:-48px -64px}.ui-icon-arrowreturn-1-w{background-position:-64px -64px}.ui-icon-arrowreturn-1-n{background-position:-80px -64px}.ui-icon-arrowreturn-1-e{background-position:-96px -64px}.ui-icon-arrowreturn-1-s{background-position:-112px -64px}.ui-icon-arrowrefresh-1-w{background-position:-128px -64px}.ui-icon-arrowrefresh-1-n{background-position:-144px -64px}.ui-icon-arrowrefresh-1-e{background-position:-160px -64px}.ui-icon-arrowrefresh-1-s{background-position:-176px -64px}.ui-icon-arrow-4{background-position:0 -80px}.ui-icon-arrow-4-diag{background-position:-16px -80px}.ui-icon-extlink{background-position:-32px -80px}.ui-icon-newwin{background-position:-48px 
-80px}.ui-icon-refresh{background-position:-64px -80px}.ui-icon-shuffle{background-position:-80px -80px}.ui-icon-transfer-e-w{background-position:-96px -80px}.ui-icon-transferthick-e-w{background-position:-112px -80px}.ui-icon-folder-collapsed{background-position:0 -96px}.ui-icon-folder-open{background-position:-16px -96px}.ui-icon-document{background-position:-32px -96px}.ui-icon-document-b{background-position:-48px -96px}.ui-icon-note{background-position:-64px -96px}.ui-icon-mail-closed{background-position:-80px -96px}.ui-icon-mail-open{background-position:-96px -96px}.ui-icon-suitcase{background-position:-112px -96px}.ui-icon-comment{background-position:-128px -96px}.ui-icon-person{background-position:-144px -96px}.ui-icon-print{background-position:-160px -96px}.ui-icon-trash{background-position:-176px -96px}.ui-icon-locked{background-position:-192px -96px}.ui-icon-unlocked{background-position:-208px -96px}.ui-icon-bookmark{background-position:-224px -96px}.ui-icon-tag{background-position:-240px -96px}.ui-icon-home{background-position:0 -112px}.ui-icon-flag{background-position:-16px -112px}.ui-icon-calendar{background-position:-32px -112px}.ui-icon-cart{background-position:-48px -112px}.ui-icon-pencil{background-position:-64px -112px}.ui-icon-clock{background-position:-80px -112px}.ui-icon-disk{background-position:-96px -112px}.ui-icon-calculator{background-position:-112px -112px}.ui-icon-zoomin{background-position:-128px -112px}.ui-icon-zoomout{background-position:-144px -112px}.ui-icon-search{background-position:-160px -112px}.ui-icon-wrench{background-position:-176px -112px}.ui-icon-gear{background-position:-192px -112px}.ui-icon-heart{background-position:-208px -112px}.ui-icon-star{background-position:-224px -112px}.ui-icon-link{background-position:-240px -112px}.ui-icon-cancel{background-position:0 -128px}.ui-icon-plus{background-position:-16px -128px}.ui-icon-plusthick{background-position:-32px -128px}.ui-icon-minus{background-position:-48px -128px}.ui-icon-minusthick{background-position:-64px -128px}.ui-icon-close{background-position:-80px -128px}.ui-icon-closethick{background-position:-96px -128px}.ui-icon-key{background-position:-112px -128px}.ui-icon-lightbulb{background-position:-128px -128px}.ui-icon-scissors{background-position:-144px -128px}.ui-icon-clipboard{background-position:-160px -128px}.ui-icon-copy{background-position:-176px -128px}.ui-icon-contact{background-position:-192px -128px}.ui-icon-image{background-position:-208px -128px}.ui-icon-video{background-position:-224px -128px}.ui-icon-script{background-position:-240px -128px}.ui-icon-alert{background-position:0 -144px}.ui-icon-info{background-position:-16px -144px}.ui-icon-notice{background-position:-32px -144px}.ui-icon-help{background-position:-48px -144px}.ui-icon-check{background-position:-64px -144px}.ui-icon-bullet{background-position:-80px -144px}.ui-icon-radio-on{background-position:-96px -144px}.ui-icon-radio-off{background-position:-112px -144px}.ui-icon-pin-w{background-position:-128px -144px}.ui-icon-pin-s{background-position:-144px -144px}.ui-icon-play{background-position:0 -160px}.ui-icon-pause{background-position:-16px -160px}.ui-icon-seek-next{background-position:-32px -160px}.ui-icon-seek-prev{background-position:-48px -160px}.ui-icon-seek-end{background-position:-64px -160px}.ui-icon-seek-start{background-position:-80px -160px}.ui-icon-seek-first{background-position:-80px -160px}.ui-icon-stop{background-position:-96px -160px}.ui-icon-eject{background-position:-112px 
-160px}.ui-icon-volume-off{background-position:-128px -160px}.ui-icon-volume-on{background-position:-144px -160px}.ui-icon-power{background-position:0 -176px}.ui-icon-signal-diag{background-position:-16px -176px}.ui-icon-signal{background-position:-32px -176px}.ui-icon-battery-0{background-position:-48px -176px}.ui-icon-battery-1{background-position:-64px -176px}.ui-icon-battery-2{background-position:-80px -176px}.ui-icon-battery-3{background-position:-96px -176px}.ui-icon-circle-plus{background-position:0 -192px}.ui-icon-circle-minus{background-position:-16px -192px}.ui-icon-circle-close{background-position:-32px -192px}.ui-icon-circle-triangle-e{background-position:-48px -192px}.ui-icon-circle-triangle-s{background-position:-64px -192px}.ui-icon-circle-triangle-w{background-position:-80px -192px}.ui-icon-circle-triangle-n{background-position:-96px -192px}.ui-icon-circle-arrow-e{background-position:-112px -192px}.ui-icon-circle-arrow-s{background-position:-128px -192px}.ui-icon-circle-arrow-w{background-position:-144px -192px}.ui-icon-circle-arrow-n{background-position:-160px -192px}.ui-icon-circle-zoomin{background-position:-176px -192px}.ui-icon-circle-zoomout{background-position:-192px -192px}.ui-icon-circle-check{background-position:-208px -192px}.ui-icon-circlesmall-plus{background-position:0 -208px}.ui-icon-circlesmall-minus{background-position:-16px -208px}.ui-icon-circlesmall-close{background-position:-32px -208px}.ui-icon-squaresmall-plus{background-position:-48px -208px}.ui-icon-squaresmall-minus{background-position:-64px -208px}.ui-icon-squaresmall-close{background-position:-80px -208px}.ui-icon-grip-dotted-vertical{background-position:0 -224px}.ui-icon-grip-dotted-horizontal{background-position:-16px -224px}.ui-icon-grip-solid-vertical{background-position:-32px -224px}.ui-icon-grip-solid-horizontal{background-position:-48px -224px}.ui-icon-gripsmall-diagonal-se{background-position:-64px -224px}.ui-icon-grip-diagonal-se{background-position:-80px -224px}.ui-corner-all,.ui-corner-top,.ui-corner-left,.ui-corner-tl{border-top-left-radius:6px}.ui-corner-all,.ui-corner-top,.ui-corner-right,.ui-corner-tr{border-top-right-radius:6px}.ui-corner-all,.ui-corner-bottom,.ui-corner-left,.ui-corner-bl{border-bottom-left-radius:6px}.ui-corner-all,.ui-corner-bottom,.ui-corner-right,.ui-corner-br{border-bottom-right-radius:6px}.ui-widget-overlay{background:#5c5c5c url("https://mars.nasa.gov/assets/images/ui-bg_flat_50_5c5c5c_40x100.png") 50% 50% repeat-x;opacity:.8;filter:Alpha(Opacity=80)}.ui-widget-shadow{margin:-7px 0 0 -7px;padding:7px;background:#ccc url("https://mars.nasa.gov/assets/images/ui-bg_flat_30_cccccc_40x100.png") 50% 50% repeat-x;opacity:.6;filter:Alpha(Opacity=60);border-radius:8px}body .ui-datepicker{background:#175C84;border:none;border-radius:2px;padding-left:10px;padding-right:10px;padding-bottom:5px;z-index:99999 !important;font-size:1.05em !important;font-family:inherit}body .ui-datepicker table{border-collapse:separate;background:#175C84}body .ui-datepicker td{padding:1px;width:35px;height:25px}body .ui-datepicker .ui-datepicker-header{background:none;border:none;margin-bottom:0.3em;margin-top:0.6em}body .ui-datepicker .ui-datepicker-title{margin:0 1.8em;margin-top:-3px}body .ui-datepicker select.ui-datepicker-month,body .ui-datepicker .ui-datepicker select.ui-datepicker-year{width:40%}body .ui-datepicker select.ui-datepicker-month{margin-right:5%}body .ui-datepicker 
.ui-datepicker-prev{cursor:pointer;padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px -150px;background-size:300px;transform:rotate(180deg)}body .ui-datepicker .ui-datepicker-prev:hover,body .ui-datepicker .ui-datepicker-prev.active{background:url("https://mars.nasa.gov/assets/[email protected]") -25px -175px;background-size:300px}body .ui-datepicker .ui-datepicker-prev .ui-state-hover{opacity:2}body .ui-datepicker .ui-datepicker-next{cursor:pointer;padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -25px -150px;background-size:300px}body .ui-datepicker .ui-datepicker-next:hover,body .ui-datepicker .ui-datepicker-next.active{background:url("https://mars.nasa.gov/assets/[email protected]") -25px -175px;background-size:300px}body .ui-datepicker .ui-datepicker-next .ui-state-hover{opacity:2}body .ui-datepicker .ui-datepicker-calendar{border-radius:4px;padding:1px}body .ui-datepicker .ui-datepicker-calendar thead{color:white}body .ui-datepicker .ui-state-default,body .ui-datepicker .ui-widget-content .ui-state-default,body .ui-datepicker .ui-widget-header .ui-state-default{font-weight:bold;border:none;background:transparent;color:#FFF;border-radius:0;text-align:center}body .ui-datepicker .ui-state-disabled,body .ui-datepicker .ui-widget-content .ui-state-disabled,body .ui-datepicker .ui-widget-header .ui-state-disabled{opacity:.45;filter:Alpha(Opacity=45)}body .ui-datepicker .ui-priority-secondary,body .ui-datepicker .ui-widget-content .ui-priority-secondary,body .ui-datepicker .ui-widget-header .ui-priority-secondary{opacity:1;-webkit-filter:none;filter:none}body .ui-datepicker .ui-state-active,body .ui-datepicker .ui-state-active.ui-state-hover{border:none;background:#F1F9FF;color:#175C83}body .ui-datepicker :not(.ui-datepicker-header) .ui-state-hover{border:none;background:#F1F9FF;color:#175C83}body .ui-datepicker .ui-datepicker-header .ui-state-hover{background-color:transparent;border:0}body .ui-datepicker td.ui-datepicker-today a.ui-state-hover,body .ui-datepicker td.ui-datepicker-today a.ui-state-highlight{background-color:#A1BCD0;color:#175C83}body .ui-datepicker td.ui-datepicker-today a:hover,body .ui-datepicker td.ui-datepicker-today a.ui-state-active{border:none;background:#F1F9FF;color:#175C83}body .ui-datepicker .ui-datepicker-buttonpane{display:flex;justify-content:space-between;border:none;background-color:transparent}body .ui-datepicker .ui-datepicker-buttonpane button{font-weight:600 !important;font-size:14px;padding:0.4em 0.8em 0.5em 0.8em;text-transform:uppercase;margin-bottom:0.8em}body .ui-datepicker .ui-datepicker-buttonpane button:hover{background:#F1F9FF;color:#175C83}body .ui-datepicker .ui-datepicker-buttonpane button.ui-datepicker-close{order:3}.grid_view #ui-datepicker-div{left:calc(50% - 136px) !important}@media (max-width: 599px){.datepicker_mask{position:absolute;top:0;left:0;width:100%;height:100%;background-color:rgba(0,0,0,0.8);z-index:100}#ui-datepicker-div{position:absolute;left:0 !important;right:0 !important;top:10vh !important;margin:auto !important}}.faceted_search .gallery_header{border-bottom:2px solid #E5E5E5;padding-bottom:1.4em}.faceted_search .gallery_header .article_title{margin-bottom:1rem}.faceted_search .gallery_header .section_search{max-width:none}.faceted_search .gallery_header .search_binder{display:inline-block;vertical-align:top;margin-right:1.3%;width:100%;max-width:300px}@media (min-width: 769px), 
print{.faceted_search .gallery_header .search_binder{width:40%}}.faceted_search hr{border-top:2px solid #E5E5E5}.faceted_search.grid_view{background:white}.faceted_search.grid_view .content_title{letter-spacing:-.03em;display:none}.faceted_search.grid_view .image_and_description_container{min-height:0}.faceted_search.grid_view .article_teaser_body{display:none}.faceted_search.grid_view .list_date{display:none}.faceted_search.grid_view .list_image{width:100%;float:none;margin:0}.faceted_search.grid_view .bottom_gradient{color:#222;display:block;position:relative;margin-top:0.3rem;margin-top:.5rem;padding-bottom:0.4rem;text-align:left;min-height:52px}@media (min-width: 769px), print{.faceted_search.grid_view .bottom_gradient{margin-top:.5rem;min-height:85px}}.faceted_search.grid_view .bottom_gradient div{text-align:left}.faceted_search.grid_view .bottom_gradient h3{font-weight:600;font-size:0.95em}.faceted_search.grid_view li.slide{margin-bottom:.84034%;width:49.57983%;float:left}.faceted_search.grid_view li.slide:nth-child(2n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.grid_view li.slide:nth-child(2n+2){margin-left:50.42017%;margin-right:-100%;clear:none}@media (min-width: 600px), print{.faceted_search.grid_view li.slide{margin-bottom:.84034%;width:32.77311%;float:left}.faceted_search.grid_view li.slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.grid_view li.slide:nth-child(3n+2){margin-left:33.61345%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(3n+3){margin-left:67.22689%;margin-right:-100%;clear:none}}@media (min-width: 769px), print{.faceted_search.grid_view li.slide{width:24.36975%;float:left}.faceted_search.grid_view li.slide:nth-child(4n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.grid_view li.slide:nth-child(4n+2){margin-left:25.21008%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(4n+3){margin-left:50.42017%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(4n+4){margin-left:75.63025%;margin-right:-100%;clear:none}}@media (min-width: 1200px){.faceted_search.grid_view li.slide{width:19.32773%;float:left}.faceted_search.grid_view li.slide:nth-child(5n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.grid_view li.slide:nth-child(5n+2){margin-left:20.16807%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(5n+3){margin-left:40.33613%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(5n+4){margin-left:60.5042%;margin-right:-100%;clear:none}.faceted_search.grid_view li.slide:nth-child(5n+5){margin-left:80.67227%;margin-right:-100%;clear:none}}.faceted_search.grid_view li.slide a{display:block;text-decoration:none}.faceted_search.mosaics .gallery_header .article_title{font-size:1.2em;font-weight:500;margin-top:0.5rem;margin-bottom:0}@media (min-width: 769px), print{.faceted_search.mosaics.grid_view li.slide{margin-bottom:.84034%;width:32.77311%;float:left}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+2){margin-left:33.61345%;margin-right:-100%;clear:none}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+3){margin-left:67.22689%;margin-right:-100%;clear:none}}@media (min-width: 1200px){.faceted_search.mosaics.grid_view 
li.slide{margin-bottom:.84034%;width:32.77311%;float:left}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+1){margin-left:0;margin-right:-100%;clear:both;margin-left:0}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+2){margin-left:33.61345%;margin-right:-100%;clear:none}.faceted_search.mosaics.grid_view li.slide:nth-child(3n+3){margin-left:67.22689%;margin-right:-100%;clear:none}}.faceted_search.mosaics .view_selectors{display:none}.faceted_search.list_view .list_image{float:right;margin-left:4%;margin-bottom:.5em;width:32%}@media (min-width: 600px), print{.faceted_search.list_view .list_image{margin-left:0;margin-bottom:0;width:23.07692%;float:left;margin-right:2.5641%}}@media (min-width: 769px), print{.faceted_search.list_view .list_image{width:23.72881%;float:left;margin-right:1.69492%}}@media (min-width: 1024px), print{.faceted_search.list_view .list_image{width:23.72881%;float:left;margin-right:1.69492%}}.faceted_search.list_view .list_text{width:auto}@media (min-width: 600px), print{.faceted_search.list_view .list_text{width:74.35897%;float:right;margin-right:0}}@media (min-width: 769px), print{.faceted_search.list_view .list_text{width:74.57627%;float:right;margin-right:0}}@media (min-width: 1024px), print{.faceted_search.list_view .list_text{width:66.10169%;float:left;margin-right:1.69492%}}.faceted_search.list_view .content_title a{text-decoration:none;cursor:pointer;color:#222}.faceted_search.list_view .content_title a:hover{text-decoration:underline}.faceted_search.list_view .content_title{display:block;font-size:1.17em;margin-bottom:.1em;margin-bottom:.2em;font-weight:700;color:#222;letter-spacing:-.035em}@media (min-width: 600px), print{.faceted_search.list_view .content_title{font-size:1.35em;margin-bottom:.18em}}@media (min-width: 769px), print{.faceted_search.list_view .content_title{font-size:1.53em;margin-bottom:.26em}}@media (min-width: 1024px), print{.faceted_search.list_view .content_title{font-size:1.62em;margin-bottom:.29em}}@media (min-width: 1200px){.faceted_search.list_view .content_title{font-size:1.71em;margin-bottom:.32em}}.faceted_search.list_view .bottom_gradient{display:none}@media (min-width: 1024px), print{.faceted_search.list_view .article_teaser_body{font-size:1.1em}}.faceted_search.list_view li.slide:first-child{border-top:1px solid #CCC}.faceted_search.list_view li.slide{border-bottom:1px solid #CCC;padding:1.2em 0}.faceted_search.list_view li.slide a{text-decoration:none;cursor:pointer}.faceted_search .mobile_filter_btn{text-align:center;display:block}.faceted_search .mobile_filter_btn .button{margin-bottom:2.8em}@media (min-width: 600px), print{.faceted_search .mobile_filter_btn{display:none}}.faceted_search .filter_remove{padding:0;cursor:pointer;width:25px;height:25px;background:url("https://mars.nasa.gov/assets/[email protected]") -101px -201px;background-size:300px;display:inline-block;top:-1px;vertical-align:middle}.faceted_search .filter_remove:hover,.faceted_search .filter_remove.active{background:url("https://mars.nasa.gov/assets/[email protected]") -101px -201px;background-size:300px}.faceted_search .filtering_results{display:none;margin-bottom:1em}@media (min-width: 600px), print{.faceted_search .filtering_results{display:block}}.faceted_search .filtering_results .applied_filter{display:inline-block;margin:0 .5em .3em}.faceted_search .clear_filter{margin-left:1em;cursor:pointer}.faceted_search .mobile_clear_filters{padding-right:20px;vertical-align:middle}@media (min-width: 600px), print{.faceted_search 
.mobile_clear_filters{display:none}}.faceted_search .faceted_search_results .no_items{margin:2.8em 0;border:2px solid #E5E5E5;padding:1.4em;font-style:italic}.faceted_search input[type="checkbox"]{margin-bottom:0.4em;margin-right:.4em}.faceted_search input[type="checkbox"] label{font-size:0.9em;font-weight:400}.faceted_search input[type="checkbox"] label input{vertical-align:middle}.faceted_search input[type="checkbox"] label input+span{pointer-events:none}.faceted_search .facet_section.truncated ul li.truncatable{display:none !important}.faceted_search .facet_section.truncated .truncation_toggle:after{content:"+ more"}.faceted_search .facet_section.expanded .truncation_toggle:after{content:"- less"}.faceted_search .facet_section input.search_field{margin-top:0.3em}.faceted_search .facet_section .search_field{border:1px solid rgba(0,0,0,0.3);background-color:white;border-radius:4px;padding:8px 12px;margin-bottom:0.6em;font-size:16px}.faceted_search .facet_section .search_field.wide{width:100%}.faceted_search #secondary_column ul{margin-left:0;padding-left:1px}.faceted_search #secondary_column .date_option_wrapper{display:inline-block;margin-right:1%}.faceted_search #secondary_column .date_option_wrapper+.date_option_wrapper{margin-right:0}@media (min-width: 600px), print{.faceted_search #secondary_column .date_option_wrapper{margin-left:0;width:auto;margin-right:0}}.faceted_search #secondary_column .date_option_wrapper label.preposition{font-weight:bold;display:block}.faceted_search #secondary_column .date_option_wrapper label.preposition_small{display:none}.faceted_search #secondary_column .calendar_input_binder{position:relative;display:inline-block;width:100%;margin-bottom:.7em;cursor:pointer}.faceted_search #secondary_column .fs_datepicker,.faceted_search #secondary_column .mosaics_to,.faceted_search #secondary_column .mosaics_from{height:40px;font-size:16px;border:1px solid rgba(0,0,0,0.3);border-radius:4px;width:100%;margin:0 0.4em 0 0;padding:8px 32px 8px 8px;width:150px}.faceted_search #secondary_column .fs_datepicker:-webkit-autofill,.faceted_search #secondary_column .mosaics_to:-webkit-autofill,.faceted_search #secondary_column .mosaics_from:-webkit-autofill{-webkit-box-shadow:0 0 0px 1000px white inset;-webkit-text-fill-color:#222}.faceted_search #secondary_column .date_filters{margin:1em 0 -1em}.faceted_search #secondary_column .date_filters input[type="submit"]{text-indent:-9999px;width:100%;padding:13.5px 16px}@media (min-width: 600px), print{.faceted_search #secondary_column .date_filters input[type="submit"]{text-indent:0;padding:12px 13px}}.faceted_search #secondary_column .date_filters .right_arrow{position:absolute;left:0;right:0;margin:auto;top:30%;height:0px;width:0px;display:inline-block;border-radius:3px;border-top:6px solid transparent;border-left:9px solid white;border-bottom:6px solid transparent}@media (min-width: 600px), print{.faceted_search #secondary_column .date_filters .right_arrow{display:none}}.faceted_search #secondary_column .date_filters .datepicker_section{position:relative}.faceted_search #secondary_column .date_filters .datepicker_section label.preposition{display:block}.faceted_search #secondary_column .datepicker_section{padding:0 0.8em}.faceted_search #secondary_column .date_option_wrapper{display:inline-block;margin-right:1%}.faceted_search #secondary_column .date_option_wrapper+.date_option_wrapper{margin-right:0}@media (min-width: 600px), print{.faceted_search #secondary_column 
.date_option_wrapper{margin-left:0;width:auto;margin-right:0}}.faceted_search #secondary_column .date_option_wrapper label.preposition{font-weight:bold;display:block}.faceted_search #secondary_column .date_option_wrapper label.preposition_small{position:absolute;font-size:0.6em;left:9px;top:5px;color:#777}@media (min-width: 600px), print{.faceted_search #secondary_column .date_option_wrapper label.preposition_small{display:none}}.faceted_search #secondary_column .date_submit{position:absolute;right:0;top:0;width:9%;display:inline-block}@media (min-width: 600px), print{.faceted_search #secondary_column .date_submit{position:relative;width:auto}}.faceted_search #secondary_column button.ui-datepicker-trigger{vertical-align:middle;position:absolute;display:inline-block;right:6px;top:12px;width:38px;height:100%;padding:6px 8px 8px 8px;background:transparent;border:transparent}.faceted_search #secondary_column button.ui-datepicker-trigger:hover{opacity:.8}.faceted_search #secondary_column button.ui-datepicker-trigger img{width:22px;height:22px}.touchevents .faceted_search #secondary_column button.ui-datepicker-trigger{width:100%}.touchevents .faceted_search #secondary_column button.ui-datepicker-trigger img{float:right}.faceted_search #secondary_column .facet_section ul{margin-top:1.05em;margin-bottom:1.05em}.faceted_search #secondary_column .facet_section li{display:block;margin-bottom:0.35em}@supports (display: grid) and (grid-template-columns: max-content){.faceted_search #secondary_column .facet_section li label{display:grid;grid-template-columns:-webkit-max-content 1fr;grid-template-columns:max-content 1fr;align-items:center}.faceted_search #secondary_column .facet_section li input[type="checkbox"]{margin-bottom:0;margin-right:0.7em}}.faceted_search #secondary_column .mobile_filter_controls,.faceted_search #secondary_column .expand_section{display:none}@media (max-width: 599px){.faceted_search #secondary_column{display:none;position:fixed;right:0;top:0;z-index:41;width:100%;height:100%;background-color:rgba(0,0,0,0.5)}.faceted_search #secondary_column.open{display:block}.faceted_search #secondary_column .facets_sidebar{border:none;background-color:white;height:100%;float:right;width:87.5%;padding:0;overflow-y:scroll;-webkit-overflow-scrolling:touch}.faceted_search #secondary_column h4,.faceted_search #secondary_column fieldset{padding:0.8em;margin:0}.faceted_search #secondary_column .form_section.open fieldset{padding-bottom:0}.faceted_search #secondary_column .form_section.open fieldset.facet_section{padding-bottom:.8em}.faceted_search #secondary_column header{padding:1em;margin:0}.faceted_search #secondary_column hr{display:none}.faceted_search #secondary_column .mobile_filter_controls{display:block;position:absolute;top:10px;right:10px}.faceted_search #secondary_column .mobile_filter_controls a{padding:1em}.faceted_search #secondary_column h4{background-color:#f3f3f3}.faceted_search #secondary_column .form_section{position:relative;margin-bottom:2px}.faceted_search #secondary_column .form_section fieldset{display:none}.faceted_search #secondary_column .form_section .expand_section{display:block;position:absolute;top:5px;right:0px;cursor:pointer;padding:12px}.faceted_search #secondary_column .form_section .expand_section .arrow_down{border-left:6px solid transparent;border-right:6px solid transparent;border-top:6px solid #57585a;transition:transform 0.2s}.faceted_search #secondary_column .form_section.open fieldset,.faceted_search #secondary_column .form_section.open 
.featured_links{display:block}.faceted_search #secondary_column .form_section.open .expand_section .arrow_down{transform:rotate(180deg)}.faceted_search #secondary_column .form_section.open .distance{margin-bottom:1.6em}}.grid_layout .mosaics_container{margin-top:-1.5em}.study_list_item{margin:1em 0 0}.study_list_item a{text-decoration:none;color:black}.study_list_item h3{margin:1em 0}.study_list_item .study_description>div{margin:3px 0}.study_label{margin-right:1em;font-weight:700}.study_page .study_description{margin:1em 0}.study_page .study_description>div{margin:3px 0}.study_page .study_summary{margin:1em 0}.wysiwyg_content .center_focus_carousel_module{background-color:#e9eef4;width:100%;max-width:none;margin:3rem 0 3rem;padding:2.2rem 0px 2.5rem}@media (min-width: 600px), print{.wysiwyg_content .center_focus_carousel_module{padding:4rem 0 5rem}}.wysiwyg_content .center_focus_carousel_module .grid_layout{width:100%}@media (min-width: 600px), print{.wysiwyg_content .center_focus_carousel_module .grid_layout{width:calc(100% - 100px)}}.wysiwyg_content .center_focus_carousel_module .person{margin:0 3px}.wysiwyg_content .center_focus_carousel_module .person.slick-center{z-index:2}.wysiwyg_content .center_focus_carousel_module .person .image_container{opacity:.3;cursor:pointer}.wysiwyg_content .center_focus_carousel_module .person.slick-center .image_container{opacity:1;transform:scale(1.3);transition:transform .5s;cursor:default}.wysiwyg_content .center_focus_carousel_module .center_focus_carousel{margin-bottom:15px}.wysiwyg_content .center_focus_carousel_module .carousel_header{padding:0 8%}.wysiwyg_content .center_focus_carousel_module h2.module_title{margin:0}.wysiwyg_content .center_focus_carousel_module .slick-track{padding-top:15%;padding-bottom:15%}@media (min-width: 600px), print{.wysiwyg_content .center_focus_carousel_module .slick-track{padding-top:6%;padding-bottom:6%}}.wysiwyg_content .center_focus_carousel_module .caption_container{margin:auto;text-align:center;width:86%;max-width:568px}.wysiwyg_content .center_focus_carousel_module .caption_container .cf_slide_name{font-weight:700;font-size:1.4rem;margin-bottom:0.2rem}.wysiwyg_content .center_focus_carousel_module .caption_container .cf_slide_title{font-weight:500;font-size:1.15rem;color:#222222}.wysiwyg_content .center_focus_carousel_module .caption_container .cf_slide_descritpion{text-align:left}.wysiwyg_content .center_focus_carousel_module .caption_container .button{margin-bottom:0}.wysiwyg_content .center_focus_carousel_module .slick-prev,.wysiwyg_content .center_focus_carousel_module .slick-next{top:calc(50% - 32px)}.wysiwyg_content .center_focus_carousel_module .slick-prev:before,.wysiwyg_content .center_focus_carousel_module .slick-next:before{content:none}.wysiwyg_content .center_focus_carousel_module .slick-next{position:absolute;height:75px;width:44px;z-index:1;right:-50px}.wysiwyg_content .center_focus_carousel_module .slick-next:after{content:'';position:absolute;margin:auto;width:25px;height:25px;transform:rotate(45deg) skew(10deg, 10deg);top:0px;left:2%;right:20%;bottom:2px;border-top:3px solid #a1a3a3;border-right:3px solid #a1a3a3}.wysiwyg_content .center_focus_carousel_module .slick-next:hover:after{border-color:#bcb9b9}.wysiwyg_content .center_focus_carousel_module .slick-prev{position:absolute;height:75px;width:44px;z-index:1;left:-50px}.wysiwyg_content .center_focus_carousel_module .slick-prev:after{content:'';position:absolute;margin:auto;width:25px;height:25px;transform:rotate(45deg) skew(10deg, 
MGvSKgXyxUEAP3ADZUxhWAPXHixI/OnDlTW+9CMNrbt2//ND0nXXxYDNxN2NhIi7MD7hwxYsTfTp06VV3vppB6QbX3iVpDIR0G7gbsNmJyHJ+RkfFOaWlpVb0HUlhYiHqL08W3g4G7uF+ksELG0AS5AJNjvYdCEyksl6eo9bcKeGhAOg8hIRHC9EtLSkoasnr16inx8fEx3hhpVvc9NABhY/89itz069Kly+Avvvhiavfu3Tt6c6/i4uJy7ULVCQbuADaqCfWNiYm5Jjc39/7evXt7ffz5qlWrcDThceF9WiOBosOFrQyv8frQ0NCn161b9596E0I6/1Tnzp3/l+53LZuFTZ+LbyJKNmVQe2LFihV7zMCurq6uve222xZjwqXWjR2fprCho6+mNnPRokXbzMCGi//AAw+soHtNFuZgW3bt7W1tRPsGUnvk5Zdf/tYMbMRUnn76aRxYPw2Troi9hDBw+2AUwDwwZ86cXAAzA/yPf/wjDj96mNqV4oMMsXo+a5HAdcGoex988MGVUAVmYJMq+p7uNUOoJqioUH2/WiVwJT4C/Xr3hAkTPsIkZwb28uXLcdTVE9QGi8k31KhfrQ64Eh9BLdM7hg8fvsiTYJSRfPXVV/8HMxLmpDArwxz1qzUCR3wERxqMHThw4BvHjx+vMgN769atB6Ojo5+l+w3XGgoOhzvrV6sCrjWUzkN98azLL7/8T+R2nzQDe+/evcfI9X+e7pcpFhnCXfWr1QDXGuoTIiadSd7fvIKCglIzsAsLC08kJibOp/uN0hqOR4hwp1+tAriAja/7jTExMc9u3779vyZd9op+/fq9LrxIrFlGutuv1gBcjY/M+eabbw6YgV1eXn5m6NCh72IRWWs4H66NJ/0KduBqfOTJlStXmoqPVFVVnR05cmQO3Qun7V0mTMsQBn4BdmN8ZPHixabiIzU1NecmT568TMRHeolF5RBP+xWswGV8BO71w/PnzzcVHzl//nzdzJkzP6V7TaWWKjzUEG8GQjACV+Mj02bPnv252fjI3Llzv6J7PSjWJWM8gR3swKXLnkLtnmnTpi03Gx9ZuHDhJrHqPkjkEYaYUXXBBjxcmGnjxo4dm0OTnKn4yJIlS3AO+2NixcYuPsLAG94Yvu5DkpOT/0Aue6UZ2GvWrNlHZiTSG4aIBeUwX0zmwQYcS1l35uTk/NsM7Ly8vMLIyMhn6F7DjIJRDPzCG4Pu/h1SE7yFvXPnzsPkjc6l+9ykNZx8Fe5LczXYEoFsZ/kkJCR08ObJBQUF5ZmZmZ9WVlbup19/pvYrligDLa8mIDOvAlmsBI5jXqoOHTp0ypsn9+rVq9PatWtvJ5WSIlaD4kRiEAN3IMhuKl2/fn2BtzcYMGBAfG5u7hSaNJHTjWMiOxL0sIAizmYhOz7s+LBrz8ErDl55G55FNhSHZ61dgJjBCxC8xMaLyLyIzGkSQZUIdJwTgTjVjZM5OZmz5aYrz+Z0ZQsT8j/55BNOyOctJy18UxV2n/GmqgDcNoiAGW8b9O/GWK55Vd/MW79RoikrK4u3ftdzcYOWVRFIibsMR9xl27ZtJSZXjFDz6g6ueeUaOlz1TLju+fn5x7wFjspuVte8CrhEIJFtheJgu0tLS7egkltRUdFJb+6VlJTUSaiTKM68cg69Fjku1H4qLi7eNHr06JwjR45UelNkiFPd3BdARx3ZnXv27Nkwfvz4f5SVlXlUu6qkpIRrXnkwyqHQUaj3KKBv3rz56+zs7KUVFRU17t7js88+2y0+NK555WG+C5dCtQp4PRf75XLWzhoXbLe4YDsfSdB0QZuBW2lABA1wFgbOwBk4CwNn4CwMnIGzMHAGzsBZGDgDZ2HgDJyFgTNwBs7CwBk4CwNn4CwMnIEzcBYGzsBZGDgDZ2HgDJyBN9cL2eeHI2EeWz1kwrzcmIrdCDJhHsnzSJi3bEQEY0I+/sFuBOwARjExbAvB4RnYEBUqIGM79kGtYVtIqfi/+mABbvWmKoxqbGQagY1N2OCEjU7Y8ISNT9gAhY1Q2BClNVTXxAapdpoXZaq52G+DGsEWvRHYsoete4629WHLH7b+0bXZWsNWwOjmgh7MwAEtHZtRMardKSKDTa4C+hWaBxWTGXjDG0NNkomkRja4W14DNQtRLYKedye1npqbFZMZeMMbwyidXlhY+KsnNU1KS0urMjIy3qHnjheTbBtfQg9m4Dgy4CmaID0uhYfJlCbSBVpDjSpsdo0MVOD+qCbhse0VHx8fs3r16ilJSUk4RiaNWhcyMyMC0fGxEjjs6SocfufNk7t3797xiy++mNqlSxeUM0XNQT6pyoXAgzy+atWqPd7eoHfv3hfl5ubeHxMTcw392pdabMBBt9gsvBal61DCzkw1NpTQQyk9zUHFZJ40Lzg+KMo45rbbbltstmIyikVqDRWTMzQ+qcrhG5NlqiejHKkPKiZvo3vN1HQVkxm4g5OqfFExGQWA6V6PaA0FgdtrfFJV855UhQ8MHxzd6wFNqZjMwJtaR40nVaGYutmKySjqTve619NgV2sBLqFjsoNd/cTy5ct3m4GOSRjHF9C97hbzBJ9UZSB2FZNxQIYZ6Ah24aAOraGAbw934i6tDTiksWIyjoDBUTBmoOMoGhxJQ/cbK1aV+KQqg2Utu4rJOPTIDHQcuoTDl+h+WVrDYUx8UpX+TWoNJ1XhGK9RONYLx3uZgV5QUFCKY8a0hpOqEIvnk6r0b1IsMGPNcwwOsENY1gx0HKSHwu50vxs1PqnK+E2KiS4Zi8g4qhFHNpqBjiMjcXSkxidVOQQuT6rCIaR34VBSHE5qBjoOR6V7PWkUd2n1wHUVk5EmMRnH7+IYXjPQcQywUdyFgTetmIyDpafioGkcOG0G+vz58/mkKjcqJseI9dAHcaQ6n1TV/CdVhYgcxEFY9V+4cOEms3GXadOmLad73UMtRYYAGLjxSVU45OixJUuW/GAGOk3CtWPHjs2he40TZmg4AzdeMYqjNoTMvKfWrFmzz2QIoDI5OfkPuJ9QWwzcyfFgwyIjI5/Jy8srNAM9Jyfn3yKrqxsDd35SFVKcbyIvcu7OnTsPm4i5IGXjd0KX80lVDj4geVLVz5WVlfszMzM/LSgo8CrXJSEhoYMwPdsGY14KSyACF4k/mDyvIJWSsnbt2tt79erVyZt7HTp0CFtbcBBHNQM3hh0m3PI+NGmm5+bmThkwYEC8t/dbv359gdjWwidVsVnIjg+79uzac/CKw7OqIMuLw7PWLkDM4AUIXmLjRWROk+A0iYBIBDrOiUCc6sbJnK05XXk2pytbmJD/ySefcEI+bznhTVW8qcrMtkEEknjbYPPXvOKNsRYBb9z6nZWVtRgllszA5q3fXNygRda8umPhwoUbzMDetm1biYiPDHcUH2HgSs0rdyq6OZL8/PxjcP1FfORis7D9AdzKNAksKEQnJSV5lUNSVFR0EhXeSktLt9Cvu5F9JbKwOBHIVXqJp084cuRI5ejRo3OKi4s30a8/IZeEYNdy5pVzsdW8Kikp8SgPsKys7Mz48eP/sWfPng30605qx
6gFJGyrgSO76dhnn3222+0nVFTUZGdnL928efPXAvZRarbltoBNLrTYLORSqBY7Plzs1w+ufasuZ80F25su9zUvB4uBqys9fCSBhcBbrAERNMBZGDgDZ+AsDJyBszBwBs7CwBk4A2dh4AychYEzcBYGzsAZOAsDZ+AsDJyBszBwBs7AWRg4A2dh4AychYEzcAbOwsAZOAsDZ+AsDJyBM3AWBs7AWRg4A2dh4AycgbMwcAbu+QuEhIwWP/bz8KkvO/ujq37T687x8PV2i/uubk4efESvxRJuwWv0EyNnvpvfiGd8+eJevC6PcB7hzSu7W9h9eITzCPfOennGmc53ZdW4WyXO0eu4q9t5hPMIt8aKsOCbxSM8GIWBM3DW4ZbqTrouyx0rxV07nOaIVS1Jl/MID8IR7ijqN8eZHU4js63Jb1Q/F9bRyzzCedJkYeAMnIWBM3AWBs7AGThLgHqaXuWHuPIU3b2PE090jj88UB7hQTjC5ciZrxthjtYaV7Wk/vAI50mThYGzDndqLTzDI5wl+Ea4o3wUZcT7JD/cXxlWPMIZOANnscJIaIF7fGQMZLVJHe7V6/IeHx7hLAycgbMwcAbOwFkYOANnYeAMnIWBM3AGzsLAGTgLA2fgLAycgTNwFgbOwFkYOANnYeAMnIEzcAbOwFkYOANnYeAMnIWBWyz/L8AAHWgCuybDs4EAAAAASUVORK5CYII=);
background-size: 46px auto;
}
}
/* Dark theme */
.fancybox-dark a.fancybox-close, .fancybox-dark a.fancybox-expand, .fancybox-dark a.fancybox-nav span {
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC4AAADICAYAAACXpNOoAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo1OTJGQjgwRDZBNEQxMUUyOEJDREM1NUU4QUUxNjBFMCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo1OTJGQjgwQzZBNEQxMUUyOEJDREM1NUU4QUUxNjBFMCIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOkU2OUM1RDBBNEI2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+YnXBBgAAC/pJREFUeNrsXGtsFccVHhvbGGxT1BC1qFBT7DpVZRErpdQ8hBRbIJpEgSqqnaaoP6pKDjSOBEi1eTQgqMBYPAK1UahQfjkNjyJERIpAUP9AFFLHpSCkNLXNq45QBakKfvA2Pd9o53Y8zOzO7t17by1mpKPZuzs7883ZM7Nnznx3sx4/fsxGYspmIzQ54A64ZcpJtoLi4mKeZ2VlDcvVJCYBNb927VpmgAOoCloFD5A4p8szpvHs7GyAKKTDlyl/kfIKkm+RfMUrcovOX6b8bwS0nfKPKe9PdhrOSqaCkpKSUgLVQIc/obzAZCay5kkG6PBDyjeTdF+6dCl9wAlwPmXrCOgykjydufjZuCf3Sd6lU2t7enruphw4gS4hcH8gqRCAdaDFsVy/BjwEJvRjAt+dMuCE+QUC9EeSr8mgI2pcln+RvETg/xo7cM+e/0zyrDcgrUEHgR8aGkJ+g2SWreatgBPoMQTqDMnzOtBJmooM/gJJJYG/E9ebEwPxeR3gmGUa2opF495g/Iw0nYvKTRqHxh49esQePnzIcnJyWF5eHr8f5x88eMCv4d7Ro0f7aRz5A8q/G2QyORZvxkaSXD9N3b17l927dy9hIgAKQQdwTTYTnB8zZgzvhPoG9vJcyhpJfhFZ46TsIqroOjVSIDStalyADhqYsp2PGjWKFRYWmjQOGaDjiaT1vqg2/pL8RlTBoREZ9KJFi9i6deueKDd//nzW1NSU6DTMRjwhjcYhBWg78uCkCqp1DpQQABAJoJctW8bmzp3LwcNMoNkFCxawlStXspkzZ7KNGzcO67RpVvLOVSdj49P8vD4MRHGuoqKCd2RwcJDNmjWLrVmzhnV0dLAVK1awO3fucLtub29P1INOqRpXvMZpkW28tLT0Bj3eCSb7xkADKNEwNA3QMAOUhUCz+fn5bMuWLezo0aOJQVpUVJQAabDzm93d3c9GtfFxqg3K5oIpDyYhBt2GDRvY6dOnOeD79+/zgYvj7du3s5MnTyZmEnREVoKuDbntlCzdBHABHuYhT3XQ3owZM4adg4mles15W31dy69sgBLzNNK8efO4TQvzwBOB3VdWVrJVq1YlNArgeCKiHl0bcttRgPf4XYSNy1NeQ0MD7whA79q1i3V2dvJj2Pzs2bP5gA2h9Z5kgF/QLXBFgjbFuaqqKm4OmD0wEI8dO8anv7Nnz7KxY8fyWeT8+fOJemQTM7RxIZk3Zy2B2WuaVcQAFNcAFFPe8ePHEzaNa9A0QB86dCgBzmJWeZ3enPtS9srv7+/nmg/zyoejBRPyAT1Ix1+P/Mr3btxrWLVwgWnADNSBBmDqjGMCrQoW036grbxDqqiJsp9RnquzczwBOEwYgNAWOpGbm5twYWFOwtXFNZz3A00Ct3Zz0gsJ+MVU0faAxvjUB+0jl80GncB55KpNG+RdarMrthWQt6xiKZYLsa2ARvRieUSHJxTNH8TCNqZVPkzwtbABodBOljdYf0CPtxkLW2/eHTYXS/OxaZ5m3r3NqCss6DiCnt8mzf6KDt+gfKxl0BMO/O+9oGdXWoOecpo6daoIM7+ihJnHe0X+QyKHmY+IMDPkypUrmQE+ZcqUoAWB1iUWx1evXs0McLd55YA74A64A+6AO+AjISW9lz958mSr1Y/Jb+nt7c0McNXBsikfh38UB/Bsku/TIVza6XRcRvk3SAq8IiAdfEFg/0H5pyRwbTsQyA3T4di8Q3JnJ1H2S2r8p7CYMItlSv+k4w8obyV/vDctwAnwM5StJ5DYzsuzXeEbgptgUOyh/B3qwJcpA06gawhgKx1O8Fs8BC0kNIuKm5S/ReD3xQq8uLg4h8D8luTNICpTUHhCo3V5xf8eST2tih4mDZxAYxG8j+QVFXQQFyvATJ5YxnmCNWktgR+MDNzTNGIor+piKCkCDvkIsRY/zQdt0LaooNMkaLMlksZpINbSzXv9Qm0p1LiQN2jAfmgNHFMegfg7yQQ/0HEB9wH/b5LvEPgbtqbyGzHlmV7xuk6EEV1dajuUvkqywUrjpO1v0k3dQRwVk7nYzuMWZiLvUJSS1q8FaXwJSW4Ybek6gj3QgYEBdvv2bZ6LrRabupTruR4ms8Zp+sumdJVumBT2kcvaBjFB5aOIJLYJLTQtSy91upimxyGtxj0vb5LOnnWA1YEJgXaxYWWyX3EtTBvA5GEzmkqVrY+tNo69Tux5ylvdCxcuZKdOnWIHDhwYto8f5B4YfP0X/fz
x76mV2ZgIwMA8YMMiLV26lNXW1vJreAq6wSyINUG+jVd2uhE4FXjOpFVTAlhoWpTF3ia2wOfMmcOvnTt3jjMnEg3m5FitgtQyKjZV4xPDPEbVZseNG8eam5tZWVkZP3fixAm2devWBMsC59Ax22Wccn2iH/CisCsR2Wb37NnDxo8fz4HCrvfv35+gOCGBRSF2liOkorSFJ3T0pVTFVfpCr7YlokFdXR27fPky3/5evHgxq6+vTzxuMevIAzhk6vMDfl03qv1GPfbuxfGtW7c4tQnkGpwD+Wb9+vW8I+Ie+cVk24aK7QngVPBznxuN5gD2hKDqgXe4du1advDgQX4OfMTW1tZhY8KmXrWMik3VeKfmBhOnJJHDXAR9SbCXW1pa2LZt2/i1goICrWMVVL/SiU+Nvgp5hpUewZ35Ua79vEO8bFSimNyGSrTxAy/vTIMQTx7iJyZT+QtlvUHa9nNToV1h9zrtyWPCtg1got8dRlOB90UF2mwGjU5j4hgahemItyRMCJrGWNB5hhZttcmeoWkhAU+sR/5/T4YXEoh2laihuideQCiAsJjN4NENNt09UUBL197XxRdNb853SL7UPUaLBa6VBJmLdw7xxDXWcRUEIBHLs2kwDvGp821TMNQ3kkX2/h7Za12G4iq/I9B1kQL76LHnTr4ah2MUMgRXH0fQE3/K+2GaNI7/e9YEBT0D3VpUQBUtJNkdZJPKm87qmlIX2vhREOgogf3XESdPUWD/bVOcMK6tlAkIixG4n8e0lfI+5b8m0DfTuXn1FgFe7O2yhdm8wi5cG+Utadu80gxesV1Y5YU3yrygUqFXpN9zkrBd2EnyJ89hGvLGUGb2OQkEAHziSahFQjJKcyw4B9wBd8Ad8HjmcccQygBwxxBKKXDHEGKOIeS7WHYMoRDhCccQcgwhxxBStP30MYTEPiY2YgUzCDtwtsyijDCEEBvs69NvTIudNpPWM8oQAovCdB+eAjZuTWaVUYaQ+LoNBKwJMIPAEJKv44nIe/nJMIRU4JEYQrwi6fspYpN2+fLlbMmS/5knQOOa/MkHC22L39ONwKMwhEQZmUUBWhOYQQBaU1PDv3QjCDaCURTEotCAf85P46EZQuJYfAFBfAVk9erV/PM7+I1vq+zYsYMziMR9Nt/WUq5P9AMemiEkmwo4hYIRBNm5cydra2vjUyL+2wwGkUjyp6osU+oZQjKJIV3hCUzEz0SpSJ7HBesCDKHq6mp+raurizU2NibKq9/Jskh9fsCvC+B+nED1Oo7Fx4yQYO9gBoFkg3NnzpxhmzZt4nO5KCO+wmcbg2EKQyhHKfg5gSnXgQvqgMxRAckGf3uH5sGG2717N7dpaB7nbNhwQQwhVeNgCL0mgzVVrjI15ekNnBVMj/gI0uHDh4cNYBV0VIaQCrxdBhvGVABUEMUwd6tlQShDh3y+i6VtQyrTbpxVkmEIiS9K6u6DPUPTNhSRtDOEoH3M4wCJGQO/BclMeIa2lA9NW44hpG3c9PijgJauOYaQYwg5hpBP0NMxhBxDyDGEmGMIpWef0zGEwkYV2AhNDrgD7oA74A64A+6AO+AOuAPugDvgTw/w0ItlWsW/TFm54fJmQzhC3NtguO8iQm9hV+lhQ8INjzUJ54Pq9rs3LI44TeViTGVSYyrKo2+UfparJmRgtzVKT6QpI8CTaVzptJtVQmsLbGef2UaePY7EofUowOUpT55JyglUfkCHyxUz2/zUmYoD7oCncHAa/Q118AW9OWXfJexAjTQdipeO8hY8Ekc9zlRS/epOxtd46t3a/3tTuWgwj4spvne4ibrtQgfcAXfAHXAH3AF3wB1wB9wBd8DDpv8KMABmoXlBk8maWwAAAABJRU5ErkJggg==);
}
.fancybox-dark-skin {
background: #2A2A2A;
border-color: #2A2A2A;
color: #fff;
border-radius: 4px;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.3) inset !important;
}
.fancybox-dark-overlay {
background: #000;
opacity: 0.8;
filter: alpha(opacity=80);
}
@media only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min--moz-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2 / 1), only screen and (min-device-pixel-ratio: 2), only screen and (min-resolution: 192dpi), only screen and (min-resolution: 2dppx) {
.fancybox-dark a.fancybox-close, .fancybox-dark a.fancybox-expand, .fancybox-dark a.fancybox-nav span {
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFwAAAGQCAYAAAAjsgcjAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDoyMzAwM0E4MDZBNEQxMUUyQUMyMDg1MkQ4RkQxRDJCNCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDoyMzAwM0E3RjZBNEQxMUUyQUMyMDg1MkQ4RkQxRDJCNCIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOkU4OUM1RDBBNEI2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+WJRMjgAAI75JREFUeNrsXQuwFsWV7ksIIk9hIRDChitceaiJbtwUEmJZywYlsoaquKGI0VoQNJaICioXtYjysPReFYgiKcUHGzaa0jyKQJSQWqxUCErlsZZReV0exiKKugS8gMQkuueb2/2n7zDTfbqn5/8vbp+qrp5//nl0f/PN6XO6e/rUffjhhyJK9aRThCACHgGPEgGPgEeJgEfAo0TAI+AR8CgR8BNZOhe9wKc+9akkr6ura7df/83Z5vzmSrp/yPT7tddeO7EA54Id4gGEEICtrluLjrvOZYFtA5UBOl6dMZRG0b6RlA+nNJBSb0o95TGtlA5RepPSDgJwG+Vb6fjnKd+XBXDW9gnHcBdQLfvOoXQp/b6I8pEMpveUaTClf079t41AfYbOfYK2f5vH8hMScBOIjP8A2DdpexrlpwdULSNlmkPpVQL5cbrOQ/KtqJlK6VQNsJFnpFPor9sp30vpHkqn5xzX7vo+/8tr4x575T1PSZ97wpmFJrBTOTamUraT0h2U+mYBaACPlXKu0Vfec6csQ90JCbgDsxso/ZISXu9+riCnmcxgdt61+8ky/HIYyQmpUvLA1vZ9jXI0XmNtx9oeAEfFmLa1HGX5HWH+tROe4SkVci9lT1Hq5fD6sxnMOd5wHZTpKQL9Xkp1JyTDte0ulP0X5Tdy1Q53n+95hjLciLIS6F1OKIanwF5D+aUcq8WmTtL3y9vnop4yynQpylwm6J1KAhsbj1E2gcu0WujwnHMmoOxlqZegDNe276L8G4xGywqQicW24wrc+xu0eXeHt8Nlof+d8kYbY20Vt6kVmzrhgG15I+aWYb10Cgg0ZChtPxJKVRS1UoqqHkorCfShHZXhkP+UvXlOTDOpAhdn0AQmV6WkytQbdQqpz0My/D8ofbFohTkWCEeXu+pxwzlfRDdAR2M4OoOabazm9iaGYrhvT2ZGOZqI5ad0JLPwekr9bawuYqEUaVtc7p3TDvSXdewQDO9BBbuOYyG4MLyoSgnMcKTriOU9OwLgV1Pqmwcah2V5YL///vvi6NGj4s9//rO1EBhMwHHqeH1wwQZ6VnkyfqOO3yz85hUZ9UDrTbKd0mlc+9bGNAX0e++9154ZnTqJ7t27i4997GOZDwbH63XB8SeffLLo3LmzSI/wmPL0dup3C6Xhu3bt+rBWDMcg72k2dhcFG/LBBx+II0eOiL/97W+Zb0GaODge+//61786lcHC8gZZ55qplK9ZHKFc15sD9he+8AWxdOlS8fGPf7wd6Mhx7l/+8pcEVCWnn366+M53viN69OhRYaUNdBNBctqFyTUDXHVOcQpvahTzwF64cKE466yzxH333dcO9MOHDyfHA3wdbDyc4cOHi/vvv78d6Gl1Y3v4ln6cCTUBnPQ3piaMtHWZctmdBvuOO+4Qx44dE62trWLkyJHHgZ4Ge8mSJQmT8TAGDRrUDnQcjwfkyvKceo2guv9jLRh+rs0Mc2F3pVEYMyZhNvYBQMVQgA5Qu3TpklwLjSJyBTZ0O1SMslaGDBki7rnnnsp107qfU0ZDmcfUAvCzQ7m7uo6dPHlyAqYOkAJ9xIgRCdO7du2aWCtnnnlmO7CVwDLBNR944IHKvizrpoB8tuqA09MfxdHfHH2pm25z584Vr7zySgKqfpwOelNTU6Lbm5ubjwMbagfgzp49W7z66quVe5x00klO7YpFj4+qBcOHcAclbAIwwGoIwJszZ47YsWNHLugNDQ1i8eLFuWDjfAU2BPa4a59MkbqXBfgnGSYUG3w4NVmgp8EC6NDvaFB1VaTAvvnmm8XWrVsr+7t165bo/SIgZ9SvXy0A7xlSKQKsNOg33nij2L59u5WhUEk4f968ee3AxvU4YPv0H3UIwItOwgRoMOV00KHTt23bVjEJ8wC//fbbk+OKgs2sQ89aAF6KAHQwWlUWauO73/1u5SFkCezsyy+/PHko6vWHrjeZgrWSIoC3ZvXYmX5zRPcgcT7s70WLFrXT12nBm3DaaaeJu+++O2E7zlMeqSvozDq0dgjAi0raXR81alTivAA8E+AABY0orBccn/ZIS2D64VoA/oZeYRObOUxXvX5pdz3LqYG5qDzONOhguqkbwKeMGfV7pxaAv+Zbgaxj9b6UPLABIgBHP8vu3bsz7XSArjxSHXTOIIZD2V+rOuBUqK15zM4rcLrDX23rIzSq1y/PqWlsbBQvvviiuPXWW0VLS0uuc3TGGWcknqjefZC+L6esWUx
H3WvB8JdCKURdx1533XWVvpA02DfddFPi9uM/gIrfO3fuPA501W/y9NNPt1NFAeWlWgD+vEsrb3pN9Y4lgIiPVVXfh+6uA2w8HJWgPtIeKYDFufPnzxebN2+uXNdkk+eV0VDmF6oO+K5du16nbLup4TSpG/0/gKHsbJhyYPkf//jHxAkCgKpvRN0D7rru5uN/OD09e/ZM2A4nSAdb2fUcEmTVIbVvB9X9DzVxfKgg6zl63MZyMFN33xXoYK7e66d7kGmPFN0A0O3f+ta3jgPbld0W/f1sEcyKjtrjG5lNpq8O0p0/pgFc6OasAeG8jiioFTXGmSVZYBcYsVfpi8TwX9XKtQeVWjhemm2KgmrYAGrajcdDyer1Ux1eWYMLrmAzPc0WWeeauPbQ41SeDx9Kq5Ws31y7F6BDXUAXo8FE3qtXr1y1ALChuwEwjkfeu3fvXLA5ZcirD+paZE5KqM4rfE79J46VwmWWGqEBq9MjNXmijs96MLY3jWml/EnWVdQa8FYq2LdNLOeoFpsTwkkc5trubWD3/cTu1poDLgv3bUpvm9SJieFZDVSIcvncO0etoG7317p7VpeDVLjGkAy3Mdf2JgRmeCOx+0CHAFwr4CpKvzKx1ZfhPirFheGWc34l6yY6EsNROAg+Ozlkq4hLhYsw3OeBp8p0CHUqapmUxfDEUqTtK23MtTHU0HCxGM49n1HGKwnsXSF7vUIyXBX2acrv4XpwTAuBNUDAsZRs3qS2fS9h/bQILJ1CgZ3aRgP6PRcdyrFUuOagq/7OOP57tDlXlCCdQ4ANx0QVWG5/SPkVlP0D7Zqg/687NznnHred1dll8x45lkoO8Otp84qQers0hqcqgSmxkyh/wvQ6c/W8pcvU+Xo5ZcIKcJMI7PdFSdIpJNg5oF9G+X0cXeqyz/c8QxnuQ1nLBDuISkmrlRz1chNlW2jXI5R6ZamILPXgu4Kmo+PzLuUzymggq8bwHB0J6+Uc5RxxGkyu+edq7mk5yvK5aoFdig636Gp8dnceJTSo7/ioDR87PeOYd2QZzgttZ1fVDmfm2HicstMoLaB0wEdn++h0ea+FuLcsQ9WX5gymUhyZjnSQ/rqD8npKN+vzXHxUiOUhbZX3qKfDbse9Q/VKukpd0ZsOHjy4XQOnb3su9ns5/Z4o2j5CPe4c7sMngfr6KeWrhVzsN6v/Zs+ePVUFvHNolWJzaCyWyW+xmCTtv0H8fTnrMyiNEH9fzroPJcyTwLSsw3IkBgnLWWPaxiuibc7MPo7FUm0J5mlaPM/j1u1meJf76PcPKP9BEbOwo4FeF8M7nmCNZpQIeAQ8SgQ8Ah4BjxIBj4BHiYBHwKNEwCPgEfAoEfAIeJQIeAQ8SgQ8Ah4BjxIBj4BHiYBHwKNkSOGZV1iNHmJbQpS77HXoQNzcRdBsqxnlzdh64403qgt4GiRX4G0Auz6ArDmM6f/TU/P040y/8/6rKuBcsB0Zj+Wi/0m0TeIcKXNM7sRXcZjIidWN8S0OJnMelPlblBBTCCv+YlLn/wi5oGNWECUOsDagfd7G4Aw3AW0AGSD+K6VxlP6F0mfr7LXBwih9hRYli+RLGnOBCpa7e44utZHy/6Z0NAusrMmmLg+lQzCcATQa7PGiLXTixfS7e0iVIh/YWTLdQPuxHupa2r2K8p/T7w+yWG8DvijoQUP0crYpIVge5n/vonw9pSk62JwYmS7HaPu7y3utl/e+QZbFmzw+KqXMMOvpwmHxwJsp7aHtpZTqs0DiAst9EDng16MMKAulubJsxjoUBToY4MxApFNoc5sMctrPBjIXUG70b8Pyfv0QnFSWbQo38GkR0MtmOKKBPEv5k5Q+nccgrlrglINzzYxyfBplpE2ouCFFQlLWkuGXwUrQY5dx4x67MtsW/dsWL1nbf6G0bC7jxuCsKcPldlfKHqW0mrZ72XQhB6QQQJviJafKhzLjy7fHZF1yiVVTHS63P0HZRizd4RDK3NpoFlEpnMDUOWWdJuvyCRPTa2kWnkrZZsrHcFid9xDKaDS59844Zoys06mhmB5KpSA2GRb9HcYxEzkMDNlo2hYftjSSw2TdRoUAPYRKwRfDGygfxDUTTV0CZTSavp6xlg+SdXT+Ojoo4PX19f2l2TfYBjZXlXDCprsc46JaLKAPlnXtXxOzkMDuIvsmGjhgcxosjivP0e8mE9DHudHyBlnnk2phpaygG492KKyV+RyGu+pyhu3Ncm60HHV+sKoqhdgNp2a6C9guFfexUFwsFdt+Rn2mp52j0gAnsIekn7BPwTmNJ+cBcNjt8tAd3PgHhUcgUx+GP6R7kCY3n6MjOVaF7wOwXd8V9NR+YPBQqYATu6fI/gZrEA3X3jauHvdhOMc05ICekV9ImHzdqX3hjlrQhTEMtjWv16+IpcIYgnMe8cnbx11iL2tfzipxiOkzau/evUdDM/xaHexQfeVcJ8jVQinq7DjUDZhcG5ThxO4ecqSmn8PrFnzYKu+/ImvQurDawPb/BUzE8sOhGD5NjdRwGksXNnFUissQm6U304kIDnXD9I1pQRhO7MZDaUGPmS+7fYarivY7+4Qr82G3xvK9tDmMWP5BUYaPN3VPlmGtcPtROMeWZJ1knVtP2QUhVMpUl0EIF8ZyzwulUkKXzQsrk0qRpuBbck6HsffPwz32rqAKuYvgdwiYh9htCCmGKFUqUCl3cqavasnZh8lGA0itHPFl+AXpSTpcS8LF5OKCjf0I5Yjwj4ijCcCxDw8Av7FfxT52uaaPiZhzHrAaX0SlTPCxtX1tcZtaAZhZ4R/1tw2he1UsZR+ryaeOqf8mFAF8nI/e49rSHLWiA6lHBh8+fLhYtWqV2LRpk1i7dq04//zzK8eD7S5uvSvIlnqO89LhpL8xWr0/a/TEVYf7eppKEKRUjwA+ZswYsXDhwkR/IyHEI8JAjhs3rnIuQkIWde89dLj6DT3+livDPxf6awRXsKGjW1tb24E9adIkceeddyYsRoL+xvGHDh0q7N4HrOM5ef+bpiuf7eJih5ooo85X4Xr18LvXXHONmDx5ctI4qgYTQUuxjXj3lUpJS6XIPG79fIfVoZVgivSzroA3FAHQR60oAaOhRnQAEVx67NixCeMVqxG09MCBA0nUb/1bG+w3AZi3bQLY5QEJbe1zF8BP5aiEkK+kavDAbCUIv9vc3CwaGhoqjIfdDZ3d0tIi5s6dmzBeVRZRY/UYyUVZbmN8zvWH+gA+qCw9lycw+3RLZMCAAWLZsmWib9++yX5UDGAD1Oeff14sXry4YndDEEtZqZNqSwr4T/oA3resgmVtQw/rYI8YMULce++9CYAAFZVRUb7XrFkjHn744QrbIXgIeV6mT/ixgtLXB/CTi4DpqnZ0po4ePVosWrQoaTiVTQ0wEdF75cqVYt26dQnQeABKl2eFWg/RQHo+lG4+gPes5isJcJUosHWPEW/AXXfdJX7961+3i3uP/hP9dweRHj6e5vuiA0q12pOyxAR4azULojd28+fPT37DxlY6GSrj1ltvFRdddFE7z1B1YnUwOVwVwG1doCbRbe
ctW7aIWbNmJUBCZYDVUDGwzWfMmCGuuOKK5JpoNNV+XSX5SF5ZPRvVox2G4Xn9GGAwGkUl27dvT8A9ePBg8jCUHofNPXHiRHHLLbckuhugI2E/dL7vNInAcsAH8LerULDjWA7zTsn+/fvFVVddJXbv3p08DAUwHKDPf/7zidmoH6/6yWshKYze9AF8D4cpIR8GrgUVAlu78pqRKw/1AkcH4CrQYbMPHTo0sccHDhxYKQcehq7TQ5eP+Zbs8gG8pUiBOUNYeddHYwmvUZl70M/oS/nhD3+YuPpQPzgPtvspp5yS2OboH8+y6V1US1ESaee1+AD+IrdxDMl4dT6sFDBdd2hWrFghli5dWvEqlZWCY5qamo6z6UOVhbvIjSYv+QD+u9C6m8ty9RtAgtHKPITArb/tttsS1aOcHhzfu3fvdufXKh6bvN5vnAGXIxYtHP2VV2gX1psAAtN1sxH6fObMmUmj2qdPn0RnL1iwoJ3F42raFalLat/OvNEem2sPwcIuDelli9RN8vZl/Zd1Xl6kwnRFsE9ZKaqDa8eOHWLq1KmZFQfzOQ+SMyfRQ31u9PU0Ic+WwQyOOknvQ1JmY9aDVufhwegeqk+j6fOmav+tN3rUFjyxcs4RNTfFhcF5MTSzzrMxXd8PMKEyik4EKqIqDedhmOrn3gyXM4jWcVtqLlu55+ZVGACDyTAdYRZihF63XFxnXYWqE7AyzbriqBTIqiINnss+kzqxsS3vmDLL5oMVB/ANciqu0Umw5VxPzQSuLeq37TplTVeW268Bq8KAy/nOS3z0oktFXYC3vW22a3EfvGPdltrmhnMZDnlcflbhpBM5VoHNTAupUrJmTHGBt9QN2DzKAZIFuPx2pTmUSrFtl6FSyvq+R0oz5/seF4ZDlstP5Nivui/oZaiUora4oW7AZDm3fGzA5XeI8zg9gi76Mv2a5wFvUhl5/5uuXbQB1eQW7jeargwH6E/SDX9WlnXC1cnc/4s04sx8A2HyhAuGPvMLrqYbvctpYFytkzIZ7mqt2BpLicHVruA5A05PFDb5tT6d+VzrJDTDXa0VZtuDr4/3lA64BH21MoN8Cs61xU2WCddiMbHahyhSHqO0utqrK8+kG25xAZ1bcV8LxabHuQSw1GOLrLtX2bwBJ5Zj4PBiunFLtSwU7htQoqXSIut8zBe3QpPyCHRMpfgyFWCfi962sc/F+eEwnHM/Btj7ZF0LTR8pPAtSPvXxlL8e2tlx1d9cq8fD43xd1rGlCNhBAJc330rZWMpf5bDaxmgOi13Yb2K8je2yTmNlHb280uCAayw4j/LNLo1USEuFY6G4NOKyLuept7co2KFUir6NOXXjKF/OtVJMDAzBcJvnafjmcrmsywGuU1RNlaJvw3qZRemrtH2Qw3aXfS66m7Mvo3wo8yWog6xLEGYHZ3hGRX5M6WzV92Jie1GgXYC3EABlRcCmH7n4FDVjeMarjGGnCZRfqnft2kzA0I0mo+H8A8ooy7rX9ol3rc1CjnODgBYjKW+k9I5Nb4duNPPugbJQmifL9qSPU9TRGK4f854cNRpK27PTA9OmRtNliI3ZaILFs0XbB6xNsmzOTlHNAHccPmultIx+DqMcr/D35QQaJ7Xgqo5wD3mvCfLey2RZvJwiX+A7hwCaG7MsNbsKI9w/o31oqLrRbwS/Q0Lwu8+o4HcF5mrjxN9Tek60Bb5rF/zOlSyhWN45JLtdgsWljsUQFRZCXyv/7k/7YS1gln06vGN3mSBHZMKoOfo6ENYR4R13iLbwjm/ngeMCZqAPrcphuClsYt5DyagIgMKkmg0h1p51Aa/Ig+BIXbU+mIoSuNGMEgGPgEeJgEfAI+BRIuAR8CgR8Ah4lAh4BDwCHiUCHgGPEgHvwFJ4xGfQoLZFmH3CxJQRoSotob6tzxv50dctrwrgaZBcgbcBXHSIzbawjm3ozzQYXpNRe9dAoUx2I9AeBpExeJweRMZChVhMFwsU4uvfgzLHskfb6RoYRMZgMgaR30kD4wKsDWiftzE4w01AG0AGiJgigTAlmCbx2Tp7bbDOUl/Rfq3uL2nMBSpYXe05uhSWRWo3TcI02O36UDoEwxlAo8FGVKeplC7WI2KFUCnygZ0l0w1yshGmYqwSbSsdfZDFehvwRUEPZqVwgafUg9IN9HMX5espTUmHH7MFlXY5RtvfXd5rvbz3DbIs3uSpSph1E9jpYEqpwmEl35tFW+TZpTIc4nEgcYHlPogc8OtRBtH2YetcWTZjHULF/ukUEmwDq6fQ5jbKm1XkWQ4wNkBdI4Bn/EbI4SZZtikusdp8QS+b4UNo81nKn8yLFu4Ty57LdA74cvvTKCNtQsUN4YQ/q5lKMRTkMlgJtD3B9GDyAHFhtkFvG++VUe4LpWVzmS0WaIdguNzuKtq+w19N271supADUgig8/ZllA9lxloCj8m65BKrpjpcbiNC4UbKr7CxwgRGSJViAtqkNihNk3X5hInptTQLEUpsM+VjOKzOewhlNJrce2ccM0bW6dRQTA+lUkZRtonyYRwzkcPAkI2mpcG0xQgdJus2KgToIVQKIuthHvcgrplo6hIoo9H09Yy1fJCsY0NNzcL6+vr+0uwbbAObq0q48eq5x7ioFgvog2Vd+9fELCSwu8i+iQYO2JwGi+PKc/S7yQT0cW60vEHW+aRaWCkr6MajHQprZT6H4a66nGF7s5wbLUedH6yqSiF2w6mZ7gK2S8V9LBQXS8W2n1Gf6WnnqDTACewh6SfsU3BO48l5ABx2uzx0Bzf+QUpDqsHwh3QP0uTmc3Qkx6rwfQC267uCntoPDB4qFXBi9xTZ3+AVy95lOC4kwzmmIQf0jPxCwuTrTu0Ld9SCLoxhsK15vX5FLBXGEJzziE/evlDrz6YW+x3FXX/WheHX6mCH6ivnOkGuFkpRZ8ehbsDk2qAMJ3b3kCM1/Rxet+DDVqbIKhzGl7WGuFywvZ6zhjiX4dPUSA2nsXRhE0eluAyxWXoznYjgUDdM35gWhOHEbjyUFvSY+bLbZ7iqaL8zV5eHWCFfsnwvbQ6zxYHgMHy8qXuyDGuF24/CObYk6yTr3HrKLgihUqa6DEK4MJZ7XiiVErpsXliZVIo0Bd+SczqMvX8e7jGrggj9hZiZyBEpFtGoEBoMcdlM8exdo1W5qpacfZhsNMAUrcrG8AvSk3S4loSLyZUHNuKtqTjHAFsFosZvFXTa9Zohyms4D1iNL6JSJvjY2r62uP4bYKpQjlnqAoxC/GM8FB914kMIpsk6oQjg43z0HteWNgECFisgzz//fLF27VqxadMmsWrVqnbxj/FQTA/GZUYvty6Weo7z0uGkvzFavT9r9MRVh/s4PO+++25FP27cuFEcO3YsUSfQ4UgISo3IsUoQ9lGPFh5oGWtXHa5+D8iLGmti+OdCf43g69YfOnQo+Q09DuYj3XnnnWLSpEmVY6CCEJJdhVgvMoU6QB3P8VEpZ7u42KEmyqjzVSBSSGNjYwIkApTif1gs0N+zZ88W11xzTeU4HHPkyJFK1O9QZfFwzM7yAbyhSKF91Ioue
tBpxD++8sorxcGDB5P9yloBoy+55BKxcOHCygPCW6AsGNc3rChptPMafAA/lVPgkK+kfi3Y2ogCq/a/+eab4qqrrhK7d++uBKQGuGD6mDFjxAMPPJCEZFcCpquGN3T5GIQZ6gP4oLL0HFegQhCGVwmYe/3114sXXngheRgKdFgpQ4cOFQ8//LAYMGBA5XhlNlZDUhh90gfwvmUXjNOBBesDoCuvEqpk8eLF4ic/+UmyH28CLAMAi/jIjzzyiBgxYkQ7s1E1pGU3mBzsTICfXPApB1M70M8w+ZADeKRHH31UrFy5sl3waagQPACol9GjR1fO57Cc2+XAlG4+gPcUHUiUTtdBf+aZZ8Rdd91VcfuVeQgrZdGiRZVzldVSRenhA/j7IkpwMX022CraPkTtEAI9DG8TTFb6fOLEiWLGjBlJ46hYDJ2Pt2D+/PntVFKV5XBVAM9bXTm931WUo4PrqEZy+vTp4itf+Upi/uFhqAYWx86ZM0ds27Yt06a3decGWrr6qC/gwcX08WlWjyAABKi6Lr/tttvEueeem5iJSn8D1AMHDiRg6wsOwGbHObZ+lMBywAfwt/OYWy2B1QFmq3vDDGxubhYNDQ3JfoAN9dK1a1exa9cuMXfu3KTTq2IqUCMLW74aS3an7vGmT6O5x3TRUIGE8q4PNQFQ1f6BAwcmjg0cHNjWCmyACkdo1qxZ7cCGGRkabIcAHLt8GN5ShN1p/e2iStK2M/q/lyxZkqgG7Fe6HIx/6qmnxIoVK/7OIPkQshpKn7DCnoRp8QH8RW7jGLKBVOfrtnNTU1MCMFSMsjqgm5cuXSrWrFlznK2udHZRdpsegOXaL/kA/rvQupvL8nRImN69eycmIdirBiDQcIYegAiodn7jrMPliEULR3+ZIrdyK5U+RrcsFixYkOj0Pn36iP3794uZM2e2AxsWChdsHzY7sn1n3miPjeEQLOzSYFIhrmqFGxUFDZ7q0/7FL36RpMwOH1Itys4uGj7GFeCc8zeaALUNIj9bBjM4oVqgIgBmlopRnUrQ1wDbFIvNp9H0eVO1/9b7uvYQrJxzRM1NcWFwFpPzzstjOsBUjWXeRCAbSDY2+6hKw3nw0H7uzXA5g2idz+vq+jrmxVQDwGByr169kv5umIJqxCcPCNdZV6HqBKxMs644KgWyyrXBC7VWoA4gN9pgNcvmgxUH8A3pUIw+OddTc4lpzznWxVMuWDcEa91QGHA533mJj150qagL8La3zXYt7oN3rNtS29xwLsMhj8vPKpx0IscqsJlpIVUKJyCpZ92AzaMcIFmAy29XmkOpFNt2GSqlrO97pDRzvu9xYThkuR5IOkTvmq8u91EpRW1xQ92AyXJu+diAy+8Q55kA9WkoTXGUs1htY30e+BzGu7Bdk1u432i6MhygP6ni1JdhnXB1skukb99GnJlvIEyecMHQ51v7q+lG73IaGFfrpEyGu1ortsZSYnC1K3jOgNMThU1+rU9nPtc6Cc1wV2uF2fbg6+M9pQMuQV+tzCCfgnNtcZNlwrVYTKz2IYqUxyitrvbqyjPphltcQOdW3NdCselxLgEs9dgi6+5VNm/AieUYdLyYbtxSLQuF+waUaKm0yDof88Wt0KpuBDqmUnyZCrDPRW/b2Ofi/HAYzrkfA+x9sq5vO3ZohQNce+rjKX89tLPjqr+5Vo+Hx/m6rGNLEbCDAC5vvpWysZS/ymG1jdEcFruw38R4G9tlncbKOnp5pcEB11hwHuWbXRqpkJYKx0JxacRlXc5Tb29RsEOpFH0bc+rGUb6ca6WYGBiC4TbP0/DN5XJZlwNcp6iaKkXfhvUyi9JXafsgh+0u+1x0N2dfRvlQ5ktQB1mXIMwOzvCMivyY0tmq78XE9qJAuwBvIQDKioBNP3LxKWrG8IxXGcNOEyi/VO/atZmAoRtNRsP5B5RRlnWv7RPvWpuFHOcGAS1GUt5I6R2b3g7daObdA2WhNE+W7Ukfp6ijMVw/5j05ajSUtmenB6ZNjabLEBuz0QSLZ4u2D1ibZNmcnaKaAe44fNZKaRn9HEY5XuHvywk0TmrBVR3hHvJeE+S9l8myeDlFvsB3DgE0N2ZZanYVRrh/RvvQUHWj3wh+h4Tgd59Rwe8KzNXGib+n9JxoC3zXLvidK1lCsbxzSHa7BItLHYshKiyEvlb+3Z/2w1rASjTp8I7dZYIckQmj5ujrQFhHfE21Q7SFd3w7DxwXMAN9aFUOw01hE/MeSkZFABQm1WwIsfasC3hFHgRH6qrxwVGUEhrNKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8Aj4FEi4BHwKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8Aj4FEi4BHwKBHwCHiUCHgEPAIeJQIeAY8SAY+AR4mAR8D/n0npkT3r6uomys0zHU9tMv1p+6CX7tvoeL+X5XV/GhkeGe4kZ0rm3M18I+aFvLnHfSPDI8PLlZc72HUiwyPD/ayXeSadb7NquOuo5N2Hq9sjwyPDq2NFVOHNigyPrn2UCHjU4QV1Jx33bxwrhWuHUxuxriPp8sjwjyDD83r9Gk12ODGza8E36kyLddQUGR4bzSgR8Ah4lAh4BDxKBDwCHgGPcoJ6ml7zQ2yeIvc6Bk+0sRYeaGT4R5Dhijl3pxiWN9a4riOVJzI8NppRIuBRhxuthXmR4VE+egzPm4+iMT7I/PBazbCKDI+AR8CjfER0+MuO1sjLJ/h9I8M7ksTwjlGHR8CjRMAj4FEi4BHwKBHwCHgEPEoEPAIeJQIeAY8SAY+AR8CjRMAj4FEi4BHwKBHwCHgEPEoEPAIeJQIeAY8SAY+AR8CjRMAj4FEi4B1f/k+AAQDJjrwQhWD6twAAAABJRU5ErkJggg==);
background-size: 46px auto;
}
}
/* Light theme */
.fancybox-light a.fancybox-close, .fancybox-light a.fancybox-expand, .fancybox-light a.fancybox-nav span {
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC4AAADICAYAAACXpNOoAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo1NjIzNzFGMDZBNTUxMUUyQkVBRUY3ODU0RDc4OTlCQyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo1NjIzNzFFRjZBNTUxMUUyQkVBRUY3ODU0RDc4OTlCQyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE5QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+xE3ZhQAAC3lJREFUeNrsnXtMVNkZwO+8gEFEXFBBXSuLrAZHirrZbf9oGvFZqKQrfygBxCBs0ljTl6mb3W2axqai7rb43iauViVREv7gJT6ID4gSaxYrgom6rLLVqgjDMAwww2Pm9vvgXPdye+fOzH0MoOckJ4N37uN3v/u9znfOXHUsyzKTsemZSdooOAV/3cGNwg06nW7CQYp5PqMK58U7NZCnx31yd49X9EB38z5ZTSQeoJrh8SHQQ0k3kW2cCiLsMPQh6AOkD+I2kKJHyRM2ypQwHhcGPRx6hMlkmnrgwIEfQ/vp3LlzLREREXNgWwTuPDw87Ojv739itVr/ff/+/QsbN2681gcNgF3kpljZ+sPvfkgZgaOhJ5jN5verqqo+dzgcz1g/m9Pp/M+dO3c+jo2NnUHOpQ+UcYQzAHADkXAs9MV5eXlZHR0d37AyW29vb/OxY8feJ+c0aAXOQc+GnrJ3794/uFyuHlZhGxoastXV1W3wBS8XHB+lmUh6aXFx8Wdut3uIVal5PJ7B27dvbyLX0KsJbiI6bcnPzy8cHBx0sio3EETfuXPnPiDXUgUcJTAVDXHKlCkr29vbv2U1auB5mlJSUqLEpC4HHH30LOjvnT179rAfknPL+Y5rjx8//i25piJw9NfoixOMRuNqu93+wtsFQX3YnJyclwsXLmxvbm62C7+/d++eDb7ryMrK6sR9vbWBgYG2mJiYqbzIKwvcSHT7h7t27fpUSlKbN29+sWTJEtZisbCLFy+2tbS0vIJ/8OCBFVSgMykpiU1ISGA3bNjQJXWupqam9cLAKAau9xFsMIyHr1ix4gMJV+UGKTMQKZnQ0FAmJCQkKjc31w1R0tba2mqFvz16vT7aYDAw8MmAB2HAk3i8nS8uLm6dP0HJ6CPgoJWb4+PjF3qN/zqd4cyZMyaQeif8HYO5B9zM9K1bt1rhb5a3Dfe1V1RUGPR4B14apAvv+QpIvvJxLokyRUVFxUqdBPT3rZKSEh3YQid0lDoDEo7moFHAAN5dVlamS05OjpQ6Fxz7jj8S1/tIpkZS1bCwsHBfJ0pMTIw+efIkwncgPHZO0tBspaWlerCDSJ9Aev00oXEGYwQUtFqH3gfESPIPeUm/rxOhIULihWnsDOj4yek1fj0d0lkPeJseX+cBtbL7IwAp8FeDgO7u7hdSJ3n48GFXdnY2C7AxCAy+GsGtAN454rrAFuEGojIzM1nwQJLwcOwjcm3Z4G4ycnG2tbU9lHKHmzZtGuagIYjgxW3Hjx/XnzhxQsfBE32flpGR4ZZyh5Dufk2urUjiONTqv3Llyk0pd4jBB1JUDrr79OnThkWLFk1Hgz116pQeOK0Q8Ue8y7Jlyxgpd/j8+fML/kg8qCEfnkyHr5C/YMGCSKUhX5hkHZksSZZYWvtIq7QWxqLNaqa1YwYSBQUFH8GjdmkxkKipqfmRmgMJ/tAtDoduBw8e/JPaQ7fGxsZsLYZu/MHyHIQ/fPjwH8GLONWQ9K1bt3K0GiwL4VHylm3btuV1dHTIHsqBv74HWeVPtC5P8OHNIgWh5wEY4RNeQcjsTwqrBjin8ybibdBVJoSHhy89evToL+/evVva2dl5D3IbG9oBdvDNXTab7e6jR4/+WV1dvQmGZnHkWJO/SZ4YuE4IG0ARMmhFTzGBKgEPWplZq/o4S6RKp1Jk1ccny4QtnXWj4BScglNwCk7BKTgFp+AUfAINJGSOOzVvwgHO6yVxGaN8rlTBFXfcZADtYX2MBYO5JkusNIEVKZwE4KYVndAd+AlgblaDgaxeBegp0GdUV1cXuFyur4eGhpobGxs/gW1Y6w7RVOlllOG4lXA4mbqgqqpqt8fjGTPjsH///g/J93qxawRwLVFOo1JJA/RH6enpvweVGPP0Zs2aFc34MUMcLInzJZ0I6rEPp/6EFdnu7u7HycnJFma0uKnTQuKBnGwMdE1NzedYuBSBfpqTk7Ma9plJdHxcwTloNLjE8+fP/80bdFZWFq43mU08jaR+aw0+BvrChQvFYtA2m+2/mZmZabDP28Q1mogt+Nv1zNhKryLwMdCXLl3aLwZttVqfrVy5EiegUqD/gBldozgzgI6zE2jMODkbRm5EcpJWJ1o0/z6acfM+My9fvvzr1NTUXwlPCL7buW/fvq+6urq+i4yMdISFhQ3iyqCAgolezxqNRnd9ff03FRUVT2BTDwlibm+5ii+J4yxDHEAXiUlag/nOoWvXrv2ZPLFQJaq
C0n6nv7+/gw1SGxgYwPUq8bz0QRTcr5BvMBiMTPCaW41cBU/irKurO8IEYdkSxLJh0PMvhPot6p/9Nc7a2tptq1at+o3QOHGFzaFDh061tbXdR+M0m80DUsaJhmi32509PT2DQuO8efPm04aGhnZinC4lxjkmWnoLPABhy87OLoR9lkKfL+EOZ/B6DK+jK3yLpAh+ucNAApBkqO/t7e0sLCzMhH3mMd9PwPoTcPSC4KPTIuRLJlcA/xLgMyZKyPeWg+8V5uDYHA5He0FBQRpRjdCJkB3+H3xlZWWRGDwY33cTKa31Br9bDL60tLSQ5DiajIDkjDlZ4qb6oHdkZGR8BWqzh1tUQC7iaWpqatPU9yuQAl/y8eXl5Z+AcT6F9KAdgshfmdHVRGatVMVXAPJ3/BlGcnAzrzzRi5+YO6lRVxFyKs1BOLVB0EFeQYhbp+LRSlOUSpzWDt/sMvNkWLNCVYWCU3AKTsEpOAWn4BScglNwCv6GD93oKD/YElciCd5T82vl0HitEJJqXGkOS81caQ5/Jo+lOZfSlUOaqAqRNk6lRDU0NPxucHDwbn9//7/Kysq2MKPzP7jWxaBTYkxKKqgSx6NAphUVFaXz54uwjl5RUYGV3AXMaJUXn7hOznW0NE7d/Pnz4/hguIpo/fr1OysrK7fyJc/IWUmkkcQRZOry5cuXOByOpyI/93XjNEwgkldjKsVfcJy4mpmfn/+znp6eZ2LwOAHmL3xQwHl6jt5kdm5ubpoXeA9OPcI+ib7glcxzvnrfSgCd+6XtvKysrPV2u13snXEenPT1BS9nhRDnj3H1TjQxqkBW/+D0OK4aSklLS8u12Wxir4Lw4HS7FLycFUIj0KtXr56zZs2ad+HpmqDrAlQnncvlCgF1mRoTExO/Y8eO/NDQ0DDhbhcvXvz7unXrcKXGSxKsFK0Qiq2trf1seHh4IAjrbDwA/xdm9Cf0ilYIoXHFO53OziCuELLBNfEFSOHjFYBkBy5GhZcf4XSfs76+vhj0eigYafeNGzf+IdRv0bvz1zjXrl37NhhootvtNsoxTlC3UDTOefPmJW7fvn2LCV9lJtjt6tWrR1JTU4vh73alxqmmO1yamZmZD/BWMaMEB3CQuMMof9xh0ALQli1bfgF5y0sxaFxByoM2qRGAVAn5eXl56SBp0cCDa3WlJD0eSRYuwZ4Jkl4H0M99RMuoCZFkEYCIpKSkxRDiv/UC/YU/yVWwwVFNokpKSvLlJlXjOQJiW1tbnwpWDrEIDcnWlyQf6WPkvmhaQ1VBw5xz/fr13RjG+/r6XpaXl39KwnlA40052aGSugr34mnhyqE+8jlSnpC7QkhLcGFBiHtJjKyCkOrgtHb4RpeZ6QohCk7BKTgFp+AUnIJTcApOwSk4BafgFJyCU/DXFzzgxWQ6nS4dPixevt7D/4fIm7Z3ejmuBfY9FxCIr5+fi9S+d4pNZeN2X+eWOjZQDjVVpUWlfbRRFcGj/5j3T4tQhcTK1fxjQHJF4wKu5OKCm6ZeJWBpwbafS3gbvveoVkPqcsD5Lo/vSSwAFebjhi0CNdtDIycFp+DqGafXfENofL4iJz93CdRQZblDLugIomC1GuehqqJ16FaSa7zxae2EV5UWL+rRovGxY1V0svxvNDRyUnAKTsEpOAWn4BScglNwCj5x2v8EGAAYJEdp3vkt5wAAAABJRU5ErkJggg==);
}
.fancybox-light-skin-open {
box-shadow: 0 10px 25px rgba(0, 0, 0, 0.5);
}
@media only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2 / 1), only screen and (min-device-pixel-ratio: 2), only screen and (min-resolution: 2dppx) {
.fancybox-light a.fancybox-close {
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFwAAAGQCAYAAAAjsgcjAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDpEMEQwOUQ1MjZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDpEMEQwOUQ1MTZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE0QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+z3OoagAAHXpJREFUeNrsnQl4VEW2gG93J510OkASQzCQjMQl8IZN1iCjAREHCMoDGRHQECBsEuAhIomCTxAElcGERRg/UBwdBgOouMAoH09kcWNk2CKrGBCSEEIWyNadrd+pTlVSuXQnfZf0lnO+r75Op/v2vffv06fOOVV1SmOxWAQU54kGgSNwBI6CwBE4CgJH4CgIHIEjcBQEjsBRELhHA9doNEhKJHIV1Z2As5No6d/i1uB+bbQa7jUE3ghg8qijoHWiphV9AYIIMGnVosb+Z2nOL8BlwKWeWFP7YRoOsA80X66x5zx0e8AZ5EpoVfSxknvOvgCLxc6FylUmtwfOgWYg9bT50eZLn1uht2rVynfRokV/jI2N7REeHn5vmzZtIgwGQ1tfX1+jTqczks+srq4uraqqKqmoqMgtLS29XFhYeCYjI+PHefPmHc3Ozi6Dt1TQxr6Y28A7G7j1wKaakuM5bSaa6w+tNbS20CKhRUPrBq0PtAHt27cfkp6evuzy5cuHKysrSywyBb6E4oKCgq+PHz8+e/DgwR3oOf3pNWiZoinx0BzhZpNHcwLnNJpobiC0UAq6E7T7ofWHNnDAgAFjfvjhh61ms/mWRWUB+IWg7WlLliyJptegp9ek8SrgFDbRKAO0IGjtod0LrQe0B6A9HBUVNRJAp4M2l1uaWYj5uXbt2prp06eH02vyseH9eCZw+rMltthItfouaF2g9YM2CNrQjRs3vlpSUnLd4mSBLzfr7NmzT9Nr86XX6pnAOXvNTEgYtHuo+fgTtCHQAY4+duzYbouLpaio6F3Q9lB6rVqp2u4uwBnsVtDupJ1ib2KnoQ175JFHEvLz83+zuIlAn3Fq586d93HQPQe4CHY47Rj7ElsNLe7pp5+eBSYk1+JmAp3q799//31PqdBdCpz+HH2pGbmTwib2ejC0EU899dQsk8lUZHFTgQ4178cff+xO70Hj1sA5b8RIbXY01Wwr7KFDh04Fzc6zuLmApl/dt29ftKPei6uAMz/bQL2Re6jNJmZkRNu2bce6k81uSiBiPQlRahDz05sDuNLQnrl/BHgbaMH0kWi7H3gjM++///4/e1LatbS09N3AwMAkmo+pUTu01yq4Ng0HPIDabyOFr1+3bl1/T4NNxGg0JkKANEqOq+gM4Cw/YuRhd+rUKXDKlCnTPHVwAUxh2gcffBDiTsD5HImBangAzfr5bN68+YmAgIBgTwWu1WrvHDVq1EJHbLmzgfuJYPv26tWrTf/+/YcrvTAIwctccSwTsOMz9+7d29ZdgLMgx59quD8bNEhNTR3h4+PjL/eCCgsLBQiSTF27dtXPmDHjkpTOibw3KSnpUrdu3bTg+5sKCgrk20uNxgiKM1eh2VUFuNic+NO/fYKDg31jYmL+rAR2YmJixblz5/zBJPkcPny4Y0JCQpYj0Ml7Zs6cefWnn37qCMf6nz592n/cuHEVSqBDBzpt4cKFBjW1XC5wH26kho3WaJcuXdrLz8+vlVzY06ZNq8jNzdX7+/sLBoPB2v7zn/90mDhx4tXGoFPN/v3UqVMR7FjymJOTowdNlw0dbHnI/Pnz41wNXMcNh/lxCX3t8OHDY+VeCAQc5QSQr6+voNfrrQ2+PCs88Ocj7EFnZuTEiRN/YMexzyCPV65c0U+ePNkk97qCgoIm0PtzCXDe92awrcNWoaGh+o4dO/aQcxFkXPKXX37RkQALtMradDqdFRiDfvz48YhJkyY1gE7+nj17dubJkyc7kveSY9jx7LPIIxyrgShSVkcKX9wjS5YsUc2saGXab36U3ardYA7uldtZwnGBAwcOvFhTUyOQZg2BARQBCK9ZoRMTQaCDjb/CwuQ5c+ZkgmZHkfeQ95JjyP/Z51RXV1sfhwwZkgPgAuR2nvAL6asWcKmhvY66gME0dxJMI0zfL7/88r9HjBjxjJIhKwB4AczHfUxbGUACDn4F1kb+7tu37yWAXAP2/W4xbPI6uIWC2WwWysvLBYh2r/7jH/+IUDK35ubNm4vAtLwh1I78OzW05zXchwsMNJGRkeGKvnkAsm7duvvAjz/PwPKazswL0XToHDuCZt9t74sB8yGYTCYBPksxbCJwnmhXmxS+WYFDONxB8c8NwKxduza6Z8+eNqETbSaQSbOl2eQYptnwGVchPI9QY9YYnKuTK4FrBRtT0Fq3bh2qio0TQSc2mP2fdYh8x0iEvIfBJpqtJmwKPMLVwMXz/khvblTLdWLQ+/T
pc451ovxrPEjWSTJTojZses4gVwK3OaMVfuYGNUNgAiw1NbVTv379ztobCGH/Zx5J7969r/7973+PUHvyKXxeoKuA24Pu1RPIyXQWVyevbAUv5SrfpABh9fkjR450FpsRsXlhgdLRo0dvC45UupZSVwIXT4S3/g86rFI1Yc+dO/fXf//739F858ibER46eQ/xWkg4D755RFO5FxnXU+Iq4LZgW+XWrVs31IINAdBvEACRyNUKk+8c+cagMuDMT6e5lyy1oJPpcYJKE/vlAGcT4dkKBOuFXL9+PVst2BDC323PzyZRJGn2/HQu4dVBLehwvvOuBs7Dtl7I1atXs5XCJokoANVouE787B49elyAkP08+Z8t6MS0EE0nqd34+HjF0OG8F1wJXLymxgr98OHDvyq5kAULFpwH2FH2wnUWQXbv3j2TpADS0tLsRqR8ihf6gQ5Tp069rOTacnNzj7gaOL+Gxgp806ZNF+HmZeWdSXr2wIEDHfkIUgybRpC/b968OYp5J/bSAHyKlzzu27fvTrnpWeKhvPbaa0ddDbxKDD0/P78iMzPzpNz0bJcuXarEqVVR1u/Kli1b/sB7LBz0C8y88B0qe4RjLXLTs2VlZd9u27atzB00nLW6lWJ79uw5JPdCwEQEhIeHmwk4EqKTJkpERdrzxQH6fQD1IjuOfQZ5jIyMrIAvSvagdlZWVrrQcBmisqhVxlQ3trIhhDaSEyfa4xscHOyXk5PzN/AUAuVcDBnXnDJlivnKlSt+RFsJNJJidSQ3QgeRL0Hw05FoNoENX2DFRx99pA8JCZEFB66hMC4urvPevXsLqXI1OJ9s70DiZE4tBU7SsWQFGpls/xi0J6CN/fbbbz9RMqGyoKDAMn78+LLo6OjyadOm/QbwHD6WvHfWrFkXO3XqVP7kk0+WgZlTNLnz2rVrafRetXK4qTV7lqgaGc8k6ViSmCcr0YZCI/Px/gI/7alqLJIC7S5xxbH8IqwVK1ZE03vVqAVc7uxZH2pGgjizYqQXpzt48OCEhx566L89OWEF2r0BTNKLpN+kzoGghkmRm7winSRZ4VtOm4nvQBMTE3dB717oqbDBzbz+6quvrqb3WKPmZ8sFzrwVM9WAMvq3dU71hQsXysAzeM9TgZ88eXLxxo0bb6jpnSjxUvgviy0PDBbqJ+MbqCejO3Xq1LNdu3Yd5Emw8/LyPg4LC5sJf5Y0puGumJBvobaNmJNS2lhBAatpGTZs2BbwFC57Cmzw+c8lJCQk03uqEpqh9IdS4DXUjJRRjSilNt0KHYKGsnHjxr1RWlpa4O6wwbPKffPNNyf961//yhfql5uoX2ulORdVQRsN7ckxY8b8j8lkuumui6kAdsGqVasG03swCM24qErtZYPtBNGyQQr9LxMmTHiuuLj4urvBBjOS/dZbbw2j124U3HzZoLiYAVvy3WBhLIMeGxs748aNG5nuAhsU4CxEpg/Sa24lSCh24Kplg3VfCLyHFaHxo55La9oC6c/UOq0ZPAD/PXv2xPfu3XuoK212ZmbmjpEjRy7PyMjIo/2PmXaUNY4Cd7ZbeNuJRdCNFHgrEXTr9ObU1NS+06ZNm2I0GkOdCRr6klzoGN944okndsPTW7SjlwTbnYBrOJuup+F/K9rYskK2YkIXERHhv23btjFkEZaSdUEOZv7Kz5w5s2PixIl/O3bs2HX4VzHnxjIX0OJRwEWazq8DMgr1C2cDOOhWbScr39LS0kbExMQM1ev1gWqCBg+kGMzGrpSUlA/37t2bbct9lRO+u11VN6rtbGozW17IgJPGFmPVTeoPCQnRk3VCcXFxAyMjI7vLnT5XVVVVlp2dfezAgQNfJScnH8zJyblJtblUlIaQHbq7ZRk9Cp3XdrbMMECoXwHHL12pmwJtMBh0SUlJ9w4ePPiPUVFRd8GXER4YGBgCv4BWYH78ampqqomZIBOQwLUrKiwszIZA67dDhw6dAJ86o6SkhCXVyilk9rxOq4mn4nV1C0WFIZlt9+Pg+3HQfcXgOTdNXCjSIjScRcDGWSuoBps5yGbOVjcoHOlxwKWcS7BfKFLPNR66o6VQedgVHPTbCkWqFa57Uu1ZJaVQLaL0MAPeZClUtfMinlpdmdf65ij222yVlrGctZ1JpgjcSwSBexNwFBXtJgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBwBI6CwBE4AkdB4AgcBYEjcBQEjsAROAoCR+AoCByBoyBwBI7AURA4AkdB4AgcRSFwXGB1u0hhiMBbOHB7C2JtbmWDwJWD5pd/swIHRPgiBqoXKvAo4Eo7XVHZJtJYDRU/ob6kHatzKy7FUVeGQ8H5Ww5wEWwCmNRPCeSanr5OAJMiM8VCfa0qa2EwUqfdk4BrXfUztAGbACa1yO8cN25czOnTp/9qMpm+qKqq+jonJ+f9jRs3joHXwoTamuWsoLBW42m9uOTKkgqPZ4V4hPrqzKTiW3tof4Q2cN68eS8C6FJbhR337t37AbznAWh30y+HHK+VW7RRrRhE9cqcagK3ATucwn541qxZiwB2eSN7PNTMnDlzGry3q1BbujQAgcuEPX369EXl5eWlTZUvPXTo0Mfw/j7QyLa5Rk8DrnWxzSZF3tslJib2T0tLe9Hf37/JzY2Cg4PbCg1LM3mUaF0NOyEhoe/atWsXGQwGh/ZUvnnzZr5QX35JQOBNwzZS2GHPPPNMnw0bNrwcEBDg6AbWlo8//ni/IKNWbIvwUkQ2m2j1ndRmDxo7duzzxcXFt6SUnz548CAp0BsL7V7qpfhhp9k47P8irt+YMWPm37p1S1LF/J9//vmQTqcbDsd3p25kIItEWzxwG7DbMdijRo2aB3a4SCLsH/z8/EYKtdsd/EGo3U1FsQ/uFcA52L4c7M7EFDz++ONzAXahFNjHjh07AjaebDtGKu5HUVPiTz0VTYsG3hjs4cOHzy4qKiqQAvvkyZNHwXsZTWGz6NKghinxFuAMtpGH/eijjyYVFhbekAI7IyPjWGBgIMmfxFDYd6gN26OBi2CTJBPZeCN2yJAhzxYUFORJgX3mzJkTISEhT4pgB6gN22OB24H90KBBg2beuHHjukTYZ4KCguKF2n06iQvZQajfzZDVHG/Oxld79hEa1r9VDNxHhcCGL8hupOnTsAEDBnTesWPH0jvuuKOto591+vTpiw8++ODbYOvJZhiV9Cb19GX/RkZ5NA7+r1FF5UaX+L2fq4SGG24rCraUAteKtNsKu1+/fp127dq1LDQ0NMzRDzp16tTVhx9+eBvY+kp6XQZ6g/4OhPIaBaDF0PkS2SausVEmQQl0JcBZyO5L4ZAtZEJjYmKiv/jii1fbtm3bztEPAm8kb+DAgV+DZrPtf6vpT9ssNF3/2xZsjQLgTLNJdf1SbpSpRKjfN1QjdzxVyRCbltNEEoi069OnT+fdu3evDAsL6yDlM00mU1V1dXUNfDbZk4FtFSD5hnx9ffU6spm9QiH7S0AknH/ixIkDCQkJWy5fvpwF/yYb+N0S6ncirJE1zKig09RSz4GYjS5RUVEjs7Ozf7N4me
Tk5Fy45557htNIOYwNerjCS/GhZoSE2v3279+/w+KlQgc9+tJ7bS02xc4agOCjSkPPnj1jBS+V7t27x/ID10o6ZqX58LpdS8geO94KHO4tWKifmKRolEkp8Do3qqysrMhbgZeUlBSKfHGXAGewiX9qAtfuO28FnpGRcZhzCZVt3auCl0L87a4gT+Tl5V31tg7zxo0bl+HeHqNpBpd6KWyAgUSXdxFPZdCgQYn5+fm5ngqXzHspLy8vIxORbt68ef3777/f1atXr8fpKFME9VD0SoArCXz4UZ26geHBgwd327lz57Lg4GCHNybNysoqgS/r819//fUqPCWj8jdplFehht2UYCL5nczZ9vElQv1GpyzEb2BWnDmZU5y4sk59GDJkSJft27cT6Hc4+rlXrlwpgOO2nD9/PhOekm1zi+hNVnI5DIsToNdwCSs+l1LJ2fAaGxlTpwAXQ2fzTcKGDh3aPT09fWmbNm1CJEC/NnDgwLcyMzN/46CXCAr3vpSQKZSVLXRWaG8rAGowrBYXFze3qKhI0hjm77//fjkqKipRqN0xvBu0SDq0FijUb3zqsflwtQYg7I3Sx8oZpb906dKFyMjI8ULtTNl7SBaSG/FRPFLvLUNsduehjB49+rlbt25Jgn7hwoUzISEhY+H4/hR6swyzecuo/W3QyUwrgC5pptWZM2dOAvS/cGObbNReJ+CofdPT28aPH/9CMYiCqRI4L8XBeeB10OPj4xeWlpaWSIF+9OjRH3Q6Hc68auxkQiOT7ydPnvyiVOhHjhw54OPjM4x6LuHU78e5hY5Cnzp16kuOrHjg5ZtvvtlFpl/QThTX+EiEPnjGjBmLTSZTmZRcRxIIzW3gGh+Jq9a6EOjA738bW0hlQ8s/EmrX+HQQcI2P3QtiiaEqOvWBhOskqZ/79ttvH05OTn7TbDabHPmsoKCgEEeiP3cVH2ediECneRiWFKpLDaxZs+YAeCGalStXLtTr9X6NfU5eXl4ON03B84q9uHidZmtqXsi6y8EpKSmvVlRUmO2Zk+rq6sqnnnpqIjVHYTQIQhsuEzqBOOjll19eSsyLrT5z165dm2jUGUWzknp0C5VBJ2mAAZMmTZp57ty5H8nIS2VlZUVWVtaZ1NTUV8hr0KK5oS6P88PdtZpEK+qBsGoSpKMtpyMvxfRvj6wm4a71Uvy5fLSGdpIVAtZLUU3EFYHYpBusCIQ1r7wLuEcKAkfgCFw2cBSFnRQCR+AIHAWBI3AUBI7AURA4AkfgKAgcgaMgcASOgsAROAJHQeAIHAWBI3AUBI7AETgKAkfgKAgcgaMgcASOwFEQOAJHQeAIHAWBI3AEjoLAETgKAkfgKMqB41r72wWLGyDw5vl1co9NFbSxIHDlsOv2gRPqSzaxSqOsTBMr2VTTnOA9qsiYZNINi5KRQmRsoww/Cp0vuUoa23DUZlGyFlfVTQZwtlMtAc22Bm5F/9ZT4KTqG9tel9S5rSu7J4bubOBawYOEanfdHp4C3fR68+bNY69du/Z+VVXV1yaT6Ytffvnlzfj4eFLBk1Tmv4N+If70i9JoXGkXXVWZU2YVTVbJk1RYJptwPLB///6PbNWpNZvNZSkpKaSa58NCbZnV9kLDvdQ0LaoUqgLgbIfDbnPmzHmWFHG3VxwYoJuTk5OXCfWbMN0GHYE3DZyUSCU7//X57rvvPm2qwDup1gzQX4P3P9Jc0KXcg4/gecLsuJYWb29UyNbry5Yte4HUJ1+xYsU3opeZByMITio86YnA2QZ0NXl5ebmOHADQfZcsWbJAC7J8+fJ9osDJudA90KQYmA2fMGFCIngmVY7uHwHmpWLx4sWvw7FDqHnpoIZ58XYbTuAEUy/lTzt27HiXlLSWAv2VV155g0LvrgZ0bwbOIkwj1XKyPc2Q7du3b5UKfenSpW/CsY+qAd1rgYu0vDX1VnpAG5aenv5PKdBJ5X2A/lc1oHs7cOal+NFIk+zN1hPa8G3btkmFXgkejGLoXg2cg66j4TqD3otA/yeIFOjQ5yqG7vXAG4Fu1fStIFKhg7u4moPOtlT3cwS6qzcwrQtMhPotBtRu/NYFehrukyQV2W6GbBs24sMPP/xnI1G/LehVEBilwrF/lgrdqZEml5/WiCBrnZiNZF9ENU3FFsfHx+8CZ0Q/efLkMY4kByES1S1cuHAOxEaalJSUPdxLxfSxUo3gSClwfuTFh6ZN9fTRtxmgW5oI95nGE6lKTEz8v/LycsOsWbPiHIW+YMGC2SQN8MILL+wWndeiRkSqBnAG2p/6xwZuMMBXxaycRQJ0A70e3ezZs38uLS0NBICxjkJ/7rnnksgvF+B/Qf9dI0oDuAQ4f4MB1N61CQsLC503b16/yMjICPh56sGOkr3SNArNlqWsrKzSZDJVsOd2bKn1F1ddXa0j5sRsNhugBeTm5lYfOXIkNyYmpp1D9gkE7mEWXL9l/vz5nwn1m+0pHx9V0GkyX5gNBvRbvXr1yyUlJYUWLxHSka5bty5NqN2l9l56r35iM+ksL0VHTQjJL3dftGjRPLIboMXLhEB//vnnn4N7vJ/eq1Go39DJqcB9qBkhrtiAS5cuHbd4qVy8ePGYULvxXhS9Zx+5wLUKbThzx/Tt27fvLHipREREdKamhM1/kd0nOcNl83hRc5RfqxAw67krsrOzz3krcLi3C0L9DoeKtgZWCpzNcCp9//33P1C6t6VbjufBPcG9bYU/ywRuMpErhti0NLhgbmHMpk2bVkJkV+wtnSXEENXvvffeBri3WGj30XyNvxK3UMlUN9Zp+lFXicxuCrrrrrvCkpKS+oaHh7dXK/CRIuR8JPCprKz0haDHnwY/xsmTJ3fr0KGDUYpmb968+d3p06dvh6dkp/F8oXba3G1a7sy5hVoutDfQiDOAC+19BOftP8/ndMj529BfX1h6evpjY8eO7SYF9jvvvPPus88+uxOeXqOwi7nQvkas4c7KpVi4b5tNoiwTwda6CDb55VV/8skng0aPHi0J9oYNGzbPnj37E3hKpmEU2tNsVySv2MlZjqGSXpiz0rMaG0krdk8+n3322UiQB6TAXr9+/aa5c+cy2AVC7cbXJnpvyueYe+gAhE40AEFscyjL6UAb+SWI1A5y7dq1G+gABBmYjuR+KaoNQHjLEFswN64Z9/nnn++SAhs62eo1a9bIgt3SB5HjPv30050SYVelpaW9zcGOkAK7JU+TiNuxY8d2qZnA1NTU9UpgtwTg/ESgDkL9RKBtUmGvXr16nWjQWDJsrwbODemxSflkqtsjAPtDidMiVIPdEoDzkzkHbN269R2psFetWrVGLdjeDpxNVw6D1hUCmonAr0IK7Ndffz1NTdgtAbiR2u7eX3311QfOmOijJnCPWjYoyuHo2oE48mbi+oFmr3nppZf20NwIiyDNqkWQTgrtXSUkjVBVWFiY7yDstYsXL/6KhuviRJTTYKse2jvZhndJSEiYTKLExqYjL1++fJWg4mqHluylkBH0mN27d2+x5aWQ5YIrV65cIdSv0bwNNi6MleaHEy2PJq7h+vXrl2ZlZZ0lq
xrMZnPp2bNnf5o6dWoSvPYn6qvbXIXsCuCeXNzAj4JvRVuAcHtxgxKhvrhBnc12ZXEDbyjf4SfUl/BgM6IqKXQT54l4R/kOF0nLLVDjYug8fCzB1FIEgSNwBC4bOIrCjgeBI3AEjoLAETgKAkfgKAgcgSNwFASOwFEQOAJHQeAIHIGjIHAEjoLAETgKAkfgCBwFgSNwFASOwFEQOAJH4CgIHIGjIHAEjoLAETgCR0HgCBwFgSNwFASOwBE4CgJH4CgIHIGjIHAEjsBRPB24RqMZQf/sKvHQNxp7sanrhvMmSzxfBv3c3c3JQ4s651xxRoX8rlRzXnfwF5Gi5sllnBc1HDW8eSXDzT4HNRw1XJ73ktKYzW/Kq3G0Bq698zhq21HDUcOd40U44ZeFGu6NgsARONpwp9pOeN9jjngpjvrh0Ed86U62HDXcCzXcXtYvuTE/HDTTX+EvqmsT3tEbqOHYaaIgcASOgsAROAoCR+AIHMVDI01Z80OaihQd/ZxGItFkV0SgqOFeqOFMc14XaZi9scYv3el6UMOx00RB4GjDG/UWUlDDUbxPw+3NR+E0XpX54a6aYYUajsAROIqX2PAMid5IhoefFzXcnQSXDSJwBI6CwBE4CgJH4CgIHIEjcBQEjsBREDgCR0HgCByBoyBwBI6CwBE4CgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBw95f/F2AAPX2XGJHD060AAAAASUVORK5CYII=);
background-size: 46px auto;
}
.fancybox-light a.fancybox-expand, .fancybox-light a.fancybox-nav span {
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFwAAAGQCAYAAAAjsgcjAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA2ZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOk9yaWdpbmFsRG9jdW1lbnRJRD0ieG1wLmRpZDpGNzRGRjc2NzEwNERFMjExQTc0M0U0NzZGQkE0MTM5RSIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDpEMEQwOUQ1MjZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDpEMEQwOUQ1MTZBNEUxMUUyQjJGNkY3NDBEMEE5NDY5NyIgeG1wOkNyZWF0b3JUb29sPSJBZG9iZSBQaG90b3Nob3AgQ1M2IChXaW5kb3dzKSI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjE0QzZBQjVDNEU2QUUyMTE5NTdDREVCQjFFNDc0RjQzIiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOkY3NEZGNzY3MTA0REUyMTFBNzQzRTQ3NkZCQTQxMzlFIi8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+z3OoagAAHXpJREFUeNrsnQl4VEW2gG93J510OkASQzCQjMQl8IZN1iCjAREHCMoDGRHQECBsEuAhIomCTxAElcGERRg/UBwdBgOouMAoH09kcWNk2CKrGBCSEEIWyNadrd+pTlVSuXQnfZf0lnO+r75Op/v2vffv06fOOVV1SmOxWAQU54kGgSNwBI6CwBE4CgJH4CgIHIEjcBQEjsBRELhHA9doNEhKJHIV1Z2As5No6d/i1uB+bbQa7jUE3ghg8qijoHWiphV9AYIIMGnVosb+Z2nOL8BlwKWeWFP7YRoOsA80X66x5zx0e8AZ5EpoVfSxknvOvgCLxc6FylUmtwfOgWYg9bT50eZLn1uht2rVynfRokV/jI2N7REeHn5vmzZtIgwGQ1tfX1+jTqczks+srq4uraqqKqmoqMgtLS29XFhYeCYjI+PHefPmHc3Ozi6Dt1TQxr6Y28A7G7j1wKaakuM5bSaa6w+tNbS20CKhRUPrBq0PtAHt27cfkp6evuzy5cuHKysrSywyBb6E4oKCgq+PHz8+e/DgwR3oOf3pNWiZoinx0BzhZpNHcwLnNJpobiC0UAq6E7T7ofWHNnDAgAFjfvjhh61ms/mWRWUB+IWg7WlLliyJptegp9ek8SrgFDbRKAO0IGjtod0LrQe0B6A9HBUVNRJAp4M2l1uaWYj5uXbt2prp06eH02vyseH9eCZw+rMltthItfouaF2g9YM2CNrQjRs3vlpSUnLd4mSBLzfr7NmzT9Nr86XX6pnAOXvNTEgYtHuo+fgTtCHQAY4+duzYbouLpaio6F3Q9lB6rVqp2u4uwBnsVtDupJ1ib2KnoQ175JFHEvLz83+zuIlAn3Fq586d93HQPQe4CHY47Rj7ElsNLe7pp5+eBSYk1+JmAp3q799//31PqdBdCpz+HH2pGbmTwib2ejC0EU899dQsk8lUZHFTgQ4178cff+xO70Hj1sA5b8RIbXY01Wwr7KFDh04Fzc6zuLmApl/dt29ftKPei6uAMz/bQL2Re6jNJmZkRNu2bce6k81uSiBiPQlRahDz05sDuNLQnrl/BHgbaMH0kWi7H3gjM++///4/e1LatbS09N3AwMAkmo+pUTu01yq4Ng0HPIDabyOFr1+3bl1/T4NNxGg0JkKANEqOq+gM4Cw/YuRhd+rUKXDKlCnTPHVwAUxh2gcffBDiTsD5HImBangAzfr5bN68+YmAgIBgTwWu1WrvHDVq1EJHbLmzgfuJYPv26tWrTf/+/YcrvTAIwctccSwTsOMz9+7d29ZdgLMgx59quD8bNEhNTR3h4+PjL/eCCgsLBQiSTF27dtXPmDHjkpTOibw3KSnpUrdu3bTg+5sKCgrk20uNxgiKM1eh2VUFuNic+NO/fYKDg31jYmL+rAR2YmJixblz5/zBJPkcPny4Y0JCQpYj0Ml7Zs6cefWnn37qCMf6nz592n/cuHEVSqBDBzpt4cKFBjW1XC5wH26kho3WaJcuXdrLz8+vlVzY06ZNq8jNzdX7+/sLBoPB2v7zn/90mDhx4tXGoFPN/v3UqVMR7FjymJOTowdNlw0dbHnI/Pnz41wNXMcNh/lxCX3t8OHDY+VeCAQc5QSQr6+voNfrrQ2+PCs88Ocj7EFnZuTEiRN/YMexzyCPV65c0U+ePNkk97qCgoIm0PtzCXDe92awrcNWoaGh+o4dO/aQcxFkXPKXX37RkQALtMradDqdFRiDfvz48YhJkyY1gE7+nj17dubJkyc7kveSY9jx7LPIIxyrgShSVkcKX9wjS5YsUc2saGXab36U3ardYA7uldtZwnGBAwcOvFhTUyOQZg2BARQBCK9ZoRMTQaCDjb/CwuQ5c+ZkgmZHkfeQ95JjyP/Z51RXV1sfhwwZkgPgAuR2nvAL6asWcKmhvY66gME0dxJMI0zfL7/88r9HjBjxjJIhKwB4AczHfUxbGUACDn4F1kb+7tu37yWAXAP2/W4xbPI6uIWC2WwWysvLBYh2r/7jH/+IUDK35ubNm4vAtLwh1I78OzW05zXchwsMNJGRkeGKvnkAsm7duvvAjz/PwPKazswL0XToHDuCZt9t74sB8yGYTCYBPksxbCJwnmhXmxS+WYFDONxB8c8NwKxduza6Z8+eNqETbSaQSbOl2eQYptnwGVchPI9QY9YYnKuTK4FrBRtT0Fq3bh2qio0TQSc2mP2fdYh8x0iEvIfBJpqtJmwKPMLVwMXz/khvblTLdWLQ+/T
pc451ovxrPEjWSTJTojZses4gVwK3OaMVfuYGNUNgAiw1NbVTv379ztobCGH/Zx5J7969r/7973+PUHvyKXxeoKuA24Pu1RPIyXQWVyevbAUv5SrfpABh9fkjR450FpsRsXlhgdLRo0dvC45UupZSVwIXT4S3/g86rFI1Yc+dO/fXf//739F858ibER46eQ/xWkg4D755RFO5FxnXU+Iq4LZgW+XWrVs31IINAdBvEACRyNUKk+8c+cagMuDMT6e5lyy1oJPpcYJKE/vlAGcT4dkKBOuFXL9+PVst2BDC323PzyZRJGn2/HQu4dVBLehwvvOuBs7Dtl7I1atXs5XCJokoANVouE787B49elyAkP08+Z8t6MS0EE0nqd34+HjF0OG8F1wJXLymxgr98OHDvyq5kAULFpwH2FH2wnUWQXbv3j2TpADS0tLsRqR8ihf6gQ5Tp069rOTacnNzj7gaOL+Gxgp806ZNF+HmZeWdSXr2wIEDHfkIUgybRpC/b968OYp5J/bSAHyKlzzu27fvTrnpWeKhvPbaa0ddDbxKDD0/P78iMzPzpNz0bJcuXarEqVVR1u/Kli1b/sB7LBz0C8y88B0qe4RjLXLTs2VlZd9u27atzB00nLW6lWJ79uw5JPdCwEQEhIeHmwk4EqKTJkpERdrzxQH6fQD1IjuOfQZ5jIyMrIAvSvagdlZWVrrQcBmisqhVxlQ3trIhhDaSEyfa4xscHOyXk5PzN/AUAuVcDBnXnDJlivnKlSt+RFsJNJJidSQ3QgeRL0Hw05FoNoENX2DFRx99pA8JCZEFB66hMC4urvPevXsLqXI1OJ9s70DiZE4tBU7SsWQFGpls/xi0J6CN/fbbbz9RMqGyoKDAMn78+LLo6OjyadOm/QbwHD6WvHfWrFkXO3XqVP7kk0+WgZlTNLnz2rVrafRetXK4qTV7lqgaGc8k6ViSmCcr0YZCI/Px/gI/7alqLJIC7S5xxbH8IqwVK1ZE03vVqAVc7uxZH2pGgjizYqQXpzt48OCEhx566L89OWEF2r0BTNKLpN+kzoGghkmRm7winSRZ4VtOm4nvQBMTE3dB717oqbDBzbz+6quvrqb3WKPmZ8sFzrwVM9WAMvq3dU71hQsXysAzeM9TgZ88eXLxxo0bb6jpnSjxUvgviy0PDBbqJ+MbqCejO3Xq1LNdu3Yd5Emw8/LyPg4LC5sJf5Y0puGumJBvobaNmJNS2lhBAatpGTZs2BbwFC57Cmzw+c8lJCQk03uqEpqh9IdS4DXUjJRRjSilNt0KHYKGsnHjxr1RWlpa4O6wwbPKffPNNyf961//yhfql5uoX2ulORdVQRsN7ckxY8b8j8lkuumui6kAdsGqVasG03swCM24qErtZYPtBNGyQQr9LxMmTHiuuLj4urvBBjOS/dZbbw2j124U3HzZoLiYAVvy3WBhLIMeGxs748aNG5nuAhsU4CxEpg/Sa24lSCh24Kplg3VfCLyHFaHxo55La9oC6c/UOq0ZPAD/PXv2xPfu3XuoK212ZmbmjpEjRy7PyMjIo/2PmXaUNY4Cd7ZbeNuJRdCNFHgrEXTr9ObU1NS+06ZNm2I0GkOdCRr6klzoGN944okndsPTW7SjlwTbnYBrOJuup+F/K9rYskK2YkIXERHhv23btjFkEZaSdUEOZv7Kz5w5s2PixIl/O3bs2HX4VzHnxjIX0OJRwEWazq8DMgr1C2cDOOhWbScr39LS0kbExMQM1ev1gWqCBg+kGMzGrpSUlA/37t2bbct9lRO+u11VN6rtbGozW17IgJPGFmPVTeoPCQnRk3VCcXFxAyMjI7vLnT5XVVVVlp2dfezAgQNfJScnH8zJyblJtblUlIaQHbq7ZRk9Cp3XdrbMMECoXwHHL12pmwJtMBh0SUlJ9w4ePPiPUVFRd8GXER4YGBgCv4BWYH78ampqqomZIBOQwLUrKiwszIZA67dDhw6dAJ86o6SkhCXVyilk9rxOq4mn4nV1C0WFIZlt9+Pg+3HQfcXgOTdNXCjSIjScRcDGWSuoBps5yGbOVjcoHOlxwKWcS7BfKFLPNR66o6VQedgVHPTbCkWqFa57Uu1ZJaVQLaL0MAPeZClUtfMinlpdmdf65ij222yVlrGctZ1JpgjcSwSBexNwFBXtJgJH4AgcBYEjcBQEjsBREDgCR+AoCByBoyBwBI6CwBE4AkdB4AgcBYEjcBQEjsAROAoCR+AoCByBoyBwBI7AURA4AkdB4AgcRSFwXGB1u0hhiMBbOHB7C2JtbmWDwJWD5pd/swIHRPgiBqoXKvAo4Eo7XVHZJtJYDRU/ob6kHatzKy7FUVeGQ8H5Ww5wEWwCmNRPCeSanr5OAJMiM8VCfa0qa2EwUqfdk4BrXfUztAGbACa1yO8cN25czOnTp/9qMpm+qKqq+jonJ+f9jRs3joHXwoTamuWsoLBW42m9uOTKkgqPZ4V4hPrqzKTiW3tof4Q2cN68eS8C6FJbhR337t37AbznAWh30y+HHK+VW7RRrRhE9cqcagK3ATucwn541qxZiwB2eSN7PNTMnDlzGry3q1BbujQAgcuEPX369EXl5eWlTZUvPXTo0Mfw/j7QyLa5Rk8DrnWxzSZF3tslJib2T0tLe9Hf37/JzY2Cg4PbCg1LM3mUaF0NOyEhoe/atWsXGQwGh/ZUvnnzZr5QX35JQOBNwzZS2GHPPPNMnw0bNrwcEBDg6AbWlo8//ni/IKNWbIvwUkQ2m2j1ndRmDxo7duzzxcXFt6SUnz548CAp0BsL7V7qpfhhp9k47P8irt+YMWPm37p1S1LF/J9//vmQTqcbDsd3p25kIItEWzxwG7DbMdijRo2aB3a4SCLsH/z8/EYKtdsd/EGo3U1FsQ/uFcA52L4c7M7EFDz++ONzAXahFNjHjh07AjaebDtGKu5HUVPiTz0VTYsG3hjs4cOHzy4qKiqQAvvkyZNHwXsZTWGz6NKghinxFuAMtpGH/eijjyYVFhbekAI7IyPjWGBgIMmfxFDYd6gN26OBi2CTJBPZeCN2yJAhzxYUFORJgX3mzJkTISEhT4pgB6gN22OB24H90KBBg2beuHHjukTYZ4KCguKF2n06iQvZQajfzZDVHG/Oxld79hEa1r9VDNxHhcCGL8hupOnTsAEDBnTesWPH0jvuuKOto591+vTpiw8++ODbYOvJZhiV9Cb19GX/RkZ5NA7+r1FF5UaX+L2fq4SGG24rCraUAteKtNsKu1+/fp127dq1LDQ0NMzRDzp16tTVhx9+eBvY+kp6XQZ6g/4OhPIaBaDF0PkS2SausVEmQQl0JcBZyO5L4ZAtZEJjYmKiv/jii1fbtm3bztEPAm8kb+DAgV+DZrPtf6vpT9ssNF3/2xZsjQLgTLNJdf1SbpSpRKjfN1QjdzxVyRCbltNEEoi069OnT+fdu3evDAsL6yDlM00mU1V1dXUNfDbZk4FtFSD5hnx9ffU6spm9QiH7S0AknH/ixIkDCQkJWy5fvpwF/yYb+N0S6ncirJE1zKig09RSz4GYjS5RUVEjs7Ozf7N4me
</div>
</section>
<section class="lower_footer">
<div class="nav_container">
<nav>
<ul>
<li>
<a href="http://science.nasa.gov/" target="_blank">
NASA Science Mission Directorate
</a>
</li>
<li>
<a href="https://www.jpl.nasa.gov/copyrights.php" target="_blank">
Privacy
</a>
</li>
<li>
<a href="http://www.jpl.nasa.gov/imagepolicy/" target="_blank">
Image Policy
</a>
</li>
<li>
<a href="https://mars.nasa.gov/feedback/" target="_self">
Feedback
</a>
</li>
</ul>
</nav>
</div>
<div class="credits">
<div class="footer_brands_top">
<p>
Managed by the Mars Exploration Program and the Jet Propulsion Laboratory for NASA’s Science Mission Directorate
</p>
</div>
<!-- .footer_brands -->
<!-- %a.jpl{href: "", target: "_blank"}Institution -->
<!-- -->
<!-- %a.caltech{href: "", target: "_blank"}Institution -->
<!-- .staff -->
<!-- %p -->
<!-- - get_staff_for_category(get_field_from_admin_config(:web_staff_category_id)) -->
<!-- - @staff.each_with_index do |staff, idx| -->
<!-- - unless staff.is_in_footer == 0 -->
<!-- = staff.title + ": " -->
<!-- - if staff.contact_link =~ /@/ -->
<!-- = mail_to staff.contact_link, staff.name, :subject => "[#{@site_title}]" -->
<!-- - elsif staff.contact_link.present? -->
<!-- = link_to staff.name, staff.contact_link -->
<!-- - else -->
<!-- = staff.name -->
<!-- - unless (idx + 1 == @staff.size) -->
<!-- %br -->
</div>
</section>
</div>
</footer>
</div>
</div>
<div id="_atssh" style="visibility: hidden; height: 1px; width: 1px; position: absolute; top: -9999px; z-index: 100000;">
<iframe id="_atssh19" src="https://s7.addthis.com/static/sh.f48a1a04fe8dbf021b4cda1d.html#rand=0.15475128410496586&iit=1598917708381&tmr=load%3D1598917708022%26core%3D1598917708080%26main%3D1598917708370%26ifr%3D1598917708388&cb=0&cdn=0&md=0&kw=Mars%2Cmissions%2CNASA%2Crover%2CCuriosity%2COpportunity%2CInSight%2CMars%20Reconnaissance%20Orbiter%2Cfacts&ab=-&dh=mars.nasa.gov&dr=&du=https%3A%2F%2Fmars.nasa.gov%2Fnews%2F&href=https%3A%2F%2Fmars.nasa.gov%2Fnews%2F&dt=News%20%20%E2%80%93%20NASA%E2%80%99s%20Mars%20Exploration%20Program&dbg=0&cap=tc%3D0%26ab%3D0&inst=1&jsl=1&prod=undefined&lng=en&ogt=image%2Cupdated_time%2Ctype%3Darticle%2Curl%2Ctitle%2Cdescription%2Csite_name&pc=men&pub=ra-5a690e4c1320e328&ssl=1&sid=5f4d8c4c188abfcf&srf=0.01&ver=300&xck=1&xtr=0&og=site_name%3DNASA%25E2%2580%2599s%2520Mars%2520Exploration%2520Program%26description%3DNASA%25E2%2580%2599s%2520real-time%2520portal%2520for%2520Mars%2520exploration%252C%2520featuring%2520the%2520latest%2520news%252C%2520images%252C%2520and%2520discoveries%2520from%2520the%2520Red%2520Planet.%26title%3DNews%2520%2520%25E2%2580%2593%2520NASA%25E2%2580%2599s%2520Mars%2520Exploration%2520Program%26url%3Dhttps%253A%252F%252Fmars.nasa.gov%252Fnews%26type%3Darticle%26updated_time%3D2017-09-22%252019%253A53%253A22%2520UTC%26image%3Dhttps%253A%252F%252Fmars.nasa.gov%252Fsystem%252Fsite_config_values%252Fmeta_share_images%252F1_mars-nasa-gov.jpg&csi=undefined&rev=v8.28.7-wp&ct=1&xld=1&xd=1" style="height: 1px; width: 1px; position: absolute; top: 0px; z-index: 100000; border: 0px; left: 0px;" title="AddThis utility frame">
</iframe>
</div>
<style id="service-icons-0">
</style>
<script id="_fed_an_ua_tag" src="https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=JPL-Mars-MEPJPL&pua=UA-9453474-9,UA-118212757-11&dclink=true&sp=searchbox&exts=tif,tiff,wav" type="text/javascript">
</script>
</body>
</html>
#get the newest title from website
element = soup.select_one('ul.item_list li.slide')
element_____no_output_____
news_titles = element.find('div', class_="content_title").get_text()
_____no_output_____news_titles_____no_output_____#get paragraph from website
news_p = element.find('div', class_="article_teaser_body").get_text()
print(news_p)_____no_output_____#get the current feature mars image
website_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
core_url = 'https://www.jpl.nasa.gov'
#image_url = "/spaceimages/images/largesize/PIA17838_hires.jpg"
#final_url = core_url + image_url
#final_url_____no_output_____#Visit the url for JPL Featured Space Image here
browser.visit(website_url)
_____no_output_____#Use splinter to navigate the site and find the image url for the current Featured Mars Image and
#assign the url string to a variable called featured_image_url
element_full_image = browser.find_by_id("full_image")
element_full_image.click()
_____no_output_____browser.is_element_present_by_text('more info', wait_time = 1)
element_more_info = browser.links.find_by_partial_text('more info')
element_more_info.click()_____no_output_____#Make sure to find the image url to the full size .jpg image
image_html = browser.html
image_soup = BeautifulSoup(image_html, 'html.parser')_____no_output_____#Make sure to save a complete url string for this image
image_url_final = image_soup.select_one('figure.lede a img').get('src')
image_url_final_____no_output_____final_url = core_url + image_url_final
final_url_____no_output_____mars_info = 'https://space-facts.com/mars/'
table = pd.read_html(mars_info)
table[0]_____no_output_____mars_info_df = table[0]
mars_info_df.columns = ["Info", "Value"]
mars_info_df.set_index(["Info"], inplace=True)
mars_info_df_____no_output_____info_html = mars_info_df.to_html()
info_html = info_html.replace("\n","")
info_html_____no_output_____cerberus_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced'
response = requests.get(cerberus_url)
soup = BeautifulSoup(response.text, 'html.parser')_____no_output_____cerberus_image = soup.find_all('div', class_="wide-image-wrapper")
print(cerberus_image)[<div class="wide-image-wrapper" id="wide-image">
<div class="downloads">
<img class="thumb" src="/cache/images/39d3266553462198bd2fbc4d18fbed17_cerberus_enhanced.tif_thumb.png"/>
<h3>Download</h3>
<ul>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg" target="_blank">Sample</a> (jpg) 1024px wide</li>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif" target="_blank">Original</a> (tif) 21 MB</li>
</ul>
</div>
<img class="wide-image" src="/cache/images/f5e372a36edfa389625da6d0cc25d905_cerberus_enhanced.tif_full.jpg"/>
<a class="open-toggle" href="#open" id="wide-image-toggle">Open</a>
</div>]
for image in cerberus_image:
picture = image.find('li')
full_image = picture.find('a')['href']
print(full_image)
cerberus_title = soup.find('h2', class_='title').text
print(cerberus_title)
cerberus_hem = {"Title": cerberus_title, "url": full_image}
print(cerberus_hem)https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg
Cerberus Hemisphere Enhanced
{'Title': 'Cerberus Hemisphere Enhanced', 'url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg'}
schiaparelli_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced'_____no_output_____response = requests.get(schiaparelli_url)
soup = BeautifulSoup(response.text, 'html.parser')_____no_output_____schiaparelli_image = soup.find_all('div', class_="wide-image-wrapper")
print(schiaparelli_image)[<div class="wide-image-wrapper" id="wide-image">
<div class="downloads">
<img class="thumb" src="/cache/images/39d3266553462198bd2fbc4d18fbed17_cerberus_enhanced.tif_thumb.png"/>
<h3>Download</h3>
<ul>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg" target="_blank">Sample</a> (jpg) 1024px wide</li>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif" target="_blank">Original</a> (tif) 21 MB</li>
</ul>
</div>
<img class="wide-image" src="/cache/images/f5e372a36edfa389625da6d0cc25d905_cerberus_enhanced.tif_full.jpg"/>
<a class="open-toggle" href="#open" id="wide-image-toggle">Open</a>
</div>]
for image in schiaparelli_image:
picture = image.find('li')
full_image2 = picture.find('a')['href']
print(full_image2)
schiaparelli_title = soup.find('h2', class_='title').text
print(schiaparelli_title)
schiaparelli_hem = {"Title": schiaparelli_title, "url": full_image2}
print(schiaparelli_hem)https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg
Cerberus Hemisphere Enhanced
{'Title': 'Cerberus Hemisphere Enhanced', 'url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/cerberus_enhanced.tif/full.jpg'}
syrtis_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced'_____no_output_____response = requests.get(syrtis_url)
soup = BeautifulSoup(response.text, 'html.parser')_____no_output_____syrtis_image = soup.find_all('div', class_="wide-image-wrapper")
print(syrtis_image)[<div class="wide-image-wrapper" id="wide-image">
<div class="downloads">
<img class="thumb" src="/cache/images/55a0a1e2796313fdeafb17c35925e8ac_syrtis_major_enhanced.tif_thumb.png"/>
<h3>Download</h3>
<ul>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif/full.jpg" target="_blank">Sample</a> (jpg) 1024px wide</li>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif" target="_blank">Original</a> (tif) 25 MB</li>
</ul>
</div>
<img class="wide-image" src="/cache/images/555e6403a6ddd7ba16ddb0e471cadcf7_syrtis_major_enhanced.tif_full.jpg"/>
<a class="open-toggle" href="#open" id="wide-image-toggle">Open</a>
</div>]
for image in syrtis_image:
picture = image.find('li')
full_image3 = picture.find('a')['href']
print(full_image3)
syrtis_title = soup.find('h2', class_='title').text
print(syrtis_title)
syrtis_hem = {"Title": syrtis_title, "url": full_image3}
print(syrtis_hem)https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif/full.jpg
Syrtis Major Hemisphere Enhanced
{'Title': 'Syrtis Major Hemisphere Enhanced', 'url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/syrtis_major_enhanced.tif/full.jpg'}
valles_url = 'https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced'_____no_output_____response = requests.get(valles_url)
soup = BeautifulSoup(response.text, 'html.parser')_____no_output_____valles_image = soup.find_all('div', class_="wide-image-wrapper")
print(valles_image)[<div class="wide-image-wrapper" id="wide-image">
<div class="downloads">
<img class="thumb" src="/cache/images/4e59980c1c57f89c680c0e1ccabbeff1_valles_marineris_enhanced.tif_thumb.png"/>
<h3>Download</h3>
<ul>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif/full.jpg" target="_blank">Sample</a> (jpg) 1024px wide</li>
<li><a href="https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif" target="_blank">Original</a> (tif) 27 MB</li>
</ul>
</div>
<img class="wide-image" src="/cache/images/b3c7c6c9138f57b4756be9b9c43e3a48_valles_marineris_enhanced.tif_full.jpg"/>
<a class="open-toggle" href="#open" id="wide-image-toggle">Open</a>
</div>]
for image in valles_image:
picture = image.find('li')
full_image4 = picture.find('a')['href']
print(full_image4)
valles_title = soup.find('h2', class_='title').text
print(valles_title)
valles_hem = {"Title": valles_title, "url": full_image4}
print(valles_hem)https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif/full.jpg
Valles Marineris Hemisphere Enhanced
{'Title': 'Valles Marineris Hemisphere Enhanced', 'url': 'https://astropedia.astrogeology.usgs.gov/download/Mars/Viking/valles_marineris_enhanced.tif/full.jpg'}
mars_hemispheres = [{"Title": cerberus_title, "url": full_image},
{"Title": schiaparelli_title, "url": full_image2},
{"Title": syrtis_title, "url": full_image3},
{"Title": valles_title, "url": full_image4}]
_____no_output_____mars_hemispheres_____no_output_____
</code>
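The four hemisphere cells above repeat the same request/parse/extract steps. A minimal consolidation sketch, assuming the four `*_url` variables defined above; the `scrape_hemisphere` helper and `hemisphere_urls` list are illustrative names, not part of the original notebook:

```python
import requests
from bs4 import BeautifulSoup

def scrape_hemisphere(url):
    # Fetch the page, then pull the hemisphere title and the full-size image link
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    title = soup.find('h2', class_='title').text
    full_image = soup.find('div', class_="wide-image-wrapper").find('li').find('a')['href']
    return {"Title": title, "url": full_image}

hemisphere_urls = [cerberus_url, schiaparelli_url, syrtis_url, valles_url]
mars_hemispheres = [scrape_hemisphere(u) for u in hemisphere_urls]
```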
| {
"repository": "ArunKara/web_scraping_hw",
"path": "mission_to_mars.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 809158,
"hexsha": "cb6c3acaf73751f919fe578c632b8a1e9a152843",
"max_line_length": 481167,
"avg_line_length": 249.8172275394,
"alphanum_fraction": 0.7665041932
} |
# Notebook from Laurans/hmm-tagger
Path: HMM Tagger.ipynb
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!_____no_output_____<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>_____no_output_____<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>_____no_output_____### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger_____no_output_____<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>_____no_output_____
<code>
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1_____no_output_____# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution_____no_output_____
</code>
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```_____no_output_____
<code>
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
</code>
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 5 # there are a total of five (word, tag) observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>_____no_output_____#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`._____no_output_____
<code>
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
</code>
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`._____no_output_____
<code>
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
</code>
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset._____no_output_____
<code>
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
</code>
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus._____no_output_____
<code>
# use Dataset.stream() to iterate over (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
</code>
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of these counts. _____no_output_____## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus._____no_output_____### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences._____no_output_____
<code>
from collections import defaultdict
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
# Init dictionary
tags_words_count = defaultdict(lambda : defaultdict(int))
for i in range(len(sequences_B)):
for itemA, itemB in zip(sequences_A[i], sequences_B[i]):
tags_words_count[itemA][itemB] += 1
return tags_words_count
# Calculate C(t_i, w_i)
emission_counts = pair_counts(data.Y, data.X)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably._____no_output_____
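For intuition, here is a toy sketch of the most-frequent-class rule using made-up counts (not drawn from the Brown corpus):

```python
# Made-up counts of how often each tag was assigned to each word
toy_counts = {"run": {"VERB": 80, "NOUN": 20}, "spot": {"NOUN": 55, "VERB": 5}}

# The MFC rule keeps only the most frequently assigned tag for each word
toy_mfc = {word: max(tags, key=tags.get) for word, tags in toy_counts.items()}
assert toy_mfc == {"run": "VERB", "spot": "NOUN"}
```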
<code>
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.training_set.X, data.training_set.Y)
mfc_table = {word: max(subdict, key=subdict.get) for word, subdict in word_counts.items()} # TODO: YOUR CODE HERE
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')_____no_output_____
</code>
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger._____no_output_____
<code>
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions_____no_output_____
</code>
### Example Decoding Sequences with MFC Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
</code>
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. _____no_output_____
<code>
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [("VERB", "NOUN", "VERB"), ("VERB", "NOUN", "VERB", "ADV"), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions_____no_output_____
</code>
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus._____no_output_____
<code>
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.01%
</code>
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$\hat{t}_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
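For intuition, with the toy sentence ("See", "Spot", "run") and the candidate tag sequence (VERB, NOUN, VERB), the quantity being maximized is

$$P(\text{See}|\text{VERB})\,P(\text{VERB}|t_0) \cdot P(\text{Spot}|\text{NOUN})\,P(\text{NOUN}|\text{VERB}) \cdot P(\text{run}|\text{VERB})\,P(\text{VERB}|\text{NOUN})$$

where $t_0$ denotes the start state; the Viterbi algorithm finds the best of the $12^3$ candidate tag sequences for this sentence without enumerating them all.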
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information._____no_output_____### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$_____no_output_____
<code>
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
counter = defaultdict(int)
for i in range(len(sequences)):
for element in sequences[i]:
counter[element] += 1
return counter
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
_____no_output_____
<code>
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
for element, next_element in zip(seq[:-1], seq[1:]):
counter[(element, next_element)] += 1
return counter
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the bigram probabilities of a sequence starting with each tag._____no_output_____
<code>
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
counter[seq[0]] += 1
return counter
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the bigram probabilities of a sequence ending with each tag._____no_output_____
<code>
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_ending_counts[DET] == 18
"""
counter = defaultdict(int)
for i in range(len(sequences)):
seq = sequences[i]
counter[seq[-1]] += 1
return counter
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$_____no_output_____
<code>
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
tag_probabilities = defaultdict(dict)
for tag, subdict in emission_counts.items():
for word, value in subdict.items():
tag_probabilities[tag][word] = value / tag_unigrams[tag]
states = {}
for tag, prob in tag_probabilities.items():
states[tag] = State(DiscreteDistribution(prob), name=tag)
basic_model.add_states(list(states.values()))
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions
total = sum(tag_starts.values())
for tag, value in tag_starts.items():
basic_model.add_transition(basic_model.start, states[tag], value / total)
total = sum(tag_ends.values())
for tag, value in tag_ends.items():
basic_model.add_transition(states[tag], basic_model.end, value / total)
for keys, value in tag_bigrams.items():
tag1, tag2 = keys
basic_model.add_transition(states[tag1], states[tag2], value / tag_unigrams[tag1])
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')_____no_output_____hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')training accuracy basic hmm model: 97.54%
testing accuracy basic hmm model: 96.18%
</code>
### Example Decoding Sequences with the HMM Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
</code>
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>_____no_output_____
<code>
!!jupyter nbconvert *.ipynb_____no_output_____
</code>
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero pseudocount to all counts so that unobserved events never receive zero probability; a short sketch follows this list.
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
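As an illustration of the Laplace smoothing idea above, here is a minimal sketch of smoothed emission probabilities, assuming the `emission_counts` and `tag_unigrams` dictionaries computed earlier; the pseudocount `k` is a tunable choice, not a value prescribed by the project:

```python
k = 0.01  # pseudocount added to every (tag, word) count; a tunable hyperparameter
vocab = data.training_set.vocab

smoothed_emissions = {
    tag: {word: (emission_counts[tag].get(word, 0) + k) / (tag_unigrams[tag] + k * len(vocab))
          for word in vocab}
    for tag in data.training_set.tagset
}
```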
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the output following the format specified in Step 1, then you can reload the data using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets._____no_output_____
<code>
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]_____no_output_____
</code>
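To reuse the `Dataset` reader from Step 1 with the larger tagset, one option is to write the tagged sentences out in the same plaintext format (a unique key line, one tab-separated word/tag pair per line, and a blank line between sentences). A minimal sketch, assuming `training_corpus` from the cell above; the output filename and key scheme are illustrative choices:

```python
with open("brown-full-tagset.txt", "w") as f:
    for i, sent in enumerate(training_corpus.tagged_sents()):
        f.write("b-full-{}\n".format(i))           # unique sentence identifier
        for word, tag in sent:
            f.write("{}\t{}\n".format(word, tag))  # tab-separated word/tag pair
        f.write("\n")                              # blank line separates sentences
```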
| {
"repository": "Laurans/hmm-tagger",
"path": "HMM Tagger.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 50243,
"hexsha": "cb6dbd356c9a37fdd28d8be0b716dd88ce3309ac",
"max_line_length": 660,
"avg_line_length": 42.4349662162,
"alphanum_fraction": 0.5884202775
} |
# Notebook from moekay/course-content-dl
Path: tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial2.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 2: Learning Hyperparameters
**Week 1, Day 2: Linear Deep Learning**
**By Neuromatch Academy**
__Content creators:__ Saeed Salehi, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Antoine De Comite, Kelson Shilling-Scrivo
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, Spiros Chavlis
_____no_output_____**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>_____no_output_____---
# Tutorial Objectives
* Training landscape
* The effect of depth
* Choosing a learning rate
* Initialization matters
_____no_output_____
<code>
# @title Tutorial slides
# @markdown These are the slides for the videos in the tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/sne2m/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)_____no_output_____
</code>
---
# Setup
This is a GPU-Free tutorial!_____no_output_____
<code>
# @title Install dependencies
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm_____no_output_____# Imports
import time
import numpy as np
import matplotlib
import matplotlib.pyplot as plt_____no_output_____# @title Figure settings
from ipywidgets import interact, IntSlider, FloatSlider, fixed
from ipywidgets import HBox, interactive_output, ToggleButton, Layout
from mpl_toolkits.axes_grid1 import make_axes_locatable
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")_____no_output_____# @title Plotting functions
def plot_x_y_(x_t_, y_t_, x_ev_, y_ev_, loss_log_, weight_log_):
"""
"""
plt.figure(figsize=(12, 4))
plt.subplot(1, 3, 1)
plt.scatter(x_t_, y_t_, c='r', label='training data')
plt.plot(x_ev_, y_ev_, c='b', label='test results', linewidth=2)
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.subplot(1, 3, 2)
plt.plot(loss_log_, c='r')
plt.xlabel('epochs')
plt.ylabel('mean squared error')
plt.subplot(1, 3, 3)
plt.plot(weight_log_)
plt.xlabel('epochs')
plt.ylabel('weights')
plt.show()
def plot_vector_field(what, init_weights=None):
"""
"""
n_epochs=40
lr=0.15
x_pos = np.linspace(2.0, 0.5, 100, endpoint=True)
y_pos = 1. / x_pos
xx, yy = np.mgrid[-1.9:2.0:0.2, -1.9:2.0:0.2]
zz = np.empty_like(xx)
x, y = xx[:, 0], yy[0]
x_temp, y_temp = gen_samples(10, 1.0, 0.0)
cmap = matplotlib.cm.plasma
plt.figure(figsize=(8, 7))
ax = plt.gca()
if what == 'all' or what == 'vectors':
for i, a in enumerate(x):
for j, b in enumerate(y):
temp_model = ShallowNarrowLNN([a, b])
da, db = temp_model.dloss_dw(x_temp, y_temp)
zz[i, j] = temp_model.loss(temp_model.forward(x_temp), y_temp)
scale = min(40 * np.sqrt(da**2 + db**2), 50)
ax.quiver(a, b, - da, - db, scale=scale, color=cmap(np.sqrt(da**2 + db**2)))
if what == 'all' or what == 'trajectory':
if init_weights is None:
for init_weights in [[0.5, -0.5], [0.55, -0.45], [-1.8, 1.7]]:
temp_model = ShallowNarrowLNN(init_weights)
_, temp_records = temp_model.train(x_temp, y_temp, lr, n_epochs)
ax.scatter(temp_records[:, 0], temp_records[:, 1],
c=np.arange(len(temp_records)), cmap='Greys')
ax.scatter(temp_records[0, 0], temp_records[0, 1], c='blue', zorder=9)
ax.scatter(temp_records[-1, 0], temp_records[-1, 1], c='red', marker='X', s=100, zorder=9)
else:
temp_model = ShallowNarrowLNN(init_weights)
_, temp_records = temp_model.train(x_temp, y_temp, lr, n_epochs)
ax.scatter(temp_records[:, 0], temp_records[:, 1],
c=np.arange(len(temp_records)), cmap='Greys')
ax.scatter(temp_records[0, 0], temp_records[0, 1], c='blue', zorder=9)
ax.scatter(temp_records[-1, 0], temp_records[-1, 1], c='red', marker='X', s=100, zorder=9)
if what == 'all' or what == 'loss':
contplt = ax.contourf(x, y, np.log(zz+0.001), zorder=-1, cmap='coolwarm', levels=100)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(contplt, cax=cax)
cbar.set_label('log (Loss)')
ax.set_xlabel("$w_1$")
ax.set_ylabel("$w_2$")
ax.set_xlim(-1.9, 1.9)
ax.set_ylim(-1.9, 1.9)
plt.show()
def plot_loss_landscape():
"""
"""
x_temp, y_temp = gen_samples(10, 1.0, 0.0)
xx, yy = np.mgrid[-1.9:2.0:0.2, -1.9:2.0:0.2]
zz = np.empty_like(xx)
x, y = xx[:, 0], yy[0]
for i, a in enumerate(x):
for j, b in enumerate(y):
temp_model = ShallowNarrowLNN([a, b])
zz[i, j] = temp_model.loss(temp_model.forward(x_temp), y_temp)
temp_model = ShallowNarrowLNN([-1.8, 1.7])
loss_rec_1, w_rec_1 = temp_model.train(x_temp, y_temp, 0.02, 240)
temp_model = ShallowNarrowLNN([1.5, -1.5])
loss_rec_2, w_rec_2 = temp_model.train(x_temp, y_temp, 0.02, 240)
plt.figure(figsize=(12, 8))
ax = plt.subplot(1, 1, 1, projection='3d')
ax.plot_surface(xx, yy, np.log(zz+0.5), cmap='coolwarm', alpha=0.5)
ax.scatter3D(w_rec_1[:, 0], w_rec_1[:, 1], np.log(loss_rec_1+0.5),
c='k', s=50, zorder=9)
ax.scatter3D(w_rec_2[:, 0], w_rec_2[:, 1], np.log(loss_rec_2+0.5),
c='k', s=50, zorder=9)
plt.axis("off")
ax.view_init(45, 260)
plt.show()
def depth_widget(depth):
if depth == 0:
depth_lr_init_interplay(depth, 0.02, 0.9)
else:
depth_lr_init_interplay(depth, 0.01, 0.9)
def lr_widget(lr):
depth_lr_init_interplay(50, lr, 0.9)
def depth_lr_interplay(depth, lr):
depth_lr_init_interplay(depth, lr, 0.9)
def depth_lr_init_interplay(depth, lr, init_weights):
n_epochs = 600
x_train, y_train = gen_samples(100, 2.0, 0.1)
model = DeepNarrowLNN(np.full((1, depth+1), init_weights))
plt.figure(figsize=(10, 5))
plt.plot(model.train(x_train, y_train, lr, n_epochs),
linewidth=3.0, c='m')
plt.title("Training a {}-layer LNN with"
" $\eta=${} initialized with $w_i=${}".format(depth, lr, init_weights), pad=15)
plt.yscale('log')
plt.xlabel('epochs')
plt.ylabel('Log mean squared error')
plt.ylim(0.001, 1.0)
plt.show()
def plot_init_effect():
depth = 15
n_epochs = 250
lr = 0.02
x_train, y_train = gen_samples(100, 2.0, 0.1)
plt.figure(figsize=(12, 6))
for init_w in np.arange(0.7, 1.09, 0.05):
model = DeepNarrowLNN(np.full((1, depth), init_w))
plt.plot(model.train(x_train, y_train, lr, n_epochs),
linewidth=3.0, label="initial weights {:.2f}".format(init_w))
plt.title("Training a {}-layer narrow LNN with $\eta=${}".format(depth, lr), pad=15)
plt.yscale('log')
plt.xlabel('epochs')
plt.ylabel('Log mean squared error')
plt.legend(loc='lower left', ncol=4)
plt.ylim(0.001, 1.0)
plt.show()
class InterPlay:
def __init__(self):
self.lr = [None]
self.depth = [None]
self.success = [None]
self.min_depth, self.max_depth = 5, 65
self.depth_list = np.arange(10, 61, 10)
self.i_depth = 0
self.min_lr, self.max_lr = 0.001, 0.105
self.n_epochs = 600
self.x_train, self.y_train = gen_samples(100, 2.0, 0.1)
self.converged = False
self.button = None
self.slider = None
def train(self, lr, update=False, init_weights=0.9):
if update and self.converged and self.i_depth < len(self.depth_list):
depth = self.depth_list[self.i_depth]
self.plot(depth, lr)
self.i_depth += 1
self.lr.append(None)
self.depth.append(None)
self.success.append(None)
self.converged = False
self.slider.value = 0.005
if self.i_depth < len(self.depth_list):
self.button.value = False
self.button.description = 'Explore!'
self.button.disabled = True
self.button.button_style = 'danger'
else:
self.button.value = False
self.button.button_style = ''
self.button.disabled = True
self.button.description = 'Done!'
time.sleep(1.0)
elif self.i_depth < len(self.depth_list):
depth = self.depth_list[self.i_depth]
# assert self.min_depth <= depth <= self.max_depth
assert self.min_lr <= lr <= self.max_lr
self.converged = False
model = DeepNarrowLNN(np.full((1, depth), init_weights))
self.losses = np.array(model.train(self.x_train, self.y_train, lr, self.n_epochs))
if np.any(self.losses < 1e-2):
success = np.argwhere(self.losses < 1e-2)[0][0]
if np.all((self.losses[success:] < 1e-2)):
self.converged = True
self.success[-1] = success
self.lr[-1] = lr
self.depth[-1] = depth
self.button.disabled = False
self.button.button_style = 'success'
self.button.description = 'Register!'
else:
self.button.disabled = True
self.button.button_style = 'danger'
self.button.description = 'Explore!'
else:
self.button.disabled = True
self.button.button_style = 'danger'
self.button.description = 'Explore!'
self.plot(depth, lr)
def plot(self, depth, lr):
fig = plt.figure(constrained_layout=False, figsize=(10, 8))
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[1, 1])
ax1.plot(self.losses, linewidth=3.0, c='m')
ax1.set_title("Training a {}-layer LNN with"
" $\eta=${}".format(depth, lr), pad=15, fontsize=16)
ax1.set_yscale('log')
ax1.set_xlabel('epochs')
ax1.set_ylabel('Log mean squared error')
ax1.set_ylim(0.001, 1.0)
ax2.set_xlim(self.min_depth, self.max_depth)
ax2.set_ylim(-10, self.n_epochs)
ax2.set_xlabel('Depth')
ax2.set_ylabel('Learning time (Epochs)')
ax2.set_title("Learning time vs depth", fontsize=14)
ax2.scatter(np.array(self.depth), np.array(self.success), c='r')
# ax3.set_yscale('log')
ax3.set_xlim(self.min_depth, self.max_depth)
ax3.set_ylim(self.min_lr, self.max_lr)
ax3.set_xlabel('Depth')
ax3.set_ylabel('Optimal learning rate')
ax3.set_title("Empirically optimal $\eta$ vs depth", fontsize=14)
ax3.scatter(np.array(self.depth), np.array(self.lr), c='r')
plt.show()_____no_output_____# @title Helper functions
atform = AirtableForm('appn7VdPRseSoMXEG','W1D2_T2','https://portal.neuromatchacademy.org/api/redirect/to/9c55f6cb-cdf9-4429-ac1c-ec44fe64c303')
def gen_samples(n, a, sigma):
"""
Generates `n` samples with the linear relation `y = a * x + noise(sigma)`.
Args:
n : int
a : float
sigma : float
Returns:
x : np.array
y : np.array
"""
assert n > 0
assert sigma >= 0
if sigma > 0:
x = np.random.rand(n)
noise = np.random.normal(scale=sigma, size=(n))
y = a * x + noise
else:
x = np.linspace(0.0, 1.0, n, endpoint=True)
y = a * x
return x, y
class ShallowNarrowLNN:
"""
Shallow and narrow (one neuron per layer) linear neural network
"""
def __init__(self, init_ws):
"""
init_ws: initial weights as a list
"""
assert isinstance(init_ws, list)
assert len(init_ws) == 2
self.w1 = init_ws[0]
self.w2 = init_ws[1]
def forward(self, x):
"""
The forward pass through the network: y = x * w1 * w2
"""
y = x * self.w1 * self.w2
return y
def loss(self, y_p, y_t):
"""
Mean squared error (L2 loss)
"""
assert y_p.shape == y_t.shape
mse = ((y_t - y_p)**2).mean()
return mse
def dloss_dw(self, x, y_t):
"""
partial derivative of loss with respect to weights
Args:
x : np.array
y_t : np.array
"""
assert x.shape == y_t.shape
Error = y_t - self.w1 * self.w2 * x
dloss_dw1 = - (2 * self.w2 * x * Error).mean()
dloss_dw2 = - (2 * self.w1 * x * Error).mean()
return dloss_dw1, dloss_dw2
def train(self, x, y_t, eta, n_ep):
"""
Gradient descent algorithm
Args:
x : np.array
y_t : np.array
eta: float
n_ep : int
"""
assert x.shape == y_t.shape
loss_records = np.empty(n_ep) # pre allocation of loss records
weight_records = np.empty((n_ep, 2)) # pre allocation of weight records
for i in range(n_ep):
y_p = self.forward(x)
loss_records[i] = self.loss(y_p, y_t)
dloss_dw1, dloss_dw2 = self.dloss_dw(x, y_t)
self.w1 -= eta * dloss_dw1
self.w2 -= eta * dloss_dw2
weight_records[i] = [self.w1, self.w2]
return loss_records, weight_records
class DeepNarrowLNN:
"""
Deep but thin (one neuron per layer) linear neural network
"""
def __init__(self, init_ws):
"""
init_ws: initial weights as a numpy array
"""
self.n = init_ws.size
self.W = init_ws.reshape(1, -1)
def forward(self, x):
"""
x : np.array
input features
"""
y = np.prod(self.W) * x
return y
def loss(self, y_t, y_p):
"""
mean squared error (L2 loss)
Args:
y_t : np.array
y_p : np.array
"""
assert y_p.shape == y_t.shape
mse = ((y_t - y_p)**2 / 2).mean()
return mse
def dloss_dw(self, x, y_t, y_p):
"""
analytical gradient of weights
Args:
x : np.array
y_t : np.array
y_p : np.array
"""
E = y_t - y_p # = y_t - x * np.prod(self.W)
Ex = np.multiply(x, E).mean()
Wp = np.prod(self.W) / (self.W + 1e-9)
dW = - Ex * Wp
return dW
def train(self, x, y_t, eta, n_epochs):
"""
training using gradient descent
Args:
x : np.array
y_t : np.array
eta: float
n_epochs : int
"""
loss_records = np.empty(n_epochs)
loss_records[:] = np.nan
for i in range(n_epochs):
y_p = self.forward(x)
loss_records[i] = self.loss(y_t, y_p).mean()
dloss_dw = self.dloss_dw(x, y_t, y_p)
if np.isnan(dloss_dw).any() or np.isinf(dloss_dw).any():
return loss_records
self.W -= eta * dloss_dw
return loss_records_____no_output_____#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` sets the seed
# for DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)_____no_output_____#@title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device_____no_output_____SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()_____no_output_____
</code>
---
# Section 1: A Shallow Narrow Linear Neural Network
*Time estimate: ~30 mins*_____no_output_____
<code>
# @title Video 1: Shallow Narrow Linear Net
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1F44y117ot", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"6e5JIYsqVvU", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('video 1: Shallow Narrow Linear Net')
display(out)_____no_output_____
</code>
## Section 1.1: A Shallow Narrow Linear Net_____no_output_____To better understand the behavior of neural network training with gradient descent, we start with the incredibly simple case of a shallow narrow linear neural net, since state-of-the-art models are impossible to dissect and comprehend with our current mathematical tools.
The model we use has one hidden layer, with only one neuron, and two weights. We consider the squared error (or L2 loss) as the cost function. As you may have already guessed, we can visualize the model as a neural network:
<center><img src="https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/shallow_narrow_nn.png" width="400"/></center>
<br/>
or by its computation graph:
<center><img src="https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/shallow_narrow.png" alt="Shallow Narrow Graph" width="400"/></center>
or on a rare occasion, even as a reasonably compact mapping:
$$ loss = (y - w_1 \cdot w_2 \cdot x)^2 $$
<br/>
Implementing a neural network from scratch without using any automatic differentiation tool is rarely necessary. The following two exercises are therefore **Bonus** (optional) exercises. Please skip them if you are short on time, and continue to Section 1.2._____no_output_____### Analytical Exercise 1.1: Loss Gradients (Optional)
Once again, we ask you to calculate the network gradients analytically, since you will need them for the next exercise. We understand how annoying this is.
$\dfrac{\partial{loss}}{\partial{w_1}} = ?$
$\dfrac{\partial{loss}}{\partial{w_2}} = ?$
<br/>
---
#### Solution
$\dfrac{\partial{loss}}{\partial{w_1}} = -2 \cdot w_2 \cdot x \cdot (y - w_1 \cdot w_2 \cdot x)$
$\dfrac{\partial{loss}}{\partial{w_2}} = -2 \cdot w_1 \cdot x \cdot (y - w_1 \cdot w_2 \cdot x)$
---
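A quick way to build confidence in these expressions is a finite-difference check (a minimal sketch; the sample values below are arbitrary and not part of the tutorial):

```python
import numpy as np

w1, w2, x, y = 1.4, -1.6, 0.7, 1.4

def loss_fn(w1, w2):
    return (y - w1 * w2 * x)**2

eps = 1e-6
numeric = (loss_fn(w1 + eps, w2) - loss_fn(w1 - eps, w2)) / (2 * eps)
analytic = -2 * w2 * x * (y - w1 * w2 * x)
print(np.isclose(numeric, analytic))  # True
```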
_____no_output_____### Coding Exercise 1.1: Implement simple narrow LNN (Optional)
Next, we ask you to implement the `forward` pass for our model from scratch without using PyTorch.
Also, although our model gets a single input feature and outputs a single prediction, we could calculate the loss and perform training for multiple samples at once. This is the common practice for neural networks, since computers are incredibly fast at matrix (or tensor) operations on batches of data, rather than processing samples one at a time through `for` loops. Therefore, for the `loss` function, please implement the **mean** squared error (MSE), and adjust your analytical gradients accordingly when implementing the `dloss_dw` function.
Finally, complete the `train` function for the gradient descent algorithm:
\begin{equation}
\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla loss (\mathbf{w}^{(t)})
\end{equation}_____no_output_____
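To make the update rule concrete before the exercise, here it is in isolation on a toy 1-D loss $(w - 3)^2$ (a minimal sketch, unrelated to the class below; $\eta$ and the step count are arbitrary):

```python
w, eta = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)   # gradient of the toy loss (w - 3)**2
    w -= eta * grad      # w  <-  w - eta * grad
print(w)  # ~3.0, the minimizer
```_____no_output_____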
<code>
class ShallowNarrowExercise:
"""Shallow and narrow (one neuron per layer) linear neural network
"""
def __init__(self, init_weights):
"""
Args:
init_weights (list): initial weights
"""
assert isinstance(init_weights, (list, np.ndarray, tuple))
assert len(init_weights) == 2
self.w1 = init_weights[0]
self.w2 = init_weights[1]
def forward(self, x):
"""The forward pass through netwrok y = x * w1 * w2
Args:
x (np.ndarray): features (inputs) to neural net
returns:
(np.ndarray): neural network output (prediction)
"""
#################################################
## Implement the forward pass to calculate prediction
## Note that prediction is not the loss
# Complete the function and remove or comment the line below
raise NotImplementedError("Forward Pass `forward`")
#################################################
y = ...
return y
def dloss_dw(self, x, y_true):
"""Gradient of loss with respect to weights
Args:
x (np.ndarray): features (inputs) to neural net
y_true (np.ndarray): true labels
returns:
(float): mean gradient of loss with respect to w1
(float): mean gradient of loss with respect to w2
"""
assert x.shape == y_true.shape
#################################################
## Implement the gradient computation function
# Complete the function and remove or comment the line below
raise NotImplementedError("Gradient of Loss `dloss_dw`")
#################################################
dloss_dw1 = ...
dloss_dw2 = ...
return dloss_dw1, dloss_dw2
def train(self, x, y_true, lr, n_ep):
"""Training with Gradient descent algorithm
Args:
x (np.ndarray): features (inputs) to neural net
y_true (np.ndarray): true labels
lr (float): learning rate
n_ep (int): number of epochs (training iterations)
returns:
(list): training loss records
(list): training weight records (evolution of weights)
"""
assert x.shape == y_true.shape
loss_records = np.empty(n_ep) # pre allocation of loss records
weight_records = np.empty((n_ep, 2)) # pre allocation of weight records
for i in range(n_ep):
y_prediction = self.forward(x)
loss_records[i] = loss(y_prediction, y_true)
dloss_dw1, dloss_dw2 = self.dloss_dw(x, y_true)
#################################################
## Implement the gradient descent step
# Complete the function and remove or comment the line below
raise NotImplementedError("Training loop `train`")
#################################################
self.w1 -= ...
self.w2 -= ...
weight_records[i] = [self.w1, self.w2]
return loss_records, weight_records
def loss(y_prediction, y_true):
"""Mean squared error
Args:
y_prediction (np.ndarray): model output (prediction)
y_true (np.ndarray): true label
returns:
(np.ndarray): mean squared error loss
"""
assert y_prediction.shape == y_true.shape
#################################################
## Implement the MEAN squared error
# Complete the function and remove or comment the line below
raise NotImplementedError("Loss function `loss`")
#################################################
mse = ...
return mse
#add event to airtable
atform.add_event('Coding Exercise 1.1: Implement simple narrow LNN')
set_seed(seed=SEED)
n_epochs = 211
learning_rate = 0.02
initial_weights = [1.4, -1.6]
x_train, y_train = gen_samples(n=73, a=2.0, sigma=0.2)
x_eval = np.linspace(0.0, 1.0, 37, endpoint=True)
## Uncomment to run
# sn_model = ShallowNarrowExercise(initial_weights)
# loss_log, weight_log = sn_model.train(x_train, y_train, learning_rate, n_epochs)
# y_eval = sn_model.forward(x_eval)
# plot_x_y_(x_train, y_train, x_eval, y_eval, loss_log, weight_log)_____no_output_____
</code>
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial2_Solution_46492cd6.py)
*Example output:*
<img alt='Solution hint' align='left' width=1696.0 height=544.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/W1D2_Tutorial2_Solution_46492cd6_1.png>
_____no_output_____## Section 1.2: Learning landscapes_____no_output_____
<code>
# @title Video 2: Training Landscape
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Nv411J71X", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"k28bnNAcOEg", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 2: Training Landscape')
display(out)_____no_output_____
</code>
As you may have already asked yourself, we can analytically find $w_1$ and $w_2$ without using gradient descent:
\begin{equation}
w_1 \cdot w_2 = \dfrac{y}{x}
\end{equation}
In fact, we can plot the gradients, the loss function and all the possible solutions in one figure. In this example, we use the $y = 1x$ mapping:
**Blue ribbon**: shows all possible solutions: $~ w_1 w_2 = \dfrac{y}{x} = \dfrac{x}{x} = 1 \Rightarrow w_1 = \dfrac{1}{w_2}$
**Contour background**: Shows the loss values, red being higher loss
**Vector field (arrows)**: shows the gradient vector field. The larger yellow arrows show larger gradients, which correspond to bigger steps by gradient descent.
**Scatter circles**: the trajectory (evolution) of weights during training for three different initializations, with blue dots marking the start of training and red crosses ( **x** ) marking the end of training. You can also try your own initializations (keep the initial values between `-2.0` and `2.0`) as shown here:
```python
plot_vector_field('all', [1.0, -1.0])
```
Finally, if the plot is too crowded, feel free to pass one of the following strings as argument:
```python
plot_vector_field('vectors') # for vector field
plot_vector_field('trajectory') # for training trajectory
plot_vector_field('loss') # for loss contour
```
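As a quick sanity check of this solution set (a minimal sketch; the sampled `w2` values are arbitrary), any pair on the hyperbola $w_1 = 1/w_2$ gives zero loss for the noiseless $y = 1x$ mapping:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10)
y = 1.0 * x
for w2 in [0.5, 1.0, 2.0]:
    w1 = 1.0 / w2                          # a point on the blue ribbon
    print(((y - w1 * w2 * x)**2).mean())   # 0.0 for every pair
```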
**Think!**
Explore the next two plots. Try different initial values. Can you find the saddle point? Why does training slow down near the minima?_____no_output_____
<code>
plot_vector_field('all')_____no_output_____
</code>
Here, we also visualize the loss landscape in a 3-D plot, with two training trajectories for different initial conditions.
Note: the trajectories from the 3D plot and the previous plot are independent and different._____no_output_____
<code>
plot_loss_landscape()_____no_output_____# @title Student Response
from ipywidgets import widgets
text=widgets.Textarea(
value='Type your answer here and push Submit',
placeholder='Type something',
description='',
disabled=False
)
button = widgets.Button(description="Submit!")
display(text,button)
def on_button_clicked(b):
atform.add_answer('q1', text.value)
print("Submission successful!")
button.on_click(on_button_clicked)_____no_output_____# @title Video 3: Training Landscape - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1py4y1j7cv", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"0EcUGgxOdkI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 3: Training Landscape - Discussion')
display(out)_____no_output_____
</code>
---
# Section 2: Depth, Learning rate, and initialization
*Time estimate: ~45 mins*_____no_output_____Successful deep learning models are often developed by a team of very clever people, spending many many hours "tuning" learning hyperparameters, and finding effective initializations. In this section, we look at three basic (but often not simple) hyperparameters: depth, learning rate, and initialization._____no_output_____## Section 2.1: The effect of depth_____no_output_____
<code>
# @title Video 4: Effect of Depth
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1z341167di", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"Ii_As9cRR5Q", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 4: Effect of Depth')
display(out)_____no_output_____
</code>
Why might depth be useful? What makes a network or learning system "deep"? The reality is that shallow neural nets are often incapable of learning complex functions due to data limitations. On the other hand, depth seems like magic. Depth can change the functions a network can represent, the way a network learns, and how a network generalizes to unseen data.
So let's look at the challenges that depth poses in training a neural network. Imagine a single input, single output linear network with 50 hidden layers and only one neuron per layer (i.e. a narrow deep neural network). The output of the network is easy to calculate:
$$ prediction = x \cdot w_1 \cdot w_2 \cdots w_{50} $$
If the initial value for all the weights is $w_i = 2$, the prediction for $x=1$ would be **exploding**: $y_p = 2^{50} \approx 1.1259 \times 10^{15}$. On the other hand, for weights initialized to $w_i = 0.5$, the output is **vanishing**: $y_p = 0.5^{50} \approx 8.88 \times 10^{-16}$. Similarly, if we recall the chain rule, as the graph gets deeper, the number of elements in the chain multiplication increases, which could lead to exploding or vanishing gradients. To avoid such numerical vulnerabilities that could impair our training algorithm, we need to understand the effect of depth.
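These two numbers are easy to reproduce (a minimal sketch):

```python
import numpy as np

print(np.prod(np.full(50, 2.0)))   # ~1.1259e+15, exploding
print(np.prod(np.full(50, 0.5)))   # ~8.8818e-16, vanishing
```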
_____no_output_____### Interactive Demo 2.1: Depth widget
Use the widget to explore the impact of depth on the training curve (loss evolution) of a deep but narrow neural network.
**Think!**
Which networks trained the fastest? Did all networks eventually "work" (converge)? What is the shape of their learning trajectory?_____no_output_____
<code>
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(depth_widget,
depth = IntSlider(min=0, max=51,
step=5, value=0,
continuous_update=False))_____no_output_____# @title Video 5: Effect of Depth - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qq4y1H7uk", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"EqSDkwmSruk", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 5: Effect of Depth - Discussion')
display(out)_____no_output_____
</code>
## Section 2.2: Choosing a learning rate_____no_output_____The learning rate is a common hyperparameter for most optimization algorithms. How should we set it? Sometimes the only option is to try all the possibilities, but sometimes knowing some key trade-offs will help guide our search for good hyperparameters._____no_output_____
<code>
# @title Video 6: Learning Rate
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV11f4y157MT", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"w_GrCVM-_Qo", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 6: Learning Rate')
display(out)_____no_output_____
</code>
### Interactive Demo 2.2: Learning rate widget
Here, we fix the network depth to 50 layers. Use the widget to explore the impact of learning rate $\eta$ on the training curve (loss evolution) of a deep but narrow neural network.
**Think!**
Can we say that larger learning rates always lead to faster learning? Why not? _____no_output_____
<code>
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(lr_widget,
lr = FloatSlider(min=0.005, max=0.045, step=0.005, value=0.005,
continuous_update=False, readout_format='.3f',
description='eta'))_____no_output_____# @title Video 7: Learning Rate - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Aq4y1p7bh", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"cmS0yqImz2E", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 7: Learning Rate - Discussion')
display(out)_____no_output_____
</code>
## Section 2.3: Depth vs Learning Rate_____no_output_____
<code>
# @title Video 8: Depth and Learning Rate
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1V44y1177e", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"J30phrux_3k", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 8: Depth and Learning Rate')
display(out)_____no_output_____
</code>
### Interactive Demo 2.3: Depth and Learning-Rate
_____no_output_____**Important instruction**
The exercise starts with 10 hidden layers. Your task is to find the learning rate that delivers fast but robust convergence (learning). When you are confident about the learning rate, you can **Register** the optimal learning rate for the given depth. Once you press Register, a deeper model is instantiated, so you can find the next optimal learning rate. The Register button turns green only when training converges, although convergence alone does not mean it was the fastest possible. Finally, be patient :) the widgets are slow.
**Think!**
Can you explain the relationship between the depth and optimal learning rate?_____no_output_____
<code>
# @markdown Make sure you execute this cell to enable the widget!
intpl_obj = InterPlay()
intpl_obj.slider = FloatSlider(min=0.005, max=0.105, step=0.005, value=0.005,
layout=Layout(width='500px'),
continuous_update=False,
readout_format='.3f',
description='eta')
intpl_obj.button = ToggleButton(value=intpl_obj.converged, description='Register')
widgets_ui = HBox([intpl_obj.slider, intpl_obj.button])
widgets_out = interactive_output(intpl_obj.train,
{'lr': intpl_obj.slider,
'update': intpl_obj.button,
'init_weights': fixed(0.9)})
display(widgets_ui, widgets_out)_____no_output_____# @title Video 9: Depth and Learning Rate - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV15q4y1p7Uq", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"7Fl8vH7cgco", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 9: Depth and Learning Rate - Discussion')
display(out)_____no_output_____
</code>
## Section 2.4: Why initialization is important_____no_output_____
<code>
# @title Video 10: Initialization Matters
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1UL411J7vu", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"KmqCz95AMzY", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 10: Initialization Matters')
display(out)_____no_output_____
</code>
We’ve seen, even in the simplest of cases, that depth can slow learning. Why? From the chain rule, gradients are multiplied by the current weight at each layer, so the product can vanish or explode. Therefore, weight initialization is a fundamentally important hyperparameter.
Although in practice initial values for learnable parameters are often sampled from different $\mathcal{Uniform}$ or $\mathcal{Normal}$ probability distributions, here we use a single value for all the parameters.
The figure below shows the effect of initialization on the speed of learning for the deep but narrow LNN. We have excluded initializations that lead to numerical errors such as `nan` or `inf`, which are the consequence of initializations that are too small or too large._____no_output_____
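To see this numerically, we can inspect the first-step gradient magnitude of the `DeepNarrowLNN` helper defined above for a few initial weight values (a minimal sketch; the depth and sample settings mirror the figure below):

```python
x_tmp, y_tmp = gen_samples(100, 2.0, 0.1)
for init_w in [0.7, 0.9, 1.0, 1.1]:
    model = DeepNarrowLNN(np.full((1, 15), init_w))
    dW = model.dloss_dw(x_tmp, y_tmp, model.forward(x_tmp))
    # the gradient scale varies by orders of magnitude with the initial weight
    print(init_w, np.abs(dW).max())
```_____no_output_____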
<code>
# @markdown Make sure you execute this cell to see the figure!
plot_init_effect()_____no_output_____# @title Video 11: Initialization Matters Explained
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hM4y1T7gJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"vKktGdiQDsE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 11: Initialization Matters Explained')
display(out)_____no_output_____
</code>
---
# Summary
In this second tutorial, we have learned what the training landscape is, and we have seen in depth the effect of network depth and learning rate, and their interplay. Finally, we have seen that initialization matters and why we need smart ways of initializing our networks._____no_output_____
<code>
# @title Video 12: Tutorial 2 Wrap-up
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1P44y117Pd", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r3K8gtak3wA", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 12: Tutorial 2 Wrap-up')
display(out)_____no_output_____
# @title Airtable Submission Link
from IPython import display as IPydisplay
IPydisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/AirtableSubmissionButton.png?raw=1"
alt="button link to Airtable" style="width:410px"></a>
</div>""" )_____no_output_____
</code>
---
# Bonus_____no_output_____## Hyperparameter interaction
Finally, let's put everything we have learned together and find the best initial weights and learning rate for a given depth. By now you should have learned the interactions and know how to find the optimal values quickly. If you get `numerical overflow` warnings, don't be discouraged! They are often caused by "exploding" or "vanishing" gradients.
**Think!**
Did you experience any surprising behaviour or difficulty finding the optimal parameters?_____no_output_____
<code>
# @markdown Make sure you execute this cell to enable the widget!
_ = interact(depth_lr_init_interplay,
depth = IntSlider(min=10, max=51, step=5, value=25,
continuous_update=False),
lr = FloatSlider(min=0.001, max=0.1,
step=0.005, value=0.005,
continuous_update=False,
readout_format='.3f',
description='eta'),
init_weights = FloatSlider(min=0.1, max=3.0,
step=0.1, value=0.9,
continuous_update=False,
readout_format='.3f',
description='initial weights'))_____no_output_____
</code>
| {
"repository": "moekay/course-content-dl",
"path": "tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial2.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 65160,
"hexsha": "cb6dc4a318bbc8cf8b0cf26053528cda3eb3156c",
"max_line_length": 602,
"avg_line_length": 34.8449197861,
"alphanum_fraction": 0.5464702271
} |
# Notebook from caganze/popsims
Path: notebooks/gizmo_read_tutorial.ipynb
# tutorial for reading a Gizmo snapshot
@author: Andrew Wetzel <[email protected]>_____no_output_____
<code>
# First, move within a simulation directory, or point 'directory' below to a simulation directory.
# This directory should contain either a snapshot file
# snapshot_???.hdf5
# or a snapshot directory
# snapdir_???
# In general, the simulation directory also should contain a text file:
# m12*_center.txt
# that contains pre-computed galaxy center coordinates
# and rotation vectors to align with the principal axes of the galaxy,
# although that file is not required to read a snapshot.
# The simulation directory also may contain text files:
# m12*_LSR{0,1,2}.txt
# that contains the local standard of rest (LSR) coordinates
# used by Ananke in creating Gaia synthetic surveys._____no_output_____# Ensure that your python path points to this python package, then:
import gizmo_read_____no_output_____directory = '.' # if running this notebook from within a simulation directory
#directory = 'm12i/' # if running higher-level directory
#directory = 'm12f/' # if running higher-level directory
#directory = 'm12m/' # if running higher-level directory_____no_output_____
</code>
# read particle data from a snapshot_____no_output_____
<code>
# read star particles (all properties)
part = gizmo_read.read.Read.read_snapshot(species='star', directory=directory)reading header from:
snapdir_600/snapshot_600.0.hdf5
snapshot contains the following number of particles:
star (id = 4): 13976485 particles
reading star properties:
['form.scalefactor', 'id', 'mass', 'massfraction', 'position', 'potential', 'velocity']
reading particles from:
snapshot_600.0.hdf5
snapshot_600.1.hdf5
snapshot_600.2.hdf5
snapshot_600.3.hdf5
reading galaxy center coordinates and principal axes from: m12i_res7100_center.txt
center position [kpc] = 41792.145, 44131.235, 46267.676
center velocity [km/s] = -52.5, 71.9, 95.2
adjusting particle coordinates to be relative to galaxy center
and aligned with the principal axes
# alternately, read all particle species (stars, gas, dark matter)
part = gizmo_read.read.Read.read_snapshot(species='all', directory=directory)reading header from:
snapdir_600/snapshot_600.0.hdf5
snapshot contains the following number of particles:
dark (id = 1): 70514272 particles
dark.2 (id = 2): 5513331 particles
gas (id = 0): 57060074 particles
star (id = 4): 13976485 particles
reading dark properties:
['id', 'mass', 'position', 'potential', 'velocity']
reading dark.2 properties:
['id', 'mass', 'position', 'potential', 'velocity']
reading gas properties:
['density', 'electron.fraction', 'hydrogen.neutral.fraction', 'id', 'mass', 'massfraction', 'position', 'potential', 'temperature', 'velocity']
reading star properties:
['form.scalefactor', 'id', 'mass', 'massfraction', 'position', 'potential', 'velocity']
reading particles from:
snapshot_600.0.hdf5
snapshot_600.1.hdf5
snapshot_600.2.hdf5
snapshot_600.3.hdf5
reading galaxy center coordinates and principal axes from: m12i_res7100_center.txt
center position [kpc] = 41792.145, 44131.235, 46267.676
center velocity [km/s] = -52.5, 71.9, 95.2
adjusting particle coordinates to be relative to galaxy center
and aligned with the principal axes
# alternately, read just stars and dark matter (or any combination of species)
part = gizmo_read.read.Read.read_snapshot(species=['star', 'dark'], directory=directory)reading header from:
snapdir_600/snapshot_600.0.hdf5
snapshot contains the following number of particles:
star (id = 4): 13976485 particles
dark (id = 1): 70514272 particles
reading star properties:
['form.scalefactor', 'id', 'mass', 'massfraction', 'position', 'potential', 'velocity']
reading dark properties:
['id', 'mass', 'position', 'potential', 'velocity']
reading particles from:
snapshot_600.0.hdf5
snapshot_600.1.hdf5
snapshot_600.2.hdf5
snapshot_600.3.hdf5
reading galaxy center coordinates and principal axes from: m12i_res7100_center.txt
center position [kpc] = 41792.145, 44131.235, 46267.676
center velocity [km/s] = -52.5, 71.9, 95.2
adjusting particle coordinates to be relative to galaxy center
and aligned with the principal axes
# alternately, read only a subset of particle properties (to save memory)
part = gizmo_read.read.Read.read_snapshot(species='star', properties=['position', 'velocity', 'mass'], directory=directory)reading header from:
m12i/m12i_res7100/output/snapdir_600/snapshot_600.0.hdf5
snapshot contains the following number of particles:
star (id = 4): 13976485 particles
read star : ['mass', 'position', 'velocity']
reading particles from:
snapshot_600.0.hdf5
snapshot_600.1.hdf5
snapshot_600.2.hdf5
snapshot_600.3.hdf5
reading galaxy center coordinates and principal axes from:
m12i/m12i_res7100/output/m12i_res7100_center.txt
center position [kpc] = 41792.145, 44131.235, 46267.676
center velocity [km/s] = -52.5, 71.9, 95.2
# also can use particle_subsample_factor to periodically sub-sample particles, to save memory
part = gizmo_read.read.Read.read_snapshot(species='all', directory=directory, particle_subsample_factor=10)reading header from:
snapdir_600/snapshot_600.0.hdf5
snapshot contains the following number of particles:
dark (id = 1): 70514272 particles
dark.2 (id = 2): 5513331 particles
gas (id = 0): 57060074 particles
star (id = 4): 13976485 particles
reading dark properties:
['id', 'mass', 'position', 'potential', 'velocity']
reading dark.2 properties:
['id', 'mass', 'position', 'potential', 'velocity']
reading gas properties:
['density', 'electron.fraction', 'hydrogen.neutral.fraction', 'id', 'mass', 'massfraction', 'position', 'potential', 'temperature', 'velocity']
reading star properties:
['form.scalefactor', 'id', 'mass', 'massfraction', 'position', 'potential', 'velocity']
reading particles from:
snapshot_600.0.hdf5
snapshot_600.1.hdf5
snapshot_600.2.hdf5
snapshot_600.3.hdf5
periodically subsampling all particles by factor = 10
reading galaxy center coordinates and principal axes from: m12i_res7100_center.txt
center position [kpc] = 41792.145, 44131.235, 46267.676
center velocity [km/s] = -52.5, 71.9, 95.2
adjusting particle coordinates to be relative to galaxy center
and aligned with the principal axes
</code>
# species dictionary_____no_output_____
<code>
# each particle species is stored as its own dictionary
# 'star' = stars, 'gas' = gas, 'dark' = dark matter, 'dark.2' = low-resolution dark matter
part.keys()_____no_output_____# properties of particles are stored as a dictionary_____no_output_____# properties of star particles
for k in part['star'].keys():
print(k)position
mass
massfraction
id
potential
form.scalefactor
velocity
age
metallicity.total
metallicity.he
metallicity.c
metallicity.n
metallicity.o
metallicity.ne
metallicity.mg
metallicity.si
metallicity.s
metallicity.ca
metallicity.fe
# properties of dark matter particles
for k in part['dark'].keys():
print(k)position
mass
id
potential
velocity
# properties of gas particles
for k in part['gas'].keys():
print(k)position
density
electron.fraction
temperature
mass
massfraction
hydrogen.neutral.fraction
id
potential
velocity
metallicity.total
metallicity.he
metallicity.c
metallicity.n
metallicity.o
metallicity.ne
metallicity.mg
metallicity.si
metallicity.s
metallicity.ca
metallicity.fe
</code>
# particle coordinates_____no_output_____
<code>
# 3-D position of star particle (particle number x dimension number) in cartesian coordinates [kpc physical]
# if directory contains file m12*_center.txt, this reader automatically reads this file and
# converts all positions to be in galactocentric coordinates, aligned with principal axes of the galaxy
part['star']['position']_____no_output_____# you can convert these to cylindrical coordinates...
star_positions_cylindrical = gizmo_read.coordinate.get_positions_in_coordinate_system(
part['star']['position'], system_to='cylindrical')
print(star_positions_cylindrical)[[ 2.98728375e+04 9.79966007e+01 5.11315470e+00]
[ 8.44916513e+00 -1.42074969e-01 4.81602573e+00]
[ 8.56924321e+00 -9.42783421e-02 4.78167971e+00]
...
[ 2.07095818e+03 3.64950773e+01 2.72158217e+00]
[ 2.09849995e+03 4.84814836e+01 2.71389451e+00]
[ 2.05142556e+03 1.05859972e+03 2.82709171e-01]]
# or spherical coordinates
star_positions_spherical = gizmo_read.coordinate.get_positions_in_coordinate_system(
part['star']['position'], system_to='spherical')
print(star_positions_spherical)[[2.98729983e+04 1.56751588e+00 5.11315470e+00]
[8.45035956e+00 1.58761001e+00 4.81602573e+00]
[8.56976181e+00 1.58179783e+00 4.78167971e+00]
...
[2.07127972e+03 1.55317584e+00 2.72158217e+00]
[2.09905991e+03 1.54769751e+00 2.71389451e+00]
[2.30845840e+03 1.09440612e+00 2.82709171e-01]]
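# under the hood this is the standard cartesian -> cylindrical transform;
# a minimal sketch (note the printed values above suggest the package orders
# cylindrical columns as (R, z, phi)):
import numpy as np
xyz = part['star']['position']
R = np.sqrt(xyz[:, 0]**2 + xyz[:, 1]**2)
phi = np.arctan2(xyz[:, 1], xyz[:, 0]) % (2 * np.pi)  # wrap angle to [0, 2*pi)
print(np.c_[R, xyz[:, 2], phi])  # should match star_positions_cylindrical_____no_output_____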
# 3-D velocity of star particle (particle number x dimension number) in cartesian coordinates [km/s]
part['star']['velocity']_____no_output_____# you can convert these to cylindrical coordinates...
star_velocities_cylindrical = gizmo_read.coordinate.get_velocities_in_coordinate_system(
part['star']['velocity'], part['star']['position'], system_to='cylindrical')
print(star_velocities_cylindrical)[[ 3.26827881e+03 7.08891220e+01 -2.79372520e+01]
[-3.65740891e+01 1.09564304e+01 1.62347977e+02]
[ 2.74234409e+01 -7.61478271e+01 2.27197754e+02]
...
[ 1.33282959e+02 2.58180070e+00 2.25322895e+01]
[ 1.26326935e+02 1.60031185e+01 1.44918041e+01]
[ 1.65938049e+02 9.77062912e+01 -2.39563694e+01]]
# or spherical coordinates
star_velocities_spherical = gizmo_read.coordinate.get_velocities_in_coordinate_system(
part['star']['velocity'], part['star']['position'], system_to='spherical')
print(star_velocities_spherical)[[ 3.2684939e+03 -6.0167347e+01 -2.7937252e+01]
[-3.6753128e+01 -1.0339966e+01 1.6234798e+02]
[ 2.8259504e+01 7.5841530e+01 2.2719775e+02]
...
[ 1.3330775e+02 -2.3301035e-01 2.2532290e+01]
[ 1.2666286e+02 -1.3081106e+01 1.4491804e+01]
[ 1.9226746e+02 -1.0732359e+01 -2.3956369e+01]]
# the galaxy center position [kpc comoving] and velocity [km/s] are stored via
print(part.center_position)
print(part.center_velocity)[41792.14534 44131.23473 46267.67629]
[-52.45083 71.85282 95.19746]
# the rotation vectors to align with the principal axes are stored via
print(part.principal_axes_vectors)[[ 0.11681398 -0.98166206 0.1506456 ]
[-0.86026934 -0.02421714 0.50926436]
[-0.49627729 -0.18908499 -0.84732267]]
</code>
# LSR coordinates for mock_____no_output_____
<code>
# you can read the assumed local standard of rest (LSR) coordinates used in the Ananke mock catalogs
# you need to input which LSR to use (currently 0, 1, or 2, because we use 3 per galaxy)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=0)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=1)
gizmo_read.read.Read.read_lsr_coordinates(part, directory=directory, lsr_index=2)reading LSR coordinates from:
m12i_res7100_LSR0.txt
LSR_0 position [kpc] = 0.000, 8.200, 0.000
LSR_0 velocity [km/s] = -224.7, -20.4, 3.9
reading LSR coordinates from:
m12i_res7100_LSR1.txt
LSR_1 position [kpc] = -7.101, -4.100, 0.000
LSR_1 velocity [km/s] = 87.3, -186.9, -9.5
reading LSR coordinates from:
m12i_res7100_LSR2.txt
LSR_2 position [kpc] = 7.101, -4.100, 0.000
LSR_2 velocity [km/s] = 80.4, 191.7, 1.5
# the particle catalog can store one LSR at a time via
print(part.lsr_position)
print(part.lsr_velocity)[ 7.1014 -4.1 0. ]
[ 80.4269 191.724 1.5039]
# you can convert coordinates to be relative to LSR via
star_positions_wrt_lsr = part['star']['position'] - part.lsr_position
star_velocities_wrt_lsr = part['star']['velocity'] - part.lsr_velocity
print(star_positions_wrt_lsr)
print(star_velocities_wrt_lsr)[[ 1.16469946e+04 -2.75016897e+04 9.79966007e+01]
[-6.22732260e+00 -4.30383128e+00 -1.42074969e-01]
[-6.50810594e+00 -4.44868009e+00 -9.42783421e-02]
...
[-1.89806156e+03 8.48574680e+02 3.64950773e+01]
[-1.91657460e+03 8.74510327e+02 4.84814836e+01]
[ 1.96288917e+03 5.76362177e+02 1.05859972e+03]]
[[ 1.1688820e+03 -3.2119316e+03 6.9385223e+01]
[ 7.7266365e+01 -1.3855103e+02 9.4525299e+00]
[ 1.4812433e+02 -2.0335153e+02 -7.7651726e+01]
...
[-2.1131351e+02 -1.5794910e+02 1.0779006e+00]
[-2.0138556e+02 -1.5251286e+02 1.4499218e+01]
[ 8.5606773e+01 -1.6843958e+02 9.6202393e+01]]
</code>
# other particle properties_____no_output_____
<code>
# mass of star particle [M_sun]
# note that star particles are created with an initial mass of ~7070 Msun,
# but because of stellar mass loss they can be less massive by z = 0
# a few star particles form from slightly higher-mass gas particles
# (because gas particles gain mass via stellar mass loss)
# so some star particles are a little more massive than 7070 Msun
part['star']['mass']_____no_output_____# formation scale-factor of star particle
part['star']['form.scalefactor']_____no_output_____# or more usefully, the current age of star particle (the lookback time to when it formed) [Gyr]
part['star']['age']_____no_output_____# gravitational potential at position of star particle [km^2 / s^2 physical]
# note: normalization is arbitrary
part['star']['potential']_____no_output_____# ID of star particle
# NOTE: Ananke uses/references the *index* (within this array) of star particles, *not* their ID!
# (because for technical reasons some star particles can end up with the same ID)
# So you generally should never have to use this ID!
part['star']['id']_____no_output_____
</code>
# metallicities_____no_output_____
<code>
# elemental abundance (metallicity) is stored natively as *linear mass fraction*
# one value for each element, in a particle_number x element_number array
# the first value is the mass fraction of all metals (everything not H, He)
# 0 = all metals (everything not H, He), 1 = He, 2 = C, 3 = N, 4 = O, 5 = Ne, 6 = Mg, 7 = Si, 8 = S, 9 = Ca, 10 = Fe
part['star']['massfraction']_____no_output_____# get individual elements by their index
# total metal mass fraction (everything not H, He) is index 0
print(part['star']['massfraction'][:, 0])
# iron is index 10
print(part['star']['massfraction'][:, 10])[6.0437708e-03 3.2043904e-02 4.6177451e-02 ... 7.9349702e-04 5.3998221e-05
1.9502458e-03]
[2.1929388e-04 1.3037791e-03 1.7796059e-03 ... 3.1033269e-05 8.7730750e-06
6.7164707e-05]
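# a convenience mapping from element name to column index, following the
# ordering documented above (a sketch; this dict is ours, not part of the package)
element_index = {'metals': 0, 'he': 1, 'c': 2, 'n': 3, 'o': 4, 'ne': 5, 'mg': 6, 'si': 7, 's': 8, 'ca': 9, 'fe': 10}
print(part['star']['massfraction'][:, element_index['fe']])_____no_output_____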
# for convenience, this reader also stores 'metallicity' := log10(mass_fraction / mass_fraction_solar)
# where mass_fraction_solar is from Asplund et al 2009
print(part['star']['metallicity.total'])
print(part['star']['metallicity.fe'])
print(part['star']['metallicity.o'])[-0.3457968 0.37864065 0.53732514 ... -1.2275594 -2.3947253
-0.8370154 ]
[-0.77062804 0.00354949 0.13866928 ... -1.619827 -2.1685026
-1.2845135 ]
[-0.23240621 0.4630599 0.62019897 ... -1.125268 -2.4857357
-0.69058067]
# see gizmo_read.constant for assumed solar values (Asplund et al 2009) and other constants
gizmo_read.constant.sun_composition_____no_output_____
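# the stored 'metallicity' values can be reproduced by hand; a minimal sketch,
# assuming the Asplund et al 2009 solar total metal mass fraction Z_sun = 0.0134
import numpy as np
print(np.log10(part['star']['massfraction'][:, 0] / 0.0134))
# this should match part['star']['metallicity.total'] printed above_____no_output_____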
</code>
# additional information stored in sub-dictionaries_____no_output_____
<code>
# dictionary of 'header' information about the simulation
part.info_____no_output_____# dictionary of information about this snapshot's scale-factor, redshift, time, lookback-time
part.snapshot_____no_output_____# dictionary class of cosmological parameters, with function for cosmological conversions
part.Cosmology_____no_output_____
</code>
See gizmo_read.constant for assumed (astro)physical constants used throughout.
See gizmo_read.coordinate for more coordinate transformations, zoom-in center _____no_output_____
| {
"repository": "caganze/popsims",
"path": "notebooks/gizmo_read_tutorial.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 35248,
"hexsha": "cb6ec754ea38d5a32874f84de3b2b7ebddeb615c",
"max_line_length": 279,
"avg_line_length": 29.8458933108,
"alphanum_fraction": 0.5436904222
} |
# Notebook from mmithil/Web-Mining-Repo
Path: .ipynb_checkpoints/Updated_goodreads_script-checkpoint.ipynb
<code>
import urllib2
from bs4 import BeautifulSoup
import csv
import time
import re
import sys
import xml.etree.ElementTree as ET
import os
import random
import traceback
from IPython.display import clear_output
def createUserDict(user_element):
# collect the user's profile fields from the <user> XML element into a dict
id = getval(user_element,'id')
name = getval(user_element,'name')
user_name = getval(user_element,'user_name')
profile_url = getval(user_element,'link')
image_url = getval(user_element,'image_url')
about = getval(user_element,'about')
age = getval(user_element,'age')
gender = getval(user_element,'gender')
location = getval(user_element,'location')
joined = getval(user_element,'joined')
last_active = getval(user_element,'last_active')
userDict = dict ([('user_id', id), ('name', name) , ('user_name' , user_name),
('profile_url', profile_url), ('image_url', image_url),
('about', about), ('age', age), ('gender', gender),
('location', location) , ('joined', joined), ('last_active', last_active)])
return userDict
def writeToCSV(writer, mydict):
# write a single dict as one CSV row using the given DictWriter
writer.writerow(mydict)
def getAmazonDetails(isbn):
with open('csv_files/amazon_book_ratings.csv', 'a') as csvfile_ratings, open('csv_files/amazon_book_reviews.csv', 'a') as csvfile_reviews:
##Create file headers and writer
ratings_fieldnames = ['book_isbn', 'avg_rating', 'five_rating', 'four_rating', 'three_rating', 'two_rating', 'one_rating' ]
ratings_writer = csv.DictWriter(csvfile_ratings, delimiter=',', lineterminator='\n', fieldnames=ratings_fieldnames)
reviews_fieldnames = ['book_isbn', 'review']
writer_book = csv.DictWriter(csvfile_reviews, delimiter=',', lineterminator='\n', fieldnames=reviews_fieldnames)
##writer_book.writeheader()
##Get Overall details of the book
req = urllib2.Request('http://www.amazon.com/product-reviews/' + isbn + '?ie=UTF8&showViewpoints=1&sortBy=helpful&pageNumber=1', headers={ 'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11' })
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'html.parser')
avgRatingTemp = soup.find_all('div',{'class':"a-row averageStarRatingNumerical"})[0].get_text()
avgRating = re.findall('\d+\.\d+', avgRatingTemp)[0]
try:
fiveStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 5star histogram-review-count"})[0].get_text()
fiveStarRating = fiveStarRatingTemp.strip('%')
except:
fiveStarRating = 0
try:
fourStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 4star histogram-review-count"})[0].get_text()
fourStarRating = fourStarRatingTemp.strip('%')
except:
fourStarRating = 0
try:
threeStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 3star histogram-review-count"})[0].get_text()
threeStarRating = threeStarRatingTemp.strip('%')
except:
threeStarRating = 0
try:
twoStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 2star histogram-review-count"})[0].get_text()
twoStarRating = twoStarRatingTemp.strip('%')
except:
twoStarRating = 0
try:
oneStarRatingTemp = soup.find_all('a',{'class':"a-size-small a-link-normal 1star histogram-review-count"})[0].get_text()
oneStarRating = oneStarRatingTemp.strip('%')
except:
oneStarRating = 0
ratings_writer.writerow({'book_isbn': isbn, 'avg_rating': avgRating, 'five_rating': fiveStarRating,
'four_rating': fourStarRating, 'three_rating': threeStarRating, 'two_rating': twoStarRating,
'one_rating': oneStarRating})
##Get top 20 helpful review of book
for pagenumber in range(1,3):
req = urllib2.Request('http://www.amazon.com/product-reviews/' + isbn + '?ie=UTF8&showViewpoints=1&sortBy=helpful&pageNumber='+ str(pagenumber), headers={ 'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11' })
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'html.parser')
for i in range(0,10):
try:
review = soup.find_all('div',{'class':"a-section review"})[i].contents[3].get_text().encode('UTF-8')
writer_book.writerow({'book_isbn': isbn, 'review': review})
except:
print "No Reviews ISBN - " + isbn
def getval(root, element):
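    # Safely read a child element's text: return "" when the element is
    # missing or empty, so downstream CSV writes never see None.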
try:
ret = root.find(element).text
if ret is None:
return ""
else:
return ret.encode("utf8")
except:
return ""
with open('csv_files/amazon_book_ratings.csv', 'w') as csvfile_ratings, open('csv_files/amazon_book_reviews.csv', 'w') as csvfile_reviews:
##Create file headers and writer
ratings_fieldnames = ['book_isbn', 'avg_rating', 'five_rating', 'four_rating', 'three_rating', 'two_rating', 'one_rating' ]
writer = csv.DictWriter(csvfile_ratings, delimiter=',', lineterminator='\n', fieldnames=ratings_fieldnames)
writer.writeheader()
reviews_fieldnames = ['book_isbn', 'review']
writer_book = csv.DictWriter(csvfile_reviews, delimiter=',', lineterminator='\n', fieldnames=reviews_fieldnames)
writer_book.writeheader()
with open('csv_files/user_data.csv', 'w') as csvfile, open('csv_files/book_data.csv', 'w') as csvfile_book, open('csv_files/book_author.csv', 'w') as csvfile_author, open('csv_files/goodreads_user_reviews_ratings.csv', 'w') as gdrds_rr:
fieldnames = ['user_id', 'name','user_name', 'profile_url','image_url', 'about', 'age', 'gender',
'location','joined','last_active' ]
writer = csv.DictWriter(csvfile, delimiter = ',', lineterminator = '\n', fieldnames=fieldnames)
writer.writeheader()
book_fieldnames = [
'user_id',
'b_id',
'shelf',
'isbn',
'isbn13',
'text_reviews_count',
'title',
'image_url',
'link',
'num_pages',
'b_format',
'publisher',
'publication_day',
'publication_year',
'publication_month',
'average_rating',
'ratings_count',
'description',
'published',
'fiction' ,
'fantasy' ,
'classics' ,
'young_adult' ,
'romance' ,
'non_fiction' ,
'historical_fiction' ,
'science_fiction' ,
'dystopian' ,
'horror' ,
'paranormal' ,
'contemporary' ,
'childrens' ,
'adult' ,
'adventure' ,
'novels' ,
'urban_fantasy' ,
'history' ,
'chick_lit' ,
'thriller' ,
'audiobook' ,
'drama' ,
'biography' ,
'vampires' ]
writer_book = csv.DictWriter(csvfile_book, delimiter = ',', lineterminator = '\n', fieldnames=book_fieldnames)
writer_book.writeheader()
goodreads_ratings_fieldnames = ['user_id', 'b_id', 'rating', 'review' ]
rr_writer = csv.DictWriter(gdrds_rr, delimiter=',', lineterminator='\n', fieldnames=goodreads_ratings_fieldnames)
rr_writer.writeheader()
author_fieldnames = [
'u_id',
'b_id',
'a_id',
'name',
'average_rating',
'ratings_count',
'text_reviews_count']
writer_author = csv.DictWriter(csvfile_author, delimiter = ',', lineterminator = '\n', fieldnames = author_fieldnames)
writer_author.writeheader()
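    # Main crawl: sample random Goodreads user ids, keep users with more than
    # ten shelved books, and walk shelf -> review -> book, writing user, book,
    # rating/review and author rows along the way.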
lst = []
i = 0
while i < 50:
try:
#time.sleep(1)
clear_output()
c = random.randint(1, 2500000)
print "random number: " + str(c)
if (c not in lst):
print "getting information for user id:"+ str(c)
lst.append(c)
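                # The Goodreads user.show endpoint returns the profile and a
                # summary of the user's shelves as XML.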
url = 'https://www.goodreads.com/user/show/'+ str(c) +'.xml?key=i3Zsl7r13oHEQCjv1vXw'
response = urllib2.urlopen(url)
user_data_xml = response.read()
#write xml to file
f = open("xml_docs/user"+ str(c) +".xml", "w")
try:
f.write(user_data_xml)
finally:
f.close()
root = ET.parse("xml_docs/user"+ str(c) +".xml").getroot()
os.remove("xml_docs/user"+ str(c) +".xml")
user_element = root.find('user')
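                # Sum book_count across all of the user's shelves; only users
                # with more than 10 shelved books are crawled further.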
user_shelf_to_count = user_element.find('user_shelves')
b_count = 0
for user_shelf in user_shelf_to_count.findall('user_shelf'):
b_count = b_count + int(getval(user_shelf,'book_count'))
print 'Book count is ' + str(b_count)
if(b_count > 10):
print 'Collecting data for user ' + str(c)
userDict = createUserDict(user_element)
id = userDict['user_id']
writeToCSV(writer,userDict)
print "Saved user data for user id:" + str(c)
# get list of user shelves
user_shelves_root = user_element.find('user_shelves')
user_shelf_list = []
for user_shelf in user_shelves_root.findall("user_shelf"):
shelf = getval(user_shelf,"name")
#Books on Shelf
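                        # review.list returns up to per_page (200 here) of the
                        # shelf's reviews as XML.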
print "Checking for books in shelf: " + shelf + " for user id:" + str(c)
shelf_url = "https://www.goodreads.com/review/list/"+ str(c) +".xml?key=i3Zsl7r13oHEQCjv1vXw&v=2&per_page=200&shelf=" + shelf
#time.sleep(1)
print shelf_url
response = urllib2.urlopen(shelf_url)
shelf_data_xml = response.read()
# write xml to file
f = open("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml", "w")
try:
f.write(shelf_data_xml)
finally:
f.close()
shelf_root = ET.parse("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml").getroot()
os.remove("xml_docs/user_shelf_" + shelf + "_"+ str(c) + ".xml")
try:
reviews = shelf_root.find("reviews")
for review in reviews.findall("review"):
for book in review.findall("book"):
b_id = getval(book,"id")
isbn = getval(book,"isbn")
print "Fetching data for book with isbn:" + str(isbn) + " and id:" + str(id)
isbn13 = getval(book,"isbn13")
text_reviews_count = getval(book,"text_reviews_count")
title = getval(book,"title")
image_url = getval(book,"image_url")
link = getval(book,"link")
num_pages = getval(book,"num_pages")
b_format = getval(book,"format")
publisher = getval(book,"publisher")
publication_day = getval(book,"publication_day")
publication_year = getval(book, "publication_year")
publication_month = getval(book,"publication_month")
average_rating = getval(book,"average_rating")
                                    ratings_count = getval(book,"ratings_count")
description = getval(book,"description")
published = getval(book,"published")
                                    # getAmazonDetails(isbn)  # Amazon scraping is disabled for this run
                                    print "Fetched review data from Amazon for book :" + title
#get number of books on each type of shelf
book_url = 'https://www.goodreads.com/book/show/'+str(b_id)+'.xml?key=i3Zsl7r13oHEQCjv1vXw'
response = urllib2.urlopen(book_url)
book_data_xml = response.read()
# write xml to file
f = open("xml_docs/book_data_" + str(b_id) + ".xml", "w")
try:
f.write(book_data_xml)
finally:
f.close()
book_root = ET.parse("xml_docs/book_data_" + str(b_id) + ".xml").getroot()
os.remove("xml_docs/book_data_" + str(b_id) + ".xml")
print "checking count in shelf for book_id:" + str(b_id)
book_root = book_root.find("book")
book_shelves = book_root.find("popular_shelves")
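                                    # Tally counts for a fixed set of genre shelves (with
                                    # aliases such as 'classic'/'classics'), then normalize
                                    # by the total matched count so book_data.csv stores
                                    # genre fractions rather than raw shelf tallies.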
fiction = 0
fantasy = 0
classics = 0
young_adult = 0
romance = 0
non_fiction = 0
historical_fiction = 0
science_fiction = 0
dystopian = 0
horror = 0
paranormal = 0
contemporary = 0
childrens = 0
adult = 0
adventure = 0
novels = 0
urban_fantasy = 0
history = 0
chick_lit = 0
thriller = 0
audiobook = 0
drama = 0
biography = 0
vampires = 0
cnt = 0
                                    for shelf_type in book_shelves.findall("shelf"):
                                        attributes = shelf_type.attrib
                                        name = attributes['name']
                                        # The XML attribute is a string; cast to int so the
                                        # genre tallies and cnt accumulate numerically.
                                        count = int(attributes['count'])
if ( name == 'fiction'):
fiction = count
cnt = cnt+count
if ( name == 'fantasy'):
fantasy = count
cnt = cnt+count
if ( name == 'classics' or name == 'classic'):
classics = count
cnt = cnt+count
if ( name == 'young-adult'):
young_adult = count
cnt = cnt+count
if ( name == 'romance'):
romance = count
cnt = cnt+count
if ( name == 'non-fiction' or name == 'nonfiction'):
non_fiction = count
cnt = cnt+count
if ( name == 'historical-fiction'):
historical_fiction = count
cnt = cnt+count
if ( name == 'science-fiction' or name == 'sci-fi fantasy' or name == 'scifi' or name == 'fantasy-sci-fi' or name == 'sci-fi'):
science_fiction = count
cnt = cnt+count
if ( name == 'dystopian' or name == 'dystopia'):
dystopian = count
cnt = cnt+count
if ( name == 'horror'):
horror = count
cnt = cnt+count
if ( name == 'paranormal'):
paranormal = count
cnt = cnt+count
if ( name == 'contemporary' or name == 'contemporary-fiction'):
contemporary = count
cnt = cnt+count
if ( name == 'childrens' or name == 'children' or name == 'kids' or name =='children-s-books'):
childrens = count
cnt = cnt+count
if ( name == 'adult'):
adult = count
cnt = cnt+count
if ( name == 'adventure'):
adventure = count
cnt = cnt+count
if ( name == 'novels' or name == 'novel'):
novels = count
cnt = cnt+count
if ( name == 'urban-fantasy'):
urban_fantasy = count
cnt = cnt+count
if ( name == 'history' or name == 'historical'):
history = count
cnt = cnt+count
if ( name == 'chick-lit'):
chick_lit = count
cnt = cnt+count
if ( name == 'thriller'):
thriller = count
cnt = cnt+count
if ( name == 'audiobook' or name == "audio"):
audiobook = count
cnt = cnt+count
if ( name == 'drama'):
drama = count
cnt = cnt+count
if ( name == 'biography' or name == 'memoirs'):
biography = count
cnt = cnt+count
if ( name == 'vampires' or name == 'vampire'):
vampires = count
cnt = cnt+count
                                    # Normalize each genre to a fraction of the matched
                                    # shelvings; float() avoids Python 2 integer division
                                    # and the guard avoids ZeroDivisionError when no genre
                                    # shelf matched.
                                    if cnt > 0:
                                        cnt = float(cnt)
                                        fiction = fiction/cnt
                                        fantasy = fantasy/cnt
                                        classics = classics/cnt
                                        young_adult = young_adult/cnt
                                        romance = romance/cnt
                                        non_fiction = non_fiction/cnt
                                        historical_fiction = historical_fiction/cnt
                                        science_fiction = science_fiction/cnt
                                        dystopian = dystopian/cnt
                                        horror = horror/cnt
                                        paranormal = paranormal/cnt
                                        contemporary = contemporary/cnt
                                        childrens = childrens/cnt
                                        adult = adult/cnt
                                        adventure = adventure/cnt
                                        novels = novels/cnt
                                        urban_fantasy = urban_fantasy/cnt
                                        history = history/cnt
                                        chick_lit = chick_lit/cnt
                                        thriller = thriller/cnt
                                        audiobook = audiobook/cnt
                                        drama = drama/cnt
                                        biography = biography/cnt
                                        vampires = vampires/cnt
writer_book.writerow({
'user_id': id,
'b_id' : b_id ,
'shelf' : shelf,
'isbn' : isbn,
'isbn13': isbn13,
'text_reviews_count' : text_reviews_count,
'title' : title,
'image_url' : image_url,
'link' : link,
'num_pages' : num_pages,
'b_format' : b_format,
'publisher' : publisher,
'publication_day' : publication_day,
'publication_year' : publication_year,
'publication_month' : publication_month,
'average_rating' : average_rating,
'ratings_count' : ratings_count,
                                        'description' : description,
                                        'published' : published,
'fiction' : fiction ,
'fantasy' : fantasy ,
'classics' : classics ,
'young_adult' : young_adult ,
'romance' : romance ,
'non_fiction' : non_fiction ,
'historical_fiction' : historical_fiction ,
'science_fiction' : science_fiction ,
'dystopian' : dystopian ,
'horror' : horror ,
'paranormal' : paranormal ,
'contemporary' : contemporary ,
'childrens' : childrens ,
'adult' : adult ,
'adventure' : adventure ,
'novels' : novels ,
'urban_fantasy' : urban_fantasy ,
'history' : history ,
'chick_lit' : chick_lit ,
'thriller' : thriller ,
'audiobook' : audiobook ,
'drama' : drama ,
'biography' : biography ,
'vampires' : vampires })
print "Data written on csv for book:" + title
print "Getting reviews details from user: " + str(id) + " and book_id: " + str(b_id)
review_url = "https://www.goodreads.com/review/show_by_user_and_book.xml?book_id=" +str(b_id)+ "&key=i3Zsl7r13oHEQCjv1vXw&user_id=" + str(id)
review_response = urllib2.urlopen(review_url)
review_response_xml = review_response.read()
review_root = ET.fromstring(review_response_xml)
user_rr = review_root.find("review")
user_r_rating = getval(user_rr, "rating")
print "Got user review rating: " + user_r_rating
user_r_review = getval(user_rr, "body")
print "User review is: " + user_r_review
rr_writer.writerow({
'user_id': id,
'b_id' : b_id ,
'rating' : user_r_rating,
'review' : user_r_review })
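                                    # Record every author of this book, keyed by user and book id.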
authors = book.find("authors")
for author in authors.findall("author"):
a_id = getval(author,"id")
name = getval(author,"name")
average_rating = getval(author,"average_rating")
ratings_count = getval(author,"ratings_count")
text_reviews_count = getval(author,"text_reviews_count")
writer_author.writerow({'u_id': id,
'b_id' : b_id,
'a_id' : a_id,
'name' : name,
'average_rating' : average_rating,
'ratings_count' : ratings_count,
'text_reviews_count' : text_reviews_count})
except Exception, e:
traceback.print_exc()
i = i + 1
except:
#time.sleep(1)
print "Exception!!"
traceback.print_exc()
print "End of Program"random number: 2373891
getting information for user id:2373891
Book count is 257
Collecting data for user 2373891
Saved user data for user id:2373891
Checking for books in shelf: read for user id:2373891
https://www.goodreads.com/review/list/2373891.xml?key=i3Zsl7r13oHEQCjv1vXw&v=2&per_page=200&shelf=read
Fetching data for book with isbn:0778328791 and id:2373891
Fetched review data from Amazon for book :These Things Hidden
checking count in shelf for book_id:9166559
Data written on csv for book:These Things Hidden
Getting reviews details from user: 2373891 and book_id: 9166559
Got user review rating: 3
User review is:
[... output truncated: the same seven-line fetch/write block repeats for each of the remaining books on this shelf ...]
Fetching data for book with isbn:0671011367 and id:2373891
Fetched review data from Amazon for book :Déjà Dead (Temperance Brennan, #1)
checking count in shelf for book_id:231604
Data written on csv for book:Déjà Dead (Temperance Brennan, #1)
Getting reviews details from user: 2373891 and book_id: 231604
Got user review rating: 0
User review is:
Fetching data for book with isbn:0297859382 and id:2373891
Fetched review data from Amazon for book :Gone Girl
checking count in shelf for book_id:8442457
Data written on csv for book:Gone Girl
Getting reviews details from user: 2373891 and book_id: 8442457
Got user review rating: 4
User review is:
Fetching data for book with isbn:0743418174 and id:2373891
Fetched review data from Amazon for book :Good in Bed (Cannie Shapiro, #1)
checking count in shelf for book_id:14748
Data written on csv for book:Good in Bed (Cannie Shapiro, #1)
Getting reviews details from user: 2373891 and book_id: 14748
Got user review rating: 4
User review is:
Fetching data for book with isbn:006075995X and id:2373891
Fetched review data from Amazon for book :Divine Secrets of the Ya-Ya Sisterhood
checking count in shelf for book_id:137791
Data written on csv for book:Divine Secrets of the Ya-Ya Sisterhood
Getting reviews details from user: 2373891 and book_id: 137791
Got user review rating: 4
User review is:
Fetching data for book with isbn:0316015849 and id:2373891
Fetched review data from Amazon for book :Twilight (Twilight, #1)
checking count in shelf for book_id:41865
Data written on csv for book:Twilight (Twilight, #1)
Getting reviews details from user: 2373891 and book_id: 41865
Got user review rating: 4
User review is:
Fetching data for book with isbn:0439554934 and id:2373891
Fetched review data from Amazon for book :Harry Potter and the Sorcerer's Stone (Harry Potter, #1)
checking count in shelf for book_id:3
Data written on csv for book:Harry Potter and the Sorcerer's Stone (Harry Potter, #1)
Getting reviews details from user: 2373891 and book_id: 3
Got user review rating: 4
User review is:
Fetching data for book with isbn:0553384287 and id:2373891
Fetched review data from Amazon for book :Odd Thomas (Odd Thomas, #1)
checking count in shelf for book_id:14995
Data written on csv for book:Odd Thomas (Odd Thomas, #1)
Getting reviews details from user: 2373891 and book_id: 14995
Got user review rating: 4
User review is:
Fetching data for book with isbn:0446677388 and id:2373891
Fetched review data from Amazon for book :Kiss the Girls (Alex Cross, #2)
checking count in shelf for book_id:13148
Data written on csv for book:Kiss the Girls (Alex Cross, #2)
Getting reviews details from user: 2373891 and book_id: 13148
Got user review rating: 4
User review is:
Fetching data for book with isbn:0385338600 and id:2373891
Fetched review data from Amazon for book :A Time to Kill (Jake Brigance, #1)
checking count in shelf for book_id:32542
Data written on csv for book:A Time to Kill (Jake Brigance, #1)
Getting reviews details from user: 2373891 and book_id: 32542
Got user review rating: 4
User review is:
Fetching data for book with isbn:0450040186 and id:2373891
Fetched review data from Amazon for book :The Shining (The Shining, #1)
checking count in shelf for book_id:11588
Data written on csv for book:The Shining (The Shining, #1)
Getting reviews details from user: 2373891 and book_id: 11588
Got user review rating: 4
User review is:
Fetching data for book with isbn:0307277674 and id:2373891
Fetched review data from Amazon for book :The Da Vinci Code (Robert Langdon, #2)
checking count in shelf for book_id:968
Data written on csv for book:The Da Vinci Code (Robert Langdon, #2)
Getting reviews details from user: 2373891 and book_id: 968
Got user review rating: 4
User review is:
Fetching data for book with isbn:0143037145 and id:2373891
Fetched review data from Amazon for book :The Memory Keeper's Daughter
checking count in shelf for book_id:10441
Data written on csv for book:The Memory Keeper's Daughter
Getting reviews details from user: 2373891 and book_id: 10441
Got user review rating: 2
User review is:
Fetching data for book with isbn:0316166685 and id:2373891
Fetched review data from Amazon for book :The Lovely Bones
checking count in shelf for book_id:12232938
Data written on csv for book:The Lovely Bones
Getting reviews details from user: 2373891 and book_id: 12232938
Got user review rating: 4
User review is:
Fetching data for book with isbn:0349113912 and id:2373891
Fetched review data from Amazon for book :Me Talk Pretty One Day
checking count in shelf for book_id:4137
Data written on csv for book:Me Talk Pretty One Day
Getting reviews details from user: 2373891 and book_id: 4137
Got user review rating: 4
User review is:
Fetching data for book with isbn:0060513039 and id:2373891
Fetched review data from Amazon for book :Where the Sidewalk Ends: The Poems and Drawings of Shel Silverstein
checking count in shelf for book_id:30119
Data written on csv for book:Where the Sidewalk Ends: The Poems and Drawings of Shel Silverstein
Getting reviews details from user: 2373891 and book_id: 30119
Got user review rating: 4
User review is:
Fetching data for book with isbn:0316769177 and id:2373891
Fetched review data from Amazon for book :The Catcher in the Rye
checking count in shelf for book_id:5107
Data written on csv for book:The Catcher in the Rye
Getting reviews details from user: 2373891 and book_id: 5107
Got user review rating: 3
User review is:
Fetching data for book with isbn:0061120081 and id:2373891
Fetched review data from Amazon for book :To Kill a Mockingbird
checking count in shelf for book_id:2657
Data written on csv for book:To Kill a Mockingbird
Getting reviews details from user: 2373891 and book_id: 2657
Got user review rating: 4
User review is:
Checking for books in shelf: currently-reading for user id:2373891
https://www.goodreads.com/review/list/2373891.xml?key=i3Zsl7r13oHEQCjv1vXw&v=2&per_page=200&shelf=currently-reading
Fetching data for book with isbn:0385344228 and id:2373891
Fetched review data from Amazon for book :Defending Jacob
checking count in shelf for book_id:11367726
Data written on csv for book:Defending Jacob
Getting reviews details from user: 2373891 and book_id: 11367726
Got user review rating: 0
User review is:
Checking for books in shelf: to-read for user id:2373891
https://www.goodreads.com/review/list/2373891.xml?key=i3Zsl7r13oHEQCjv1vXw&v=2&per_page=200&shelf=to-read
Fetching data for book with isbn:1250077001 and id:2373891
Fetched review data from Amazon for book :Furiously Happy: A Funny Book About Horrible Things
checking count in shelf for book_id:23848559
Data written on csv for book:Furiously Happy: A Funny Book About Horrible Things
Getting reviews details from user: 2373891 and book_id: 23848559
Got user review rating: 0
User review is:
... [the same log pattern repeats for the remaining books on the to-read shelf] ...
</code>
| {
"repository": "mmithil/Web-Mining-Repo",
"path": ".ipynb_checkpoints/Updated_goodreads_script-checkpoint.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 135424,
"hexsha": "cb6efa7e5bf687625ad33ebcfa32abc68ae3c49c",
"max_line_length": 259,
"avg_line_length": 54.2346816179,
"alphanum_fraction": 0.5708293951
} |
# Notebook from LucianoPereiraValenzuela/QuantumNeuralNetworks_for_StateDiscrimination
Path: qnn/tests/test_minimum_error_discrimination.ipynb
# Test: Minimum error discrimination
In this notebook we test how the error probability evolves with the number of optimizer evaluations._____no_output_____
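For two pure states prepared with equal prior probabilities, the minimum achievable error probability is given by the Helstrom bound, $P_{err} = \frac{1}{2}\left(1 - \sqrt{1 - |\langle\psi|\phi\rangle|^2}\right)$. As a reference point, this can be computed directly from the state vectors with plain numpy (a minimal sketch, independent of the `nnd.helstrom_bound` helper used below; the states here are made up for illustration):_____no_output_____
<code>
import numpy as np

def helstrom_bound_ref(psi, phi):
    # minimum error probability for discriminating two pure states
    # with equal priors (Helstrom bound)
    overlap = np.abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1 - np.sqrt(1 - overlap))

# two hypothetical normalized qubit states
psi = np.array([1.0, 0.0], dtype=complex)
phi = np.array([np.cos(0.6), np.sin(0.6)], dtype=complex)
print(helstrom_bound_ref(psi, phi))  # ~0.218_____no_output_____
</code>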
<code>
import sys
sys.path.append('../../')
import itertools
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from qiskit.algorithms.optimizers import SPSA
from qnn.quantum_neural_networks import StateDiscriminativeQuantumNeuralNetworks as nnd
from qnn.quantum_state import QuantumState
plt.style.use('ggplot')_____no_output_____def callback(params, results, prob_error, prob_inc, prob):
data.append(prob_error)_____no_output_____# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = [0], [pi]
th_v1, th_v2 = [0], [0]
fi_v1, fi_v2 = [0], [0]
lam_v1, lam_v2 = [0], [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
optimal = nnd.helstrom_bound(ψ, ϕ)
print(f'Optimal results: {optimal}\nActual results: {results}')Optimal results: 0.1842391754983393
Actual results: (array([-1.00757705e+00, 3.32056507e+00, 1.16330850e+00, 1.98082431e-03,
3.23876708e+00, -1.86481792e+00, -2.30335836e+00, -9.06714945e-01,
-1.46201088e+00, 1.96589739e-01, 4.87138113e-01]), 0.177734375, 200)
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 2 states')
fig.savefig('twostates.png')
plt.show()_____no_output_____th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3]
th2 = results[0][4]
th_v1 = results[0][5]
th_v2 = results[0][6]
fi_v1 = results[0][7]
fi_v2 = results[0][8]
lam_v1 = results[0][9]
lam_v2 = results[0][10]
M = nnd.povm( 2,
[th_u], [fi_u], [lam_u],
[th1], [th2],
[th_v1], [th_v2],
[fi_v1], [fi_v2],
[lam_v1], [lam_v2], output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ] )
sphere.render()
plt.savefig('sphere_2_states')
plt.style.use('ggplot')_____no_output_____# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
χ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = 2 * [0], 2 * [pi]
th_v1, th_v2 = 2 * [0], 2 * [0]
fi_v1, fi_v2 = 2 * [0], 2 * [0]
lam_v1, lam_v2 = 2 * [0], 2 * [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')Results: (array([ 1.10538359, -1.70015769, -0.48348853, -2.11682256, -0.03103935,
3.19922651, 3.11729522, 0.31810485, -0.13160315, -0.18140742,
-0.76371755, 0.27253825, 0.03474263, 0.01515616, -0.5763371 ,
0.10982771, 1.07646142, -0.01327666, -0.93122126]), 0.4046223958333333, 200)
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states')
fig.savefig('3states.png')
plt.show()_____no_output_____th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states.png')
plt.style.use('ggplot')_____no_output_____# Create random states
ψ = QuantumState([ np.array([1,0]) ])
ϕ = QuantumState([ np.array([np.cos(np.pi/4), np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4),np.sin(0.1+np.pi/4)] ) ])
χ = QuantumState([ np.array([np.cos(np.pi/4), 1j*np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4), 1j*np.sin(0.1+np.pi/4)] ),
np.array([np.cos(-0.1+np.pi/4), 1j*np.sin(-0.1+np.pi/4)] )])
# Parameters
th_u, fi_u, lam_u = list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1))
th1, th2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
th_v1, th_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
fi_v1, fi_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
lam_v1, lam_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')Results: (array([ 9.81810357, 0.59561506, -1.36035727, 3.18600335, -2.84531461,
0.99830421, -5.75338446, -0.4057666 , -3.25167801, -4.81894089,
-6.46027888, -1.67360266, -1.92398161, 1.14189749, 5.38115967,
-1.12673158, 2.53992157, 0.28855823, 1.09164108]), 0.4171549479166666, 200)
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states with noise')
fig.savefig('noisy.png')
plt.show()_____no_output_____th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states_noisy.png')
plt.style.use('ggplot')_____no_output_____
</code>
| {
"repository": "LucianoPereiraValenzuela/QuantumNeuralNetworks_for_StateDiscrimination",
"path": "qnn/tests/test_minimum_error_discrimination.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 6,
"size": 693284,
"hexsha": "cb6fab6ebe3492f72e0630def9aab27466922c37",
"max_line_length": 143704,
"avg_line_length": 1716.0495049505,
"alphanum_fraction": 0.9606423919
} |
# Notebook from heprom/cvml
Path: corrections/single_layer_neural_network_cor.ipynb
# Single layer Neural Network
In this notebook, we will code a single neuron and use it as a linear classifier with two inputs. The neuron's parameters are tuned by backpropagation using gradient descent._____no_output_____
<code>
from sklearn.datasets import make_blobs
import numpy as np
# matplotlib to display the data
import matplotlib
matplotlib.rc('font', size=16)
matplotlib.rc('xtick', labelsize=16)
matplotlib.rc('ytick', labelsize=16)
from matplotlib import pyplot as plt, cm
from matplotlib.colors import ListedColormap
%matplotlib inline_____no_output_____
</code>
## Dataset
Let's create some labeled data in the form of (X, y), where each sample has an associated class label of 0 or 1. For this we can use the function `make_blobs` from the `sklearn.datasets` module. Here we use 2 centers with coordinates (-0.5, -1.0) and (1.0, 1.0)._____no_output_____
<code>
X, y = make_blobs(n_features=2, random_state=42, centers=[(-0.5, -1.0), (1.0, 1.0)])
y = y.reshape((y.shape[0], 1))
print(X.shape)
print(y.shape)(100, 2)
(100, 1)
</code>
Plot our training data using `plt.scatter` to have a first visualization. Here we color the points with their labels stored in `y`._____no_output_____
<code>
plt.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
plt.title('training data with labels')
plt.axis('equal')
plt.show()_____no_output_____
</code>
## Activation functions
Here we experiment with popular activation functions such as tanh, ReLU, and sigmoid._____no_output_____
<code>
def heaviside(x):
return np.heaviside(x, np.zeros_like(x))
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def ReLU(x):
return np.maximum(0, x)
def leaky_ReLU(x, alpha=0.1):
return np.maximum(alpha * x, x)
def tanh(x):
return np.tanh(x)_____no_output_____from math import pi
plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.axhline(y=0., color='gray', linestyle='dashed')
plt.axhline(y=-1, color='gray', linestyle='dashed')
plt.axhline(y=1., color='gray', linestyle='dashed')
plt.axvline(x=0., color='gray', linestyle='dashed')
plt.xlim(-pi, pi)
plt.ylim(-1.2, 1.2)
plt.title('activation functions', fontsize=16)
plt.plot(x, heaviside(x), label='heavyside', linewidth=3)
legend = plt.legend(loc='lower right')
plt.savefig('activation_functions_1.pdf')
plt.plot(x, sigmoid(x), label='sigmoid', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_2.pdf')
plt.plot(x, tanh(x), label='tanh', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_3.pdf')
plt.plot(x, ReLU(x), label='ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_4.pdf')
plt.plot(x, leaky_ReLU(x), label='leaky ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_5.pdf')
plt.show()_____no_output_____# gradients of the activation functions
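# each derivative below follows from basic calculus:
#   d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
#   d/dx ReLU(x)    = 1 if x > 0 else 0 (we take 0 at x = 0)
#   d/dx tanh(x)    = 1 - tanh(x)^2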
def sigmoid_grad(x):
s = sigmoid(x)
return s * (1 - s)
def relu_grad(x):
return 1. * (x > 0)
def tanh_grad(x):
return 1 - np.tanh(x) ** 2_____no_output_____plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.plot(x, sigmoid_grad(x), label='sigmoid gradient', linewidth=3)
plt.plot(x, relu_grad(x), label='ReLU gradient', linewidth=3)
plt.plot(x, tanh_grad(x), label='tanh gradient', linewidth=3)
plt.xlim(-pi, pi)
plt.title('activation function derivatives', fontsize=16)
legend = plt.legend()
legend.get_frame().set_linewidth(2)
plt.savefig('activation_functions_derivatives.pdf')
plt.show()_____no_output_____
</code>
## ANN implementation
A simple neuron with two inputs $(x_1, x_2)$ applies an affine transform with weights $(w_1, w_2)$ and bias $w_0$.
The neuron computes a quantity called the activation: $a=\sum_i w_i x_i + w_0 = w_0 + w_1 x_1 + w_2 x_2$
This activation is passed to the activation function, chosen here to be a sigmoid: $f(a)=\dfrac{1}{1+e^{-a}}$
$f(a)$ is the output of the neuron, bounded between 0 and 1._____no_output_____### Quick implementation
First let's implement our network in a concise fashion._____no_output_____
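As a quick sanity check before the full script, here is the forward pass on a single input (the weight and input values below are hypothetical, chosen only for illustration):_____no_output_____
<code>
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

w0, w1, w2 = 0.5, -1.0, 1.0  # hypothetical bias and weights
x1, x2 = 1.0, 2.0            # hypothetical input point
a = w0 + w1 * x1 + w2 * x2   # activation: 0.5 - 1.0 + 2.0 = 1.5
print(sigmoid(a))            # ~0.8176, so the point is assigned to class 1_____no_output_____
</code>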
<code>
import numpy as np
from numpy.random import randn
X, y = make_blobs(n_samples=100, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
# adjust the sizes of our arrays
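# bias trick: prepend a column of ones so the bias w0 is absorbed into W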
X = np.c_[np.ones(X.shape[0]), X]
print(X.shape)
y = y.reshape((y.shape[0], 1))
np.random.seed(2)
W = randn(3, 1)
print('* model params: {}'.format(W.tolist()))
eta = 1e-2 # learning rate
n_epochs = 50
for t in range(n_epochs):
# forward pass
y_pred = sigmoid(X.dot(W))
loss = np.sum((y_pred - y) ** 2)
print(t, loss)
# backprop
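    # chain rule for L = sum((y_pred - y)^2) with y_pred = sigmoid(X.W):
    #   dL/dy_pred = 2 * (y_pred - y)
    #   dy_pred/da = y_pred * (1 - y_pred)   (sigmoid derivative)
    #   dL/dW      = X.T @ (dL/dy_pred * dy_pred/da)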
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update rule
W -= eta * grad_W
print('* new model params: {}'.format(W.tolist()))
(100, 3)
* model params: [[-0.4167578474054706], [-0.056266827226329474], [-2.136196095668454]]
0 69.888678007119
1 68.10102729420157
2 66.01234835808539
3 63.52920474304009
4 60.511931504891606
5 56.781412908257316
6 52.17672528838464
7 46.67324119014275
8 40.42792006624582
9 33.65215682518604
10 26.70717521140393
11 20.544252734780706
12 16.198710182740403
13 13.580478268635446
14 11.983692035385701
15 10.922630988132667
16 10.16098958742113
17 9.58305898101223
18 9.126953943573383
19 8.756534709365818
20 8.449122187454204
21 8.189655164236605
22 7.967673714087983
23 7.77564748980427
24 7.607993124138909
25 7.460467877966521
26 7.329780007443549
27 7.213329709562506
28 7.109031799422667
29 7.0151912380112025
30 6.93041381013332
31 6.85354076229679
32 6.783600131596645
33 6.71976992958227
34 6.661349894574377
35 6.607739535904203
36 6.558420865715362
37 6.512944669754438
38 6.470919482932122
39 6.43200265563964
40 6.395893053272442
41 6.36232504406938
42 6.331063512489355
43 6.30189969588866
44 6.274647687380679
45 6.249141481725432
46 6.225232466911465
47 6.202787283890317
48 6.1816859922367575
49 6.161820491448388
* new model params: [[-0.2559481613676354], [1.3472482576843752], [1.3640333350148732]]
</code>
### Modular implementation
Now let's create a class to represent our neural network to have more flexibility and modularity. This will prove to be useful later when we add more layers._____no_output_____
<code>
class SingleLayerNeuralNetwork:
"""A simple artificial neuron with a single layer and two inputs.
This type of network is called a Single Layer Neural Network and belongs to
    the family of feed-forward neural networks. Here, the activation function is a sigmoid,
    the loss is computed using the squared error between the target and
    the prediction. Learning the parameters is achieved using back-propagation
    and gradient descent.
"""
def __init__(self, eta=0.01, rand_seed=42):
"""Initialisation routine."""
np.random.seed(rand_seed)
        self.W = np.random.randn(3, 1)  # weights
self.eta = eta # learning rate
self.loss_history = []
def sigmoid(self, x):
"""Our activation function."""
return 1 / (1 + np.exp(-x))
def sigmoid_grad(self, x):
"""Gradient of the sigmoid function."""
return self.sigmoid(x) * (1 - self.sigmoid(x))
def predict(self, X, bias_trick=True):
X = np.atleast_2d(X)
if bias_trick:
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
return self.sigmoid(np.dot(X, self.W))
def loss(self, X, y, bias_trick=False):
"""Compute the squared error loss for a given set of inputs."""
y_pred = self.predict(X, bias_trick=bias_trick)
y_pred = y_pred.reshape((y_pred.shape[0], 1))
loss = np.sum((y_pred - y) ** 2)
return loss
def back_propagation(self, X, y):
"""Conduct backpropagation to update the weights."""
X = np.atleast_2d(X)
y_pred = self.sigmoid(np.dot(X, self.W)).reshape((X.shape[0], 1))
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update weights
        self.W -= self.eta * grad_W  # use the instance's learning rate
def fit(self, X, y, n_epochs=10, method='batch', save_fig=False):
"""Perform gradient descent on a given number of epochs to update the weights."""
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
self.loss_history.append(self.loss(X, y)) # initial loss
for i_epoch in range(n_epochs):
if method == 'batch':
# perform backprop on the whole training set (batch)
self.back_propagation(X, y)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
print(i_epoch, self.loss_history[-1])
else:
# here we update the weight for every data point (SGD)
for (xi, yi) in zip(X, y):
self.back_propagation(xi, yi)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
if save_fig:
self.plot_model(i_epoch, save=True, display=False)
def decision_boundary(self, x):
"""Return the decision boundary in 2D."""
return -self.W[0] / self.W[2] - self.W[1] / self.W[2] * x
def plot_model(self, i_epoch=-1, save=False, display=True):
"""Build a figure to vizualise how the model perform."""
xx0, xx1 = np.arange(-3, 3.1, 0.1), np.arange(-3, 4.1, 0.1)
XX0, XX1 = np.meshgrid(xx0, xx1)
# apply the model to the grid
y_an = np.empty(len(XX0.ravel()))
i = 0
for (x0, x1) in zip(XX0.ravel(), XX1.ravel()):
y_an[i] = self.predict(np.array([x0, x1]))
i += 1
y_an = y_an.reshape((len(xx1), len(xx0)))
figure = plt.figure(figsize=(12, 4))
ax1 = plt.subplot(1, 3, 1)
#ax1.set_title(r'$w_0=%.3f$, $w_1=%.3f$, $w_2=%.3f$' % (self.W[0], self.W[1], self.W[2]))
ax1.set_title("current prediction")
ax1.contourf(XX0, XX1, y_an, alpha=.5)
ax1.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
ax1.set_xlim(-3, 3)
ax1.set_ylim(-3, 4)
print(ax1.get_xlim())
x = np.array(ax1.get_xlim())
ax1.plot(x, self.decision_boundary(x), 'k-', linewidth=2)
ax2 = plt.subplot(1, 3, 2)
x = np.arange(3) # the label locations
rects1 = ax2.bar(x, [self.W[0, 0], self.W[1, 0], self.W[2, 0]])
ax2.set_title('model parameters')
ax2.set_xticks(x)
ax2.set_xticklabels([r'$w_0$', r'$w_1$', r'$w_2$'])
ax2.set_ylim(-1, 2)
ax2.set_yticks([0, 2])
ax2.axhline(xmin=0, xmax=2)
ax3 = plt.subplot(1, 3, 3)
ax3.plot(self.loss_history, c='lightgray', lw=2)
if i_epoch < 0:
i_epoch = len(self.loss_history) - 1
ax3.plot(i_epoch, self.loss_history[i_epoch], 'o')
ax3.set_title('loss evolution')
ax3.set_yticks([])
plt.subplots_adjust(left=0.05, right=0.98)
if save:
plt.savefig('an_%02d.png' % i_epoch)
if display:
plt.show()
plt.close()
_____no_output_____
</code>
### Train our model on the data set
Create two blobs with $n=10000$ data points.
Instantiate the model with $\eta=0.1$ and a random seed of 2.
Train the model using batch gradient descent for 100 epochs._____no_output_____
<code>
X, y = make_blobs(n_samples=10000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
y = y.reshape((y.shape[0], 1))
an1 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an1.W.tolist()))
print(an1.loss(X, y, bias_trick=True))
an1.fit(X, y, n_epochs=100, method='batch', save_fig=False)
print('* new model params: {}'.format(an1.W.tolist()))* init model params: [[-0.4167578474054706], [-0.056266827226329474], [-2.136196095668454]]
6813.791619744032
0 1065.5244005560853
1 1004.2834940985097
2 972.1020148641242
3 952.1899581995993
4 938.3234296045975
5 927.9765410132449
6 919.2260496173226
7 911.8159986339842
8 906.2581276170479
9 902.2935497103937
10 899.2937229391691
11 896.835926605091
12 894.6988964194702
13 892.7681058525101
14 890.9810531145381
15 889.2998538790825
16 887.6974929083249
17 886.1513240313166
18 884.6406287755832
19 883.1462463333798
20 881.6509308886809
21 880.1396760889339
22 878.5997412921436
23 877.0204069660281
24 875.3925836022929
25 873.7083854938818
26 871.9607354016401
27 870.1430269399661
28 868.2488478128737
29 866.2717560113587
30 864.2050976414034
31 862.0418553064648
32 859.7745176798607
33 857.3949630082162
34 854.8943514010089
35 852.2630228507501
36 849.4904001520829
37 846.5648985409571
38 843.4738474024027
39 840.2034344600186
40 836.7386904645787
41 833.063544079041
42 829.1609947726868
43 825.0134796837673
44 820.6035539374298
45 815.9150703886328
46 810.9351438457842
47 805.6573257588102
48 800.0865976476773
49 794.2469781265381
50 788.1925885333343
51 782.0225479579994
52 775.898199029485
53 770.0564150296796
54 764.8039653311268
55 760.4688673389976
56 757.2929187670451
57 755.3023390375763
58 754.2672366495963
59 753.8256841536671
60 753.6685438700401
61 753.619993570175
62 753.6063543359508
63 753.6027346285179
64 753.6018037351533
65 753.601568267135
66 753.6015092100265
67 753.601494462039
68 753.6014907887106
69 753.6014898812546
70 753.6014896826166
71 753.6014897395092
72 753.601490170193
73 753.6014919095405
74 753.6014987380806
75 753.6015255006284
76 753.6016303665517
77 753.602041352367
78 753.6036514124133
79 753.6099638645105
80 753.6346720066709
81 753.7316650514454
82 754.1097074452814
83 755.5959994997058
84 761.2170186342629
85 782.3616728324131
86 839.7242689213219
87 947.4866771926982
88 857.4699877558302
89 802.3820107211028
90 778.6472220261489
91 771.2017368689822
92 765.5866829418633
93 761.1148555762665
94 757.8825548351401
95 756.0128641512408
96 755.6007852090613
97 757.447139475298
98 764.9244410329911
99 791.2484420481214
* new model params: [[-0.014337638336448744], [1.8730398776699777], [1.9891406713721984]]
</code>
Now that we have trained our model, let's plot the results._____no_output_____
<code>
an1.plot_model()(-3.0, 3.0)
</code>
Now try to train another network using SGD. Use only 1 epoch since with SGD, we are updating the weights with every training point (so $n$ times per epoch)._____no_output_____
<code>
an2 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an2.W.tolist()))
an2.fit(X, y, n_epochs=1, method='SGD', save_fig=False)
print('* new model params: {}'.format(an2.W.tolist()))* init model params: [[-0.4167578474054706], [-0.056266827226329474], [-2.136196095668454]]
* new model params: [[-0.3089337810149369], [1.2172382480136865], [1.6614794435241786]]
</code>
Plot the difference in loss evolution between batch and stochastic gradient descent._____no_output_____
<code>
plt.plot(an1.loss_history[:], label='batch GD')
plt.plot(an2.loss_history[::100], label='stochastic GD')
#plt.ylim(0, 2000)
plt.legend()
plt.show()_____no_output_____an2.plot_model()(-3.0, 3.0)
</code>
## Logistic regression
Our single-layer network with a logistic activation function is very similar to the logistic regression we saw in a previous tutorial. We can easily compare our result with the logistic regression from the `sklearn` toolbox._____no_output_____
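One difference worth noting: `sklearn`'s `LogisticRegression` minimizes a (regularized) log-loss, whereas our network minimizes the squared error. A minimal sketch of the two losses on hypothetical predictions (the labels and outputs below are made up for illustration):_____no_output_____
<code>
import numpy as np

y_true = np.array([0, 0, 1, 1])          # hypothetical labels
y_pred = np.array([0.1, 0.4, 0.6, 0.9])  # hypothetical sigmoid outputs

squared_error = np.sum((y_pred - y_true) ** 2)
log_loss = -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))
print(squared_error, log_loss)  # both shrink as y_pred approaches y_true_____no_output_____
</code>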
<code>
from sklearn.linear_model import LogisticRegression
X, y = make_blobs(n_samples=1000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X, y)
print(log_reg.coef_)
print(log_reg.intercept_)[[1.5698506 1.81179711]]
[-0.50179977]
x0, x1 = np.meshgrid(
np.linspace(-3, 3.1, 62).reshape(-1, 1),
np.linspace(-3, 4.1, 72).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
zz = y_proba[:, 1].reshape(x0.shape)
plt.figure(figsize=(4, 4))
contour = plt.contourf(x0, x1, zz, alpha=0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='gray')
# decision boundary
x_bounds = np.array([-3, 3])
boundary = -(log_reg.coef_[0][0] * x_bounds + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.plot(x_bounds, boundary, "k-", linewidth=3)
plt.xlim(-3, 3)
plt.ylim(-3, 4)
plt.show()_____no_output__________no_output_____
</code>
| {
"repository": "heprom/cvml",
"path": "corrections/single_layer_neural_network_cor.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 3,
"size": 597192,
"hexsha": "cb6fc042cd9bd318eea9ff897449d7c005244c84",
"max_line_length": 178674,
"avg_line_length": 611.2507676561,
"alphanum_fraction": 0.9403659125
} |
# Notebook from j-berg/explore_colon
Path: explore.ipynb
# Analyzing colon tumor gene expression data
Data sources:
- https://dx.doi.org/10.1038%2Fsdata.2018.61
- https://www.ncbi.nlm.nih.gov/gds?term=GSE8671
- https://www.ncbi.nlm.nih.gov/gds?term=GSE20916_____no_output_____### 1. Initialize the environment and variables
Upon launching this page, run the code below to initialize the analysis environment by selecting the cell and pressing `Shift + Enter`._____no_output_____
<code>
#Set path to this directory for accessing and saving files
import os
import warnings
warnings.filterwarnings('ignore')
__path__ = os.getcwd() + os.path.sep
print('Current path: ' + __path__)
from local_utils import init_tcga, init_GSE8671, init_GSE20916, sort_data
from local_utils import eval_gene, make_heatmap
%matplotlib inline
# Read data
print("Loading data. Please wait...")
tcga_scaled, tcga_data, tcga_info, tcga_palette = init_tcga()
GSE8671_scaled, GSE8671_data, GSE8671_info, GSE8671_palette = init_GSE8671()
GSE20916_scaled, GSE20916_data, GSE20916_info, GSE20916_palette = init_GSE20916()
print("Data import complete. Continue below...")_____no_output_____
</code>
### 2a. Explore a gene of interest in the Unified TCGA data or GSE8671 and GSE20916
- In the first line, edit the gene name (human) within the quotes
- Press `Shift + Enter`_____no_output_____
<code>
gene = "FABP1" # <-- edit between the quotation marks here
# Do not edit below this line
# ------------------------------------------------------------------------
print("Running analysis. Please wait...\n\n")
eval_gene(gene, tcga_data, tcga_info, tcga_palette, 'TCGA (unified)')
eval_gene(gene, GSE8671_data, GSE8671_info, GSE8671_palette, 'GSE8671')
eval_gene(gene, GSE20916_data, GSE20916_info, GSE20916_palette, 'GSE20916')_____no_output_____
</code>
### 2b. Explore a set of genes in the Unified TCGA data or GSE8671 and GSE20916
- Between the brackets, edit the gene names (human) within the quotes
- If you want fewer than the provided number of genes, remove the corresponding lines
- If you want more genes, add a line for each, with the gene name in quotes followed by a comma outside of the quotes
- Press `Shift + Enter`_____no_output_____
<code>
gene_list = [
"FABP1", # <-- edit between the quote marks here
"ME1",
"ME2",
"PC", # <-- add more genes by adding a line, the gene name between quotes, and a comma after that quote
]
# Do not edit below this line
# ------------------------------------------------------------------------
print("Running analysis. Please wait...\n\n")
make_heatmap(gene_list, tcga_scaled, tcga_info, tcga_palette, 'TCGA (unified)')
make_heatmap(gene_list, GSE8671_scaled, GSE8671_info, GSE8671_palette, 'GSE8671')
make_heatmap(gene_list, sort_data(GSE20916_scaled, GSE20916_info, ['adenoma', 'adenocarcinoma','normal_colon']), GSE20916_info, GSE20916_palette, 'GSE20916')_____no_output_____
</code>
| {
"repository": "j-berg/explore_colon",
"path": "explore.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": null,
"size": 4457,
"hexsha": "cb710b7e771a240e75b7dfe1537bb4864ec2a733",
"max_line_length": 163,
"avg_line_length": 32.5328467153,
"alphanum_fraction": 0.5831276643
} |
# Notebook from veeralakrishna/DataCamp-Portofolio-Projects-R
Path: Modeling the Volatility of US Bond Yields/notebook.ipynb
## 1. Volatility changes over time
<p>What is financial risk? </p>
<p>Financial risk has many faces, and we measure it in many ways, but for now, let's agree that it is a measure of the possible loss on an investment. In financial markets, where we measure prices frequently, volatility (which is analogous to <em>standard deviation</em>) is an obvious choice to measure risk. But in real markets, volatility changes with the market itself. </p>
<p><img src="https://assets.datacamp.com/production/project_738/img/VolaClusteringAssetClasses.png" alt=""></p>
<p>In the picture above, we see the returns of four very different assets. All of them exhibit alternating regimes of low and high volatilities. The highest volatility is observed around the end of 2008 - the most severe period of the recent financial crisis.</p>
<p>In this notebook, we will build a model to study the nature of volatility in the case of US government bond yields.</p>_____no_output_____
<code>
# Load the packages
library(xts)
library(readr)
# Load the data
yc_raw <- read_csv("datasets/FED-SVENY.csv")
# Convert the data into xts format
yc_all <- as.xts(x = yc_raw[, -1], order.by = yc_raw$Date)
# Show only the tail of the 1st, 5th, 10th, 20th and 30th columns
yc_all_tail <- tail(yc_all[,c(1,5,10, 20, 30)])
yc_all_tailLoading required package: zoo
Attaching package: 'zoo'
The following objects are masked from 'package:base':
as.Date, as.Date.numeric
Parsed with column specification:
cols(
.default = col_double(),
Date = [34mcol_date(format = "")[39m
)
See spec(...) for full column specifications.
</code>
## 2. Plotting the evolution of bond yields
<p>In the output table of the previous task, we see the yields for some maturities.</p>
<p>These data include the whole yield curve. The yield of a bond is the price of the money lent. The higher the yield, the more money you receive on your investment. The yield curve has many maturities; in this case, it ranges from 1 year to 30 years. Different maturities have different yields, but yields of neighboring maturities are relatively close to each other and also move together.</p>
<p>Let's visualize the yields over time. We will see that the long yields (e.g. SVENY30) tend to be more stable in the long term, while the short yields (e.g. SVENY01) vary a lot. These movements are related to the monetary policy of the FED and economic cycles.</p>_____no_output_____
<code>
library(viridis)
# Define plot arguments
yields <- yc_all
plot.type <- "single"
plot.palette <- viridis(n = 30)
asset.names <- colnames(yc_all)
# Plot the time series
plot.zoo(x = yc_all, plot.type = plot.type, col = plot.palette)
# Add the legend
legend(x = "topleft", legend = asset.names,
col = plot.palette, cex = 0.45, lwd = 3)Loading required package: viridisLite
</code>
## 3. Make the difference
<p>In the output of the previous task, we see the level of bond yields for some maturities, but to understand how volatility evolves we have to examine the changes in the time series. Currently, we have yield levels; we need to calculate the changes in the yield levels. This is called "differencing" in time series analysis. Differencing has the added benefit of making a time series (approximately) stationary, i.e. independent of time.</p>_____no_output_____
<code>
# Differentiate the time series
ycc_all <- diff.xts(yc_all)
# Show the tail of the 1st, 5th, 10th, 20th and 30th columns
ycc_all_tail <- tail(ycc_all[, c(1, 5, 10, 20, 30)])
ycc_all_tail_____no_output_____
</code>
## 4. The US yields are no exception, but maturity matters
<p>Now that we have a time series of the changes in US government yields let's examine it visually.</p>
<p>By taking a look at the time series from the previous plots, we see hints that the returns following each other have some unique properties:</p>
<ul>
<li>The direction (positive or negative) of a return is mostly independent of the previous day's return. In other words, you don't know if the next day's return will be positive or negative just by looking at the time series.</li>
<li>The magnitude of the return is similar to the previous day's return. That means, if markets are calm today, we expect the same tomorrow. However, in a volatile market (crisis), you should expect a similarly turbulent tomorrow.</li>
</ul>_____no_output_____
<code>
# Define the plot parameters
yield.changes <- ycc_all
plot.type <- "multiple"
# Plot the differentiated time series
plot.zoo(x = yield.changes, plot.type = plot.type,
ylim = c(-0.5, 0.5), cex.axis = 0.7,
ylab = 1:30, col = plot.palette)_____no_output_____
</code>
## 5. Let's dive into some statistics
<p>The statistical properties visualized earlier can be measured by analytical tools. The simplest method is to test for autocorrelation. Autocorrelation measures how a datapoint's past determines the future of a time series. </p>
<ul>
<li>If the autocorrelation is close to 1, the next day's value will be very close to today's value. </li>
<li>If the autocorrelation is close to 0, the next day's value will be unaffected by today's value.</li>
</ul>
<p>Because we are interested in the recent evolution of bond yields, we will filter the time series for data from 2000 onward.</p>_____no_output_____
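<p>Formally, the lag-k autocorrelation estimated by the acf function below is the standard ratio</p>

$$\rho_k = \frac{\mathrm{Cov}(x_t,\, x_{t-k})}{\mathrm{Var}(x_t)},$$

<p>computed over all pairs of observations k days apart.</p>_____no_output_____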
<code>
# Filter for changes in and after 2000
ycc <- ycc_all["2000/",]
# Save the 1-year and 20-year maturity yield changes into separate variables
x_1 <- ycc[,"SVENY01"]
x_20 <- ycc[, "SVENY20"]
# Plot the autocorrelations of the yield changes
par(mfrow=c(2,2))
acf_1 <- acf(x_1)
acf_20 <- acf(x_20)
# Plot the autocorrelations of the absolute changes of yields
acf_abs_1 <- acf(abs(x_1))
acf_abs_20 <- acf(abs(x_20))_____no_output_____
</code>
## 6. GARCH in action
<p>A Generalized AutoRegressive Conditional Heteroskedasticity (<a href="https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity">GARCH</a>) model is the most well known econometric tool to handle changing volatility in financial time series data. It assumes a hidden volatility variable that has a long-run average it tries to return to while the short-run behavior is affected by the past returns.</p>
<p>The most popular form of the GARCH model assumes that the volatility follows this process:</p>

$$\sigma^2_t = \omega + \alpha \, \varepsilon^2_{t-1} + \beta \, \sigma^2_{t-1}$$

<p>where $\sigma_t$ is the current volatility, $\sigma_{t-1}$ is the previous day's volatility, and $\varepsilon_{t-1}$ is the previous day's return. The estimated parameters are $\omega$, $\alpha$, and $\beta$.</p>
<p>For GARCH modeling we will use <a href="https://cran.r-project.org/web/packages/rugarch/index.html"><code>rugarch</code></a> package developed by Alexios Ghalanos.</p>_____no_output_____
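<p>To make the recursion concrete, here is a minimal Python sketch of the GARCH(1,1) variance path (illustrative only; the notebook itself fits the model with R's rugarch):</p>_____no_output_____
<code>
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) process."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2_____no_output_____
</code>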
<code>
library(rugarch)
# Specify the GARCH model with the skewed t-distribution
spec <- ugarchspec(distribution.model = "sstd")
# Fit the model
fit_1 <- ugarchfit(x_1, spec = spec)
# Save the volatilities and the rescaled residuals
vol_1 <- sigma(fit_1)
res_1 <- scale(residuals(fit_1, standardize = TRUE)) * sd(x_1) + mean(x_1)
# Plot the yield changes with the estimated volatilities and residuals
merge_1 <- merge.xts(x_1, vol_1, res_1)
plot.zoo(merge_1)Loading required package: parallel
Attaching package: 'rugarch'
The following object is masked from 'package:stats':
sigma
</code>
## 7. Fitting the 20-year maturity
<p>Let's do the same for the 20-year maturity. As we can see in the plot from Task 6, the bond yields of various maturities show similar but slightly different characteristics. These different characteristics can be the result of multiple factors such as the monetary policy of the FED or the fact that the investors might be different.</p>
<p>Are there differences between the 1-year maturity and 20-year maturity plots?</p>_____no_output_____
<code>
# Fit the model
fit_20 <- ugarchfit(x_20, spec = spec)
# Save the volatilities and the rescaled residuals
vol_20 <- sigma(fit_20)
res_20 <- scale(residuals(fit_20, standardize = TRUE)) * sd(x_20) + mean(x_20)
# Plot the yield changes with the estimated volatilities and residuals
merge_20 <- merge.xts(x_20, vol_20, res_20)
plot.zoo(merge_20)_____no_output_____
</code>
## 8. What about the distributions? (Part 1)
<p>From the plots in Task 6 and Task 7, we can see that the 1-year GARCH model shows a similar but more erratic behavior compared to the 20-year GARCH model. Not only does the 1-year model have greater volatility, but the volatility of its volatility is larger than that of the 20-year model. That brings us to two statistical facts of financial markets not yet mentioned.</p>
<ul>
<li>The unconditional (before GARCH) distribution of the yield differences has heavier tails than the normal distribution.</li>
<li>The distribution of the yield differences adjusted by the GARCH model has lighter tails than the unconditional distribution, but they are still heavier than the normal distribution.</li>
</ul>
<p>Let's find out what the fitted GARCH model did with the distribution we examined.</p>_____no_output_____
<code>
# Calculate the kernel density for the 1-year maturity and residuals
density_x_1 <- density(x_1)
density_res_1 <- density(res_1)
# Plot the density diagram for the 1-year maturity and residuals
plot(density_x_1)
lines(density_res_1, col = "red")
# Add the normal distribution to the plot
norm_dist <- dnorm(seq(-0.4, 0.4, by = .01), mean = mean(x_1), sd = sd(x_1))
lines(seq(-0.4, 0.4, by = .01),
norm_dist,
col = "darkgreen"
)
# Add legend
legend <- c("Before GARCH", "After GARCH", "Normal distribution")
legend("topleft", legend = legend,
col = c("black", "red", "darkgreen"), lty=c(1,1))_____no_output_____
</code>
## 9. What about the distributions? (Part 2)
<p>In the previous plot, we see that the two distributions from the GARCH models are different from the normal distribution of the data, but the tails, where the differences are the most profound, are hard to see. Using a Q-Q plot will help us focus in on the tails.</p>
<p>You can read an excellent summary of Q-Q plots <a href="https://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot">here</a>.</p>_____no_output_____
<code>
# Define the data to plot: the 1-year maturity yield changes and residuals
data_orig <- x_1
data_res <- res_1
# Define the benchmark distribution
distribution <- qnorm
# Make the Q-Q plot of original data with the line of normal distribution
qqnorm(data_orig, ylim = c(-0.5, 0.5))
qqline(data_orig, distribution = distribution, col = "darkgreen")
# Make the Q-Q plot of GARCH residuals with the line of normal distribution
par(new=TRUE)
# 0.6142... is an empirical scaling factor so both series share the same axes
qqnorm(data_res * 0.614256270265139, col = "red", ylim = c(-0.5, 0.5))
qqline(data_res * 0.614256270265139, distribution = distribution, col = "darkgreen")
legend("topleft", c("Before GARCH", "After GARCH"), col = c("black", "red"), pch=c(1,1))_____no_output_____
</code>
## 10. A final quiz
<p>In this project, we fitted a GARCH model to develop a better understanding of how bond volatility evolves and how it affects the probability distribution. In the final task, we will evaluate our model. Did the model succeed, or did it fail?</p>_____no_output_____
<code>
# Q1: Did GARCH reveal how volatility changed over time? Yes or No?
(Q1 <- "Yes")
# Q2: Did GARCH bring the residuals closer to normal distribution? Yes or No?
(Q2 <- "Yes")
# Q3: Which time series of yield changes deviates more
# from a normally distributed white noise process? Choose 1 or 20.
(Q3 <- 1)_____no_output_____
</code>
| {
"repository": "veeralakrishna/DataCamp-Portofolio-Projects-R",
"path": "Modeling the Volatility of US Bond Yields/notebook.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 13,
"size": 847929,
"hexsha": "cb71a4786ce4cc7e119dc951796315aa0759257b",
"max_line_length": 847929,
"avg_line_length": 847929,
"alphanum_fraction": 0.9497481511
} |
# Notebook from zealseeker/deepchem
Path: examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb
# Tutorial Part 2: Learning MNIST Digit Classifiers
In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in DeepChem. You might ask, why are we bothering to learn this material in DeepChem? Part of the reason is that image processing is an increasingly important part of AI for the life sciences. So learning how to train image processing models will be very useful for using some of the more advanced DeepChem features.
The MNIST dataset contains handwritten digits along with their human annotated labels. The learning challenge for this dataset is to train a model that maps the digit image to its true label. MNIST has been a standard benchmark for machine learning for decades at this point.

## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb)
## Setup
We recommend running this tutorial on Google colab. You'll need to run the following cell of installation commands on Colab to get your environment set up. If you'd rather run the tutorial locally, make sure you don't run these commands (since they'll download and install a new Anaconda python setup)_____no_output_____
<code>
%%capture
%tensorflow_version 1.x
!wget -c https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')_____no_output_____from tensorflow.examples.tutorials.mnist import input_data_____no_output_____# TODO: This is deprecated. Let's replace with a DeepChem native loader for maintainability.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)WARNING:tensorflow:From <ipython-input-3-a839aeb82f4b>:1: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/base.py:252: _internal_retry.<locals>.wrap.<locals>.wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please use urllib or similar directly.
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
import deepchem as dc
import tensorflow as tf
from tensorflow.keras.layers import Reshape, Conv2D, Flatten, Dense, Softmax/usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=FutureWarning)
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)_____no_output_____keras_model = tf.keras.Sequential([
Reshape((28, 28, 1)),                                      # flat 784-vector -> 28x28 grayscale image
Conv2D(filters=32, kernel_size=5, activation=tf.nn.relu),
Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu),
Flatten(),                                                 # back to a vector for the dense layers
Dense(1024, activation=tf.nn.relu),
Dense(10),                                                 # one logit per digit class
Softmax()
])
model = dc.models.KerasModel(keras_model, dc.models.losses.CategoricalCrossEntropy())_____no_output_____model.fit(train, nb_epoch=2)WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/keras_model.py:169: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/optimizers.py:76: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/keras_model.py:258: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/keras_model.py:260: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/keras_model.py:200: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(model.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
print("class %s:auc=%s" % (i, roc_auc[i]))Validation
class 0:auc=0.9999057979948827
class 1:auc=0.9999335476621387
class 2:auc=0.9998705637425881
class 3:auc=0.999911789233876
class 4:auc=0.9999623237852037
class 5:auc=0.9998804023326087
class 6:auc=0.9998620230088834
class 7:auc=0.9995460674157303
class 8:auc=0.9998530924048773
class 9:auc=0.9996017892577271
</code>
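The per-class AUC scores can be summarized into a single number; a minimal sketch, assuming the `roc_auc` dict built above:_____no_output_____
<code>
# Macro-average: unweighted mean of the per-class AUC scores
macro_auc = np.mean(list(roc_auc.values()))
print("macro-average AUC = %s" % macro_auc)_____no_output_____
</code>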
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!_____no_output_____
| {
"repository": "zealseeker/deepchem",
"path": "examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb",
"matched_keywords": [
"STAR",
"drug discovery"
],
"stars": 1,
"size": 16269,
"hexsha": "cb728ebea24a239b2f459b9dc1596e3912a530b6",
"max_line_length": 593,
"avg_line_length": 45.5714285714,
"alphanum_fraction": 0.5946278198
} |
# Notebook from PabloSoto1995/teachopencadd
Path: teachopencadd/talktorials/T008_query_pdb/talktorial.ipynb
# T008 · Protein data acquisition: Protein Data Bank (PDB)
Authors:
- Anja Georgi, CADD seminar, 2017, Charité/FU Berlin
- Majid Vafadar, CADD seminar, 2018, Charité/FU Berlin
- Jaime Rodríguez-Guerra, Volkamer lab, Charité
- Dominique Sydow, Volkamer lab, Charité_____no_output_______Talktorial T008__: This talktorial is part of the TeachOpenCADD pipeline described in the first TeachOpenCADD publication ([_J. Cheminform._ (2019), **11**, 1-7](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x)), comprising of talktorials T001-T010._____no_output_____## Aim of this talktorial
In this talktorial, we conduct the groundwork for the next talktorial where we will generate a ligand-based ensemble pharmacophore for EGFR. Therefore, we
(i) fetch all PDB IDs for EGFR from the PDB database,
(ii) retrieve five protein-ligand structures, which have the best structural quality and are derived from X-ray crystallography, and
(iii) align all structures to each other in 3D as well as extract and save the ligands to be used in the next talktorial._____no_output_____### Contents in Theory
* Protein Data Bank (PDB)
* Python package `pypdb`_____no_output_____### Contents in Practical
* Select query protein
* Get all PDB IDs for query protein
* Get statistics on PDB entries for query protein
* Get meta information on PDB entries
* Filter and sort meta information on PDB entries
* Get meta information of ligands from top structures
* Draw top ligand molecules
* Create protein-ligand ID pairs
* Get the PDB structure files
* Align PDB structures_____no_output_____### References
* Protein Data Bank
([PDB website](http://www.rcsb.org/))
* `pypdb` python package
([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/))
* Molecular superposition with the python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd))_____no_output_____## Theory_____no_output_____### Protein Data Bank (PDB)
The Protein Data Bank (PDB) is one of the most comprehensive structural biology information database and a key resource in areas of structural biology, such as structural genomics and drug design ([PDB website](http://www.rcsb.org/)).
Structural data is generated from structural determination methods such as X-ray crystallography (most common method), nuclear magnetic resonance (NMR), and cryo electron microscopy (cryo-EM).
For each entry, the database contains (i) the 3D coordinates of the atoms and the bonds connecting these atoms for proteins, ligand, cofactors, water molecules, and ions, as well as (ii) meta information on the structural data such as the PDB ID, the authors, the deposition date, the structural determination method used and the structural resolution.
The structural resolution is a measure of the quality of the data that has been collected and has the unit Å (Angstrom). The lower the value, the higher the quality of the structure.
The PDB website offers a 3D visualization of the protein structures (with ligand interactions if available) and a structure quality metrics, as can be seen for the PDB entry of an example epidermal growth factor receptor (EGFR) with the PDB ID [3UG5](https://www.rcsb.org/structure/3UG5).

Figure 1: The protein structure (in gray) with an interacting ligand (in green) is shown for an example epidermal growth factor receptor (EGFR) with the PDB ID 3UG5 (figure by Dominique Sydow)._____no_output_____### Python package `pypdb`
`pypdb` is a python programming interface for the PDB and works exclusively in Python 3 ([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/)).
This package facilitates the integration of automatic PDB searches within bioinformatics workflows and simplifies the process of performing multiple searches based on the results of existing searches.
It also allows an advanced querying of information on PDB entries.
The PDB currently uses a RESTful API that allows for the retrieval of information via standard HTTP vocabulary. `pypdb` converts these objects into XML strings. _____no_output_____## Practical_____no_output_____
<code>
import collections
import logging
import pathlib
import time
import warnings
import pandas as pd
from tqdm.auto import tqdm
import redo
import requests_cache
import nglview
import pypdb
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from opencadd.structure.superposition.api import align, METHODS
from opencadd.structure.core import Structure
# Disable some unneeded warnings
logger = logging.getLogger("opencadd")
logger.setLevel(logging.ERROR)
warnings.filterwarnings("ignore")
# cache requests -- this will speed up repeated queries to PDB
requests_cache.install_cache("rcsb_pdb", backend="memory")_____no_output_____# define paths
HERE = pathlib.Path(_dh[-1])
DATA = HERE / "data"_____no_output_____
</code>
### Select query protein
We use EGFR as query protein for this talktorial. The UniProt ID of EGFR is `P00533`, which will be used in the following to query the PDB database._____no_output_____### Get all PDB IDs for query protein
First, we get all PDB structures for our query protein EGFR, using the `pypdb` functions `make_query` and `do_search`._____no_output_____
<code>
search_dict = pypdb.make_query("P00533")
found_pdb_ids = pypdb.do_search(search_dict)
print("Sample PDB IDs found for query:", *found_pdb_ids[:3], "...")
print("Number of EGFR structures found:", len(found_pdb_ids))Sample PDB IDs found for query: 1IVO 1M14 1M17 ...
Number of EGFR structures found: 214
</code>
### Get statistics on PDB entries for query protein
Next, we ask the question: How many PDB entries are deposited in the PDB for EGFR per year and how many in total?
Using `pypdb`, we can find all deposition dates of EGFR structures from the PDB database. The number of deposited structures was already determined and is needed to set the parameter `max_results` of the function `find_dates`._____no_output_____
<code>
# Query database
dates = pypdb.find_dates("P00533", max_results=len(found_pdb_ids))_____no_output_____# Example of the first three deposition dates
dates[:3]_____no_output_____
</code>
We extract the year from the deposition dates and calculate a depositions-per-year histogram._____no_output_____
<code>
# Extract year
years = pd.Series([int(date[:4]) for date in dates])
bins = years.max() - years.min() + 1
axes = years.hist(bins=bins)
axes.set_ylabel("New entries per year")
axes.set_xlabel("Year")
axes.set_title("PDB entries for EGFR");_____no_output_____
</code>
### Get meta information for PDB entries
We use `describe_pdb` to get meta information about the structures, which is stored per structure as a dictionary.
Note: we only fetch meta information on PDB structures here, we do not fetch the structures (3D coordinates), yet.
> The `redo.retriable` line is a _decorator_. This wraps the function and provides extra functionality. In this case, it will retry failed queries automatically (10 times maximum)._____no_output_____
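For illustration, a minimal retry decorator could be written as follows (a sketch only; the real `redo.retriable` is more featureful):_____no_output_____
<code>
import time
import functools

def retriable(attempts=10, sleeptime=2):
    """Retry the wrapped function up to `attempts` times, pausing between tries."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of retries, re-raise the last error
                    time.sleep(sleeptime)
        return wrapper
    return decorator_____no_output_____
</code>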
<code>
@redo.retriable(attempts=10, sleeptime=2)
def describe_one_pdb_id(pdb_id):
"""Fetch meta information from PDB."""
described = pypdb.describe_pdb(pdb_id)
if described is None:
print(f"! Error while fetching {pdb_id}, retrying ...")
raise ValueError(f"Could not fetch PDB id {pdb_id}")
return described_____no_output_____pdbs = [describe_one_pdb_id(pdb_id) for pdb_id in found_pdb_ids]
pdbs[0]_____no_output_____
</code>
### Filter and sort meta information on PDB entries
Since we want to use the information to filter for relevant PDB structures, we convert the data set from dictionary to DataFrame for easier handling._____no_output_____
<code>
pdbs = pd.DataFrame(pdbs)
pdbs.head()_____no_output_____print(f"Number of PDB structures for EGFR: {len(pdbs)}")Number of PDB structures for EGFR: 214
</code>
We start filtering our dataset based on the following criteria:_____no_output_____#### 1. Experimental method: X-ray diffraction
We only keep structures resolved by `X-RAY DIFFRACTION`, the most commonly used structure determination method. _____no_output_____
<code>
pdbs = pdbs[pdbs.expMethod == "X-RAY DIFFRACTION"]
print(f"Number of PDB structures for EGFR from X-ray: {len(pdbs)}")Number of PDB structures for EGFR from X-ray: 208
</code>
#### 2. Structural resolution
We only keep structures with a resolution equal to or lower than 3 Å. The lower the resolution value, the higher the quality of the structure (i.e. the higher the certainty that the assigned 3D coordinates of the atoms are correct). Below 3 Å, atomic orientations can be determined, and this value is therefore often used as a threshold for structures relevant to structure-based drug design._____no_output_____
<code>
pdbs.resolution = pdbs.resolution.astype(float) # convert to floats
pdbs = pdbs[pdbs.resolution <= 3.0]
print(f"Number of PDB entries for EGFR from X-ray with resolution <= 3.0 Angstrom: {len(pdbs)}")Number of PDB entries for EGFR from X-ray with resolution <= 3.0 Angstrom: 173
</code>
We sort the data set by the structural resolution. _____no_output_____
<code>
pdbs = pdbs.sort_values(["resolution"], ascending=True, na_position="last")_____no_output_____
</code>
We check the top PDB structures (sorted by resolution): _____no_output_____
<code>
pdbs.head()[["structureId", "resolution"]]_____no_output_____
</code>
#### 3. Ligand-bound structures
Since we will create ensemble ligand-based pharmacophores in the next talktorial, we remove from our DataFrame all PDB structures that do not contain a bound ligand: we use the `pypdb` function `get_ligands` to retrieve the ligand(s) of a PDB structure. PDB-annotated ligands can be ligands or cofactors, but also solvents and ions. In order to keep only ligand-bound structures, we (i) remove all structures without any annotated ligand and (ii) remove all structures that do not contain any ligand with a molecular weight (MW) greater than 100 Da (Dalton), since many solvents and ions weigh less. Note: this is a simple, but not comprehensive, exclusion of solvents and ions. _____no_output_____
<code>
# Get all PDB IDs from DataFrame
pdb_ids = pdbs["structureId"].tolist()_____no_output_____# Remove structures
# (i) without ligand and
# (ii) without any ligands with molecular weight (MW) greater than 100 Da (Dalton)
@redo.retriable(attempts=10, sleeptime=2)
def get_ligands(pdb_id):
"""Decorate pypdb.get_ligands so it retries after a failure."""
return pypdb.get_ligands(pdb_id)
mw_cutoff = 100.0 # Molecular weight cutoff in Da
# This database query may take a moment
passed_pdb_ids = []
removed_pdb_ids = []
progressbar = tqdm(pdb_ids)
for pdb_id in progressbar:
progressbar.set_description(f"Processing {pdb_id}...")
ligand_dict = get_ligands(pdb_id)
# (i) Remove structure if no ligand present
if ligand_dict["ligandInfo"] is None:
removed_pdb_ids.append(pdb_id) # Store ligand-free PDB IDs
# (ii) Remove structure if not a single annotated ligand has a MW above mw_cutoff
else:
# Get ligand information
ligands = ligand_dict["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if type(ligands) == dict:
ligands = [ligands]
# Get MW per annotated ligand
mw_list = [float(ligand["@molecularWeight"]) for ligand in ligands]
# Remove structure if not a single annotated ligand has a MW above mw_cutoff
if sum([mw > mw_cutoff for mw in mw_list]) == 0:
removed_pdb_ids.append(pdb_id) # Store ligand-free PDB IDs
else:
passed_pdb_ids.append(pdb_id) # Remove ligand-free PDB IDs from list_____no_output_____print(
"PDB structures without a ligand (removed from our data set):",
*removed_pdb_ids,
)
print("Number of structures with ligand:", len(passed_pdb_ids))PDB structures without a ligand (removed from our data set): 3P0Y 2EB2 1M14 2GS2 3GOP 5EDP 2RFE 5WB8 4I1Z
Number of structures with ligand: 164
</code>
### Get meta information of ligands from top structures
In the next talktorial, we will build ligand-based ensemble pharmacophores from the top `top_num` structures with the highest resolution._____no_output_____
<code>
top_num = 8 # Number of top structures
selected_pdb_ids = passed_pdb_ids[:top_num]
selected_pdb_ids_____no_output_____
</code>
The selected highest resolution PDB entries can contain ligands targeting different binding sites, e.g. allosteric and orthosteric ligands, which would hamper ligand-based pharmacophore generation. Thus, we will focus on the following 4 structures, which contain ligands in the orthosteric binding pocket. The code provided later in the notebook can be used to verify this._____no_output_____
<code>
selected_pdb_ids = ["5UG9", "5HG8", "5UG8", "3POZ"]_____no_output_____
</code>
We fetch the PDB information about the top `top_num` ligands using `get_ligands` and store it as a *csv* file (one dictionary per ligand).
If a structure contains several ligands, we select the largest one. Note: this is a simple, but not comprehensive, method to select the ligand bound in the binding site of a protein; it may also select a cofactor bound to the protein. Therefore, please check the automatically selected top ligands visually before further usage._____no_output_____
<code>
ligands_list = []
for pdb_id in selected_pdb_ids:
ligands = get_ligands(pdb_id)["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if isinstance(ligands, dict):
ligands = [ligands]
weight = 0
this_ligand = {}  # will hold the largest ligand of this structure
# If several ligands are contained, take the largest
for ligand in ligands:
if float(ligand["@molecularWeight"]) > weight:
this_ligand = ligand
weight = float(ligand["@molecularWeight"])
ligands_list.append(this_ligand)_____no_output_____# NBVAL_CHECK_OUTPUT
# Change the format to DataFrame
ligands = pd.DataFrame(ligands_list)
ligands_____no_output_____ligands.to_csv(DATA / "PDB_top_ligands.csv", header=True, index=False)_____no_output_____
</code>
### Draw top ligand molecules_____no_output_____
<code>
PandasTools.AddMoleculeColumnToFrame(ligands, "smiles")
Draw.MolsToGridImage(
mols=list(ligands.ROMol),
legends=list(ligands["@chemicalID"] + ", " + ligands["@structureId"]),
molsPerRow=top_num,
)_____no_output_____
</code>
### Create protein-ligand ID pairs_____no_output_____
<code>
# NBVAL_CHECK_OUTPUT
pairs = collections.OrderedDict(zip(ligands["@structureId"], ligands["@chemicalID"]))
pairs_____no_output_____
</code>
### Align PDB structures
Since we want to build ligand-based ensemble pharmacophores in the next talktorial, it is necessary to align all structures to each other in 3D.
We will use the Python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd)), which includes a 3D superposition subpackage to guide the structural alignment of the proteins. The approach is based on superposition guided by a sequence alignment of matched residues. There are other methods in the package, but this simple one will be enough for the task at hand._____no_output_____#### Get the PDB structure files
We now fetch the PDB structure files, i.e. 3D coordinates of the protein, ligand (and if available other atomic or molecular entities such as cofactors, water molecules, and ions) from the PDB using `opencadd.structure.superposition`.
Available file formats are *pdb* and *cif*, which store the 3D coordinates of the atoms of the protein (and ligand, cofactors, water molecules, and ions) as well as information on bonds between atoms. Here, we work with *pdb* files._____no_output_____
<code>
# Download PDB structures
structures = [Structure.from_pdbid(pdb_id) for pdb_id in pairs]
structures_____no_output_____
</code>
#### Extract protein and ligand
Extract protein and ligand from the structure in order to remove solvent and other artifacts of crystallography._____no_output_____
<code>
complexes = [
Structure.from_atomgroup(structure.select_atoms(f"protein or resname {ligand}"))
for structure, ligand in zip(structures, pairs.values())
]
complexes_____no_output_____# Write complex to file
for complex_, pdb_id in zip(complexes, pairs.keys()):
complex_.write(DATA / f"{pdb_id}.pdb")_____no_output_____
</code>
#### Align proteins
Align complexes (based on protein atoms)._____no_output_____
<code>
results = align(complexes, method=METHODS["mda"])_____no_output_____
</code>
`nglview` can be used to visualize molecular data within Jupyter notebooks. With the next cell we will visualize out aligned protein-ligand complexes._____no_output_____
<code>
view = nglview.NGLWidget()
for complex_ in complexes:
view.add_component(complex_.atoms)
view_____no_output_____view.render_image(trim=True, factor=2, transparent=True);_____no_output_____view._display_image()_____no_output_____
</code>
#### Extract ligands _____no_output_____
<code>
ligands = [
Structure.from_atomgroup(complex_.select_atoms(f"resname {ligand}"))
for complex_, ligand in zip(complexes, pairs.values())
]
ligands_____no_output_____for ligand, pdb_id in zip(ligands, pairs.keys()):
ligand.write(DATA / f"{pdb_id}_lig.pdb")_____no_output_____
</code>
We check the existence of all ligand *pdb* files._____no_output_____
<code>
ligand_files = []
for file in DATA.glob("*_lig.pdb"):
ligand_files.append(file.name)
ligand_files_____no_output_____
</code>
We can also use `nglview` to depict the co-crystallized ligands alone. As we can see, the selected complexes contain ligands populating the same binding pocket and can thus be used in the next talktorial for ligand-based pharmacophore generation._____no_output_____
<code>
view = nglview.NGLWidget()
for component_id, ligand in enumerate(ligands):
view.add_component(ligand.atoms)
view.remove_ball_and_stick(component=component_id)
view.add_licorice(component=component_id)
view_____no_output_____view.render_image(trim=True, factor=2, transparent=True);_____no_output_____view._display_image()_____no_output_____
</code>
## Discussion
In this talktorial, we learned how to retrieve protein and ligand meta information and structural information from the PDB. We retained only X-ray structures and filtered our data by resolution and ligand availability. Ultimately, we aimed for an aligned set of ligands to be used in the next talktorial for the generation of ligand-based ensemble pharmacophores.
In order to enrich information about ligands for pharmacophore modeling, it is advisable to not only filter by PDB structure resolution, but also to check for ligand diversity (see **Talktorial 005** on molecule clustering by similarity) and to check for ligand activity (i.e. to include only potent ligands). _____no_output_____## Quiz
1. Summarize the kind of data that the Protein Data Bank contains.
2. Explain what the resolution of a structure stands for and how and why we filter for it in this talktorial.
3. Explain what an alignment of structures means and discuss the alignment performed in this talktorial._____no_output_____
| {
"repository": "PabloSoto1995/teachopencadd",
"path": "teachopencadd/talktorials/T008_query_pdb/talktorial.ipynb",
"matched_keywords": [
"structural biology",
"genomics",
"bioinformatics",
"biology"
],
"stars": 2,
"size": 426639,
"hexsha": "cb72a0d9dad626860f53fb2fd1d9b3010a6b87e2",
"max_line_length": 246356,
"avg_line_length": 275.428663654,
"alphanum_fraction": 0.914937453
} |
# Notebook from AngShengJun/dsiCapstone
Path: codes/P5.2 Topic Modeling.ipynb
## P5.2 Topic Modeling_____no_output_____---_____no_output_____### Content
- [Topic Modelling using LDA](#Topic-Modelling-using-LDA)
- [Topic Modeling (Train data)](#Topic-Modeling-(Train-data))
- [Optimal Topic Size](#Optimal-Topic-Size)
- [Binary Classification (LDA topic features)](#Binary-Classification-(LDA-topic-features))
- [Binary Classification (LDA topic and Countvectorizer features)](#Binary-Classification-(LDA-topic-and-Countvectorizer-features))
- [Recommendations (Part2)](#Recommendations-(Part2))
- [Future Work](#Future-Work)_____no_output_____### Topic Modelling using LDA
Inspired by Marc Kelechava's work [https://towardsdatascience.com/unsupervised-nlp-topic-models-as-a-supervised-learning-input-cf8ee9e5cf28] and Andrew Ng et al., 2003.
In this section, I explore whether underlying semantic structures, discovered through the Latent Dirichlet Allocation (LDA) technique (an unsupervised machine learning technique), can be utilized in a supervised text classification problem. LDA application poses a significant challenge due to personal inexperience in the domain, and I allocated approx. a week to reading up on basic LDA applications.
Steps as follows:
- Explore LDA topic modelling, and derive the optimum number of topics (train data)
- Investigate the use of LDA topic distributions as feature vectors for supervised, binary classification (i.e. bomb or non-bomb); see the sketch after this list. If the supervised sensitivity and roc_auc score on the unseen data generalize, it is an indication that the topic model trained on trainsub has identified latent semantic structure that persists over varying motive texts in the identification of bombing incidents.
- Investigate the generalizability of the supervised, binary classification model using feature vectors from both LDA and countvectorizer. _____no_output_____
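As a preview of the classification step, here is a minimal sketch of turning per-document topic distributions into a dense feature matrix (a hypothetical helper; it assumes a fitted gensim `lda_model` and a bag-of-words `corpus`, both of which are built later in this notebook):_____no_output_____
<code>
import numpy as np

def topic_features(lda_model, corpus, num_topics):
    """Dense doc-topic matrix: one row per document, one column per topic."""
    features = np.zeros((len(corpus), num_topics))
    for i, bow in enumerate(corpus):
        # minimum_probability=0.0 ensures every topic is returned for every document
        for topic_id, prob in lda_model.get_document_topics(bow, minimum_probability=0.0):
            features[i, topic_id] = prob
    return features  # usable as X for e.g. LogisticRegression_____no_output_____
</code>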
<code>
import pandas as pd
import numpy as np
import sys
import re
from pprint import pprint
# Gensim
import gensim, spacy, logging, warnings
import gensim.corpora as corpora
from gensim.utils import lemmatize, simple_preprocess
from gensim.models import CoherenceModel
# NLTK Stop words and stemmer
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
# Import library for cross-validation
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'_____no_output_____# Setting - display all columns
pd.set_option('display.max_columns', None)_____no_output_____# Read in the cleaned, feature-engineered data
dframe = pd.read_csv('../assets/wordok.csv',encoding="ISO-8859-1",index_col=0)_____no_output_____# Instantiate the custom list of stopwords for modelling from P5_01
stop_words = stopwords.words('english')
own_stop = ['motive','specific','unknown','attack','sources','noted', 'claimed','stated','incident','targeted',\
'responsibility','violence','carried','government','suspected','trend','speculated','al','sectarian',\
'retaliation','group','related','security','forces','people','bomb','bombing','bombings']
# Extend the stop words
stop_words.extend(own_stop)_____no_output_____own_stopfn = ['death', 'want', 'off', 'momentum', 'star', 'colleg', 'aqi', 'treat', 'reveng', 'them', 'all', 'radio',\
'bodo', 'upcom', 'between', 'prior', 'enter', 'made', 'nimr', 'sectarian', 'muslim', 'past', 'previou',\
'intimid', 'held', 'fsa', 'women', 'are', 'mnlf', 'with', 'pattani', 'shutdown', 'border', 'departur',\
'advoc', 'have', 'eelam', 'across', 'villag', 'foreign', 'kill', 'shepherd', 'yemeni', 'develop', 'pro',\
'road', 'not', 'appear', 'jharkhand', 'spokesperson']_____no_output_____# Extend the Stop words
stop_words.extend(own_stopfn)
# Check the addition of firstset_words
stop_words[-5:]_____no_output_____# Create Train-Test split (80-20 split)
# X is motive text. y is bomb.
X_train,X_test,y_train,y_test = train_test_split(dframe[['motive']],dframe['bomb'],test_size=0.20,\
stratify=dframe['bomb'],\
random_state=42)_____no_output_____dframe.head(1)_____no_output_____
</code>
### Topic Modeling (Train data)_____no_output_____
<code>
def sent_to_words(sentences):
for sent in sentences:
sent = re.sub('\s+', ' ', sent) # remove newline chars
sent = re.sub("\'", "", sent) # remove single quotes
sent = gensim.utils.simple_preprocess(str(sent), deacc=True)
yield(sent)
# Convert to list
data = X_train.motive.values.tolist()
data_words = list(sent_to_words(data))
print(data_words[:1])[['the', 'specific', 'motive', 'is', 'unknown', 'however', 'sources', 'speculate', 'that', 'the', 'attack', 'was', 'part', 'of', 'larger', 'trend', 'of', 'sectarian', 'violence', 'between', 'iraqs', 'minority', 'sunni', 'and', 'majority', 'shiite', 'communities']]
</code>
Utilize Gensim's `Phrases` to build and apply bigram and trigram models. The higher the parameters `min_count` and `threshold`, the harder it is for words to be combined into bigrams._____no_output_____
<code>
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
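# Quick sanity check of the learned phrase model (illustrative: a merged token
# such as 'suicide_bomber' only appears if that pair passed min_count/threshold)
print(bigram_mod[['suicide', 'bomber', 'detonated', 'explosives']])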
def process_words(texts, stop_words=stop_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""Remove Stopwords, Form Bigrams, Trigrams and Lemmatization"""
texts = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
texts = [bigram_mod[doc] for doc in texts]
texts = [trigram_mod[bigram_mod[doc]] for doc in texts]
texts_out = []
# use 'en_core_web_sm' in place of 'en'
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
# remove stopwords once more after lemmatization
texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts_out]
return texts_out
data_ready = process_words(data_words) # processed Text Data_____no_output_____len(data_ready)_____no_output_____# Create Dictionary
id2word = corpora.Dictionary(data_ready)
## Create corpus texts
texts = data_ready
# Create Corpus: Term Document Frequency
corpus = [id2word.doc2bow(text) for text in data_ready]
# View
display(corpus[:4])
# Human readable format of corpus (term-frequency)
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:4]]_____no_output_____
</code>
Gensim creates a unique id for each word in the document. The corpus produced above is a mapping of (word_id, word_frequency); a human-readable form of the corpus follows thereafter.
Next, build an LDA model with 4 topics. Each topic is a combination of keywords, with each keyword contributing a certain weight to the topic._____no_output_____
<code>
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=4,
random_state=42,
update_every=1,
chunksize=100,
passes=10,
alpha='symmetric',
iterations=100,
per_word_topics=True)
pprint(lda_model.print_topics())[(0,
'0.060*"police" + 0.044*"believe" + 0.027*"protest" + 0.020*"response" + '
'0.016*"informant" + 0.014*"intimidate" + 0.011*"rebel" + 0.009*"several" + '
'0.009*"refuse" + 0.009*"often"'),
(1,
'0.067*"however" + 0.052*"election" + 0.036*"area" + 0.026*"attempt" + '
'0.017*"recent" + 0.017*"local" + 0.016*"schedule" + 0.013*"extremist" + '
'0.012*"also" + 0.011*"official"'),
(2,
'0.098*"however" + 0.049*"victim" + 0.043*"state" + 0.030*"posit" + '
'0.029*"military" + 0.026*"campaign" + 0.022*"member" + 0.019*"accuse" + '
'0.018*"maoist" + 0.018*"islamic"'),
(3,
'0.123*"however" + 0.077*"part" + 0.052*"large" + 0.050*"may" + '
'0.045*"shiite" + 0.036*"community" + 0.022*"occur" + 0.021*"sunni" + '
'0.017*"camp" + 0.017*"member"')]
</code>
Interpretation: for topic 0, the top 10 keywords that contribute to this topic are 'police', 'believe', and so on, with the weight of 'police' being 0.060._____no_output_____
<code>
# Compute Perplexity
print(f"Perplexity: {lda_model.log_perplexity(corpus)}") # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=data_ready, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print(f"Coherence Score: {coherence_lda}")Perplexity: -6.809074529210312
Coherence Score: 0.33088686316810534
def format_topics_sentences(ldamodel=None, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row_list in enumerate(ldamodel[corpus]):
row = row_list[0] if ldamodel.per_word_topics else row_list
# print(row)
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=lda_model, corpus=corpus, texts=data_ready)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head(10)_____no_output_____
</code>
The dominant topic with percentage contribution for each document is represented above.
_____no_output_____
<code>
# Display setting to show more characters in column
pd.options.display.max_colwidth = 80
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
sent_topics_sorteddf_mallet = pd.concat([sent_topics_sorteddf_mallet,
grp.sort_values(['Perc_Contribution'], ascending=False).head(1)],
axis=0)
# Reset Index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', "Topic_Perc_Contrib", "Keywords", "Representative Text"]
# Show
sent_topics_sorteddf_mallet.head(10)_____no_output_____
</code>
The documents to which a given topic has contributed the most are displayed above, to facilitate topic inference._____no_output_____
<code>
# 1. Wordcloud of Top N words in each topic
from matplotlib import pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolors
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()] # more colors: 'mcolors.XKCD_COLORS'
cloud = WordCloud(stopwords=stop_words,
background_color='white',
width=2500,
height=1800,
max_words=10,
colormap='tab10',
color_func=lambda *args, **kwargs: cols[i],
prefer_horizontal=1.0)
topics = lda_model.show_topics(formatted=False)
fig, axes = plt.subplots(2, 2, figsize=(10,10), sharex=True, sharey=True)
for i, ax in enumerate(axes.flatten()):
fig.add_subplot(ax)
topic_words = dict(topics[i][1])
cloud.generate_from_frequencies(topic_words, max_font_size=300)
plt.gca().imshow(cloud)
plt.gca().set_title('Topic ' + str(i), fontdict=dict(size=16))
plt.gca().axis('off')
plt.subplots_adjust(wspace=0, hspace=0)
plt.axis('off')
plt.margins(x=0, y=0)
plt.tight_layout()
plt.show()_____no_output_____
</code>
Interpretation of the four topics using the representative text identified above: (topic 0: public unrest and law enforcement), (topic 1: tension amidst elections), (topic 2: military campaigns and terror groups), (topic 3: sectarian violence).
Note: Changing the `random_state` will also change the topics surfaced; it is currently set to 42._____no_output_____
<code>
# Sentence Coloring of N Sentences
from matplotlib.patches import Rectangle
# Pick documents amongst corpus
def sentences_chart(lda_model=lda_model, corpus=corpus, start = 7, end = 14):
corp = corpus[start:end]
mycolors = [color for name, color in mcolors.TABLEAU_COLORS.items()]
fig, axes = plt.subplots(end-start, 1, figsize=(20, (end-start)*0.95), dpi=160)
axes[0].axis('off')
for i, ax in enumerate(axes):
if i > 0:
corp_cur = corp[i-1]
topic_percs, wordid_topics, wordid_phivalues = lda_model[corp_cur]
word_dominanttopic = [(lda_model.id2word[wd], topic[0]) for wd, topic in wordid_topics]
ax.text(0.01, 0.5, "Doc " + str(i-1) + ": ", verticalalignment='center',
fontsize=16, color='black', transform=ax.transAxes, fontweight=700)
# Draw Rectange
topic_percs_sorted = sorted(topic_percs, key=lambda x: (x[1]), reverse=True)
ax.add_patch(Rectangle((0.0, 0.05), 0.99, 0.90, fill=None, alpha=1,
color=mycolors[topic_percs_sorted[0][0]], linewidth=2))
word_pos = 0.06
for j, (word, topics) in enumerate(word_dominanttopic):
if j < 14:
ax.text(word_pos, 0.5, word,
horizontalalignment='left',
verticalalignment='center',
fontsize=16, color=mycolors[topics],
transform=ax.transAxes, fontweight=700)
word_pos += .009 * len(word) # to move the word for the next iter
ax.axis('off')
ax.text(word_pos, 0.5, '. . .',
horizontalalignment='left',
verticalalignment='center',
fontsize=16, color='black',
transform=ax.transAxes)
plt.subplots_adjust(wspace=0, hspace=0)
plt.suptitle('Sentence Topic Coloring for Documents: ' + str(start) + ' to ' + str(end-2), fontsize=22, y=0.95, fontweight=700)
plt.tight_layout()
plt.show()
sentences_chart()_____no_output_____
</code>
We can review the topic percent contribution for each document. Here document 7 is selected as an example._____no_output_____
<code>
df_dominant_topic[df_dominant_topic['Document_No']==7]_____no_output_____import pyLDAvis.gensim
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda_model, corpus, dictionary=lda_model.id2word)
vis_____no_output_____
</code>
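Before interpreting the view, note that the interactive visualization can also be exported to a standalone HTML file for sharing outside the notebook (a short sketch; `pyLDAvis.save_html` is the library's export helper, and the file name here is only illustrative):_____no_output_____
<code>
# Save the interactive topic visualization as a standalone HTML page
pyLDAvis.save_html(vis, 'lda_topics.html')_____no_output_____
</code>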
Interpretation: On the left-hand plot, each topic is represented by a bubble, and a larger bubble indicates higher prevalence. A good topic model has fairly large, non-overlapping bubbles scattered throughout the chart. The salient keywords and frequency bars on the right-hand chart update as each bubble is reviewed (hover the cursor over a bubble)._____no_output_____### Optimal Topic Size_____no_output_____
<code>
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
"""
Compute c_v coherence for various number of topics
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
texts : List of input texts
    limit : Max num of topics
    start : Starting number of topics
    step : Step size for the number of topics
Returns:
-------
model_list : List of LDA topic models
coherence_values : Coherence values corresponding to the LDA model with respective number of topics
"""
coherence_values = []
model_list = []
for num_topics in range(start, limit, step):
        # use the dictionary passed in (rather than the global id2word)
        model = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=num_topics, id2word=dictionary, random_state=42, update_every=1,\
                                                chunksize=100, passes=10, alpha='symmetric', iterations=100, per_word_topics=True)
model_list.append(model)
coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
coherence_values.append(coherencemodel.get_coherence())
return model_list, coherence_values_____no_output_____# Can take a long time to run (10mins approx)
model_list, coherence_values = compute_coherence_values(dictionary=id2word, corpus=corpus, texts=data_ready, start=5, limit=60, step=12)_____no_output_____# Show graph
limit=60; start=5; step=12;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(["coherence_values"], loc='best')
plt.show()_____no_output_____# Print the coherence scores
for m, cv in zip(x, coherence_values):
print("Num Topics =", m, " has Coherence Value of", round(cv, 4))Num Topics = 5 has Coherence Value of 0.2823
Num Topics = 17 has Coherence Value of 0.4062
Num Topics = 29 has Coherence Value of 0.3934
Num Topics = 41 has Coherence Value of 0.4703
Num Topics = 53 has Coherence Value of 0.4587
</code>
The coherence score peaks at 41 topics, so we pick the model with 41 topics._____no_output_____
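As an aside, instead of hardcoding the list index in the next cell, the best model can be picked programmatically (a sketch assuming `model_list`, `coherence_values`, and `x` from the cells above, with numpy available as `np`):_____no_output_____
<code>
# Pick the model whose coherence score is highest
best_idx = int(np.argmax(coherence_values))
print(f"Best model: {x[best_idx]} topics (coherence {coherence_values[best_idx]:.4f})")
# optimal_model = model_list[best_idx] selects the same model as below_____no_output_____
</code>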
<code>
# Select the model and print the topics
optimal_model = model_list[3]
model_topics = optimal_model.show_topics(formatted=False)
pprint(optimal_model.print_topics(num_words=5))[(23,
'0.400*"civilian" + 0.365*"response" + 0.069*"action" + 0.039*"population" + '
'0.000*"phuket"'),
(37,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(16,
'0.126*"plan" + 0.120*"country" + 0.117*"include" + 0.095*"office" + '
'0.093*"education"'),
(13,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(22,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(3,
'0.183*"however" + 0.159*"large" + 0.157*"part" + 0.151*"may" + '
'0.136*"shiite"'),
(33,
'0.387*"prevent" + 0.218*"pilgrim" + 0.021*"travel" + 0.010*"iranian" + '
'0.000*"phuket"'),
(14,
'0.256*"provide" + 0.175*"oil" + 0.077*"colombian" + 0.076*"facility" + '
'0.000*"ddd"'),
(30,
'0.313*"victim" + 0.200*"however" + 0.141*"accuse" + 0.108*"work" + '
'0.093*"note"'),
(25,
'0.533*"however" + 0.420*"election" + 0.000*"recognize" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(9,
'0.315*"believe" + 0.225*"attempt" + 0.172*"however" + 0.102*"leave" + '
'0.070*"carry"'),
(11,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(8,
'0.624*"military" + 0.137*"school" + 0.101*"however" + 0.033*"bus" + '
'0.000*"phuket"'),
(29,
'0.568*"police" + 0.204*"day" + 0.140*"however" + 0.022*"belong" + '
'0.000*"phuket"'),
(40,
'0.175*"maoist" + 0.175*"islamic" + 0.174*"however" + 0.162*"state" + '
'0.064*"region"'),
(36,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(19,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(20,
'0.000*"dictatorship" + 0.000*"phuket" + 0.000*"tradition" + 0.000*"ddd" + '
'0.000*"grassroot"'),
(32,
'0.245*"however" + 0.242*"assailant" + 0.116*"oppose" + 0.064*"result" + '
'0.063*"lead"'),
(34,
'0.629*"protest" + 0.110*"sabotage" + 0.092*"participate" + '
'0.000*"dictatorship" + 0.000*"tradition"')]
</code>
The dominant topic in each document is identified using the function defined below._____no_output_____
<code>
def format_topics_sentences(ldamodel=None, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row_list in enumerate(ldamodel[corpus]):
row = row_list[0] if ldamodel.per_word_topics else row_list
# print(row)
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=optimal_model, corpus=corpus, texts=data_ready)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head(10)_____no_output_____
</code>
Find the representative document for each topic and display them._____no_output_____
<code>
# Display setting to show more characters in column
pd.options.display.max_colwidth = 80
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
sent_topics_sorteddf_mallet = pd.concat([sent_topics_sorteddf_mallet,
grp.sort_values(['Perc_Contribution'], ascending=False).head(1)],
axis=0)
# Reset Index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', "Topic_Perc_Contrib", "Keywords", "Representative Text"]
# Show
sent_topics_sorteddf_mallet.head(10)_____no_output_____# Specify mds as 'tsne', otherwise TypeError: Object of type 'complex' is not JSON serializable
# complex number had come from coordinate calculation and specifying the "mds"
# Ref1: https://stackoverflow.com/questions/46379763/typeerror-object-of-type-complex-is-not-json-serializable-while-using-pyldavi
# Ref2: https://pyldavis.readthedocs.io/en/latest/modules/API.html#pyLDAvis.prepare
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(topic_model=optimal_model, corpus=corpus, dictionary=optimal_model.id2word,mds='tsne')
vis_____no_output_____
</code>
### Binary Classification (LDA topic features)_____no_output_____
<code>
# Set the dictionary and corpus based on trainsub data
trainid2word = id2word
traincorpus = corpus_____no_output_____# Train model
# Build LDA model on trainsub data, using optimum topics
lda_train = gensim.models.ldamodel.LdaModel(corpus=traincorpus,
id2word=trainid2word,
num_topics=41,
random_state=42,
update_every=1,
chunksize=100,
passes=10,
alpha='symmetric',
iterations=100,
per_word_topics=True)_____no_output_____
</code>
With the LDA model trained on the train data, run the motive text through it using `get_document_topics`. A list comprehension on that output (the second line in the loop) gives the probability distribution over the topics for a specific document, which serves as its feature vector._____no_output_____
<code>
# Make train Vectors
train_vecs = []
for i in range(len(X_train)):
top_topics = lda_train.get_document_topics(traincorpus[i], minimum_probability=0.0)
topic_vec = [top_topics[i][1] for i in range(41)]
train_vecs.append(topic_vec)_____no_output_____# Sanity check; should correspond with the number of optimal topics
print(f"Number of vectors per train text: {len(train_vecs[2])}")
print(f"Length of train vectors: {len(train_vecs)}")
print(f"Length of X_train: {len(X_train)}")Number of vectors per train text: 41
Length of train vectors: 26016
Length of X_train: 26016
# Pass the vectors into numpy array form
X_tr_vec = np.array(train_vecs)
y_tr = np.array(y_train)_____no_output_____# Split the train_vecs for training
X_trainsub,X_validate,y_trainsub,y_validate = train_test_split(X_tr_vec,y_tr,test_size=0.20,stratify=y_tr,random_state=42)_____no_output_____# Instantiate model
lr = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model
model_lr = lr.fit(X_trainsub,y_trainsub)_____no_output_____# Generate predictions from validate set
# Cross-validate 10 folds
predictions = cross_val_predict(model_lr, X_validate, y_validate, cv = 10)
print(f"Accuracy on validate set: {round(cross_val_score(model_lr, X_validate, y_validate, cv = 10).mean(),4)}")Accuracy on validate set: 0.582
# Confusion matrix for validate set using LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_validate, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df_____no_output_____# return nparray as a 1-D array.
confusion_matrix(y_validate, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_validate, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr.predict_proba(X_validate)]
pred_df = pd.DataFrame({'test_values': y_validate,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")Specificity: 0.3814
Sensitivity: 0.8024
roc_auc: 0.6121
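# Aside (not part of the original analysis): the roc_auc score above summarizes
# the ROC curve; the curve itself can be plotted from the same y_validate and
# pred_proba values using sklearn.metrics.roc_curve.
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_validate, pred_proba)
plt.plot(fpr, tpr, label='LR on topic features')
plt.plot([0, 1], [0, 1], linestyle='--', label='chance')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()_____no_output_____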
def sent_to_words(sentences):
for sent in sentences:
        sent = re.sub(r'\s+', ' ', sent)  # remove newline chars (raw string avoids the invalid-escape warning)
        sent = re.sub(r"\'", "", sent)  # remove single quotes
sent = gensim.utils.simple_preprocess(str(sent), deacc=True)
yield(sent)
# Convert to list
data = X_test.motive.values.tolist()
data_words = list(sent_to_words(data))
print(data_words[:1])_____no_output_____
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def process_words(texts, stop_words=stop_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""Remove Stopwords, Form Bigrams, Trigrams and Lemmatization"""
texts = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
texts = [bigram_mod[doc] for doc in texts]
texts = [trigram_mod[bigram_mod[doc]] for doc in texts]
texts_out = []
# use 'en_core_web_sm' in place of 'en'
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
# remove stopwords once more after lemmatization
texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts_out]
return texts_out
data_ready = process_words(data_words) # processed Text Data_____no_output_____# Using train dict on new unseen test words
testcorpus = [trainid2word.doc2bow(text) for text in data_ready]_____no_output_____# Use the LDA model from trained data on the unseen test corpus
# Code block similar to that for training code, except
# use the LDA model from the training data, and run them through the unseen test reviews
test_vecs = []
for i in range(len(X_test)):
top_topics = lda_train.get_document_topics(testcorpus[i], minimum_probability=0.0)
topic_vec = [top_topics[i][1] for i in range(41)]
test_vecs.append(topic_vec)_____no_output_____print(f"Length of test vectors: {len(test_vecs)}")
print(f"Length of X_test: {len(X_test)}")Length of test vectors: 6505
Length of X_test: 6505
# Pass the vectors into numpy array form
X_ts_vec = np.array(test_vecs)
y_ts = np.array(y_test)_____no_output_____# Instantiate model
lr = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model on the training vectors; fitting on the test vectors themselves
# would leak the test labels and say nothing about generalization
model_lr = lr.fit(X_tr_vec,y_tr)
# Generate predictions from test set
predictions = lr.predict(X_ts_vec)
print(f"Accuracy on test set: {round(model_lr.score(X_ts_vec, y_ts),4)}")Accuracy on test set: 0.6034
# Confusion matrix for test set using LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_ts, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df_____no_output_____# return nparray as a 1-D array.
confusion_matrix(y_ts, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_ts, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr.predict_proba(X_ts_vec)]
pred_df = pd.DataFrame({'test_values': y_ts,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")Specificity: 0.4147
Sensitivity: 0.8106
roc_auc: 0.6374
# Summary of the topic modeling + LR model scores in Dataframe
summary_df = pd.DataFrame({'accuracy' : [0.5820, 0.6034],
'specificity' : [0.3814, 0.4147],
'sensitivity' : [0.8024, 0.8106],
'roc_auc' : [0.6121, 0.6374]})
# Transpose dataframe
summary_dft = summary_df.T
# Rename columns
summary_dft.columns = ['Validate set','Test set']
print("Topic modeling + LR classifier scores: ")
display(summary_dft)Topic modeling + LR classifier scores:
</code>
From the sensitivity and roc_auc scores, the model is not overfitted, as test sensitivity and roc_auc are higher than on the validate set. Before proceeding, here is a recap of the steps taken so far:
- Topic modeling using the train dataset.
- Find the optimum number of topics based on the coherence score.
- Train the LDA model on the train data. The probability distributions of the topics are then used as feature vectors in the Logistic Regression model for binary classification (bomb vs. non-bomb) on the validate data set.
- Thereafter, the trained LDA model is used to derive probability distributions of the topics from the test data.
- Run the Logistic Regression model on these topic probability distributions, to see if the model generalizes.
In the next section, the topic probability distributions are added to the count-vectorized word features for both the train and test datasets. The combined features are then run through the Logistic Regression model to determine overall model generalizability._____no_output_____### Binary Classification (LDA topic and Countvectorizer features)_____no_output_____
<code>
# Instantiate porterstemmer
p_stemmer = PorterStemmer()_____no_output_____# Define function to convert a raw selftext to a string of words
def selftext_to_words(motive_text):
# 1. Remove non-letters.
letters_only = re.sub("[^a-zA-Z]", " ", motive_text)
# 2. Split into individual words
words = letters_only.split()
# 3. In Python, searching a set is much faster than searching
# a list, so convert the stopwords to a set.
stops = set(stop_words)
    # 4. Remove stopwords.
    meaningful_words = [w for w in words if w not in stops]
    # 5. Stem the remaining words (stemming meaningful_words rather than
    # words, so the stopword removal above is not thrown away).
    meaningful_words = [p_stemmer.stem(w) for w in meaningful_words]
# 6. Join the words back into one string separated by space,
# and return the result
return(" ".join(meaningful_words))_____no_output_____#Initialize an empty list to hold the clean test text.
X_train_clean = []
X_test_clean = []
for text in X_train['motive']:
"""Convert text to words, then append to X_train_clean."""
X_train_clean.append(selftext_to_words(text))
for text in X_test['motive']:
    """Convert text to words, then append to X_test_clean."""
    X_test_clean.append(selftext_to_words(text))_____no_output_____# Instantiate our CountVectorizer
cv = CountVectorizer(ngram_range=(1,2),max_df=0.9,min_df=3,max_features=10000)
# Fit and transform on whole training data
X_train_cleancv = cv.fit_transform(X_train_clean)
# Transform test data
X_test_cleancv = cv.transform(X_test_clean)_____no_output_____# Add word vectors (topic modeling) to the sparse matrices
# Ref: https://stackoverflow.com/questions/55637498/numpy-ndarray-sparse-matrix-to-dense
# Ref: https://kite.com/python/docs/scipy.sparse
# Convert sparse matrix to dense
X_tr_dense = X_train_cleancv.toarray()
X_ts_dense = X_test_cleancv.toarray()
# add numpy array (train and test topic model vectors to dense matrix)
X_tr_dense_tm = np.concatenate((X_tr_dense,X_tr_vec),axis=1)
X_ts_dense_tm = np.concatenate((X_ts_dense,X_ts_vec),axis=1)_____no_output_____from scipy.sparse import csr_matrix
# Convert back to sparse matrix for modeling
X_tr_sparse = csr_matrix(X_tr_dense_tm)
X_ts_sparse = csr_matrix(X_ts_dense_tm)_____no_output_____# Sanity Check
display(X_tr_sparse)
display(X_train_cleancv)_____no_output_____
</code>
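As a memory-friendlier alternative to densifying the count matrix, the same concatenation can be done entirely in sparse form with `scipy.sparse.hstack` (a sketch assuming `X_train_cleancv`, `X_test_cleancv`, `X_tr_vec`, and `X_ts_vec` from the cells above):_____no_output_____
<code>
from scipy.sparse import csr_matrix, hstack
# Stack the sparse count features with the topic vectors without
# materializing the full dense count matrix
X_tr_sparse_alt = hstack([X_train_cleancv, csr_matrix(X_tr_vec)]).tocsr()
X_ts_sparse_alt = hstack([X_test_cleancv, csr_matrix(X_ts_vec)]).tocsr()_____no_output_____
</code>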
---_____no_output_____
<code>
# Instantiate model
lr_comb = LogisticRegression(random_state=42,solver='lbfgs',max_iter=500)
# Fit model on whole training data (without the additional set of stopwords removed in the NB model)
model_lr = lr_comb.fit(X_tr_sparse,y_train)
# Generate predictions from test set
predictions = lr_comb.predict(X_ts_sparse)
print(f"Accuracy on whole test set: {round(model_lr.score(X_ts_sparse, y_test),4)}")Accuracy on whole test set: 0.6893
# Confusion matrix for test set using LR model
# Pass in true values, predicted values to confusion matrix
# Convert Confusion matrix into dataframe
# Positive class (class 1) is bomb
cm = confusion_matrix(y_test, predictions)
cm_df = pd.DataFrame(cm,columns=['pred non-bomb','pred bomb'], index=['Actual non-bomb','Actual bomb'])
cm_df_____no_output_____# return nparray as a 1-D array.
confusion_matrix(y_test, predictions).ravel()
# Save TN/FP/FN/TP values.
tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
# Summary of metrics for LR model
spec = tn/(tn+fp)
sens = tp/(tp+fn)
print(f"Specificity: {round(spec,4)}")
print(f"Sensitivity: {round(sens,4)}")
# To compute the ROC AUC curve, first
# Create a dataframe called pred_df that contains:
# 1. The list of true values of our test set.
# 2. The list of predicted probabilities based on our model.
pred_proba = [i[1] for i in lr_comb.predict_proba(X_ts_sparse)]
pred_df = pd.DataFrame({'test_values': y_test,
'pred_probs':pred_proba})
# Calculate ROC AUC.
print(f"roc_auc: {round(roc_auc_score(pred_df['test_values'],pred_df['pred_probs']),4)}")Specificity: 0.5351
Sensitivity: 0.8587
roc_auc: 0.7621
# Summary of the topic modeling + LR model scores in Dataframe
summary_df = pd.DataFrame({'accuracy' : [0.6859, 0.6034, 0.6893],
'specificity' : [0.5257, 0.4147, 0.5351],
'sensitivity' : [0.8619, 0.8106, 0.8587],
'roc_auc' : [0.7568, 0.6374, 0.7621]})
# Transpose dataframe
summary_dft = summary_df.T
# Rename columns
summary_dft.columns = ['LR model (50 false neg wrd rmvd)','LR model (tm)', 'LR model (tm + wrd vec)']
display(summary_dft)_____no_output_____
</code>
### Recommendations (Part2)_____no_output_____From the model metric summaries, the model using topic distributions alone as feature vectors has the lowest performance scores (sensitivity and roc_auc). The addition of feature vectors from the count vectorizer improved model sensitivity and roc_auc. Model generalizability using LDA topic distributions has been demonstrated, though the best-performing model remains the production Logistic Regression model using count-vectorized word features. Nevertheless, the results are encouraging and could be experimented with further (some preliminary thoughts are listed under Future Work).
The approach applied in this project could work in general for similar NLP-based classifiers._____no_output_____### Future Work_____no_output_____Terrorism is a complex topic, covering politics, psychology, philosophy, military strategy, and more. The current model is very simplistic in that it classifies a terrorist attack mode as 'bomb' or 'non-bomb' based solely on one form of intel (the motive text). Additional sources or forms of intel are not included, nor are the political and social trends that could serve as supporting sources of intelligence.
Here are a few areas that I would like to revisit for future project extensions:
- source additional data to widen the perspective
- feature-engineer spatial and temporal aspects (e.g. attacks by region, attacks by decade)
- explore model performance using the Tfidf vectorizer and spaCy
- explore other classification models (only two models were explored here; time was split between studying the dataset variables and motive texts, longer-than-usual modeling times given the size of the dataset, and research on topic modeling (LDA) and spaCy)_____no_output_____---_____no_output_____
| {
"repository": "AngShengJun/dsiCapstone",
"path": "codes/P5.2 Topic Modeling.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 830693,
"hexsha": "cb731c5b740407b334a049ae428027b8d3a9f244",
"max_line_length": 273304,
"avg_line_length": 285.5596424888,
"alphanum_fraction": 0.8335811184
} |
# Notebook from RPGroup-PBoC/chann_cap
Path: src/image_analysis/20190816_O2_long_growth_test/20190816_data_comparison.ipynb
# Comparison of the data taken with a long adaptation time_____no_output_____(c) 2019 Manuel Razo. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT)
---_____no_output_____
<code>
import os
import glob
import re
# Our numerical workhorses
import numpy as np
import scipy as sp
import pandas as pd
# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
# Seaborn, useful for graphics
import seaborn as sns
# Import the project utils
import sys
sys.path.insert(0, '../../../')
import ccutils
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline
%config InlineBackend.figure_format = 'retina'
tmpdir = '../../tmp/'
datadir = '../../../data/csv_microscopy/'_____no_output_____# Set PBoC plotting format
ccutils.viz.set_plotting_style()
# Increase dpi
mpl.rcParams['figure.dpi'] = 110_____no_output_____
</code>
## Comparing the data_____no_output_____For this dataset, taken on `20190814`, I grew cells overnight on M9 media, the reason being that I wanted to make sure that the cells had no memory of ever having been in LB._____no_output_____
<code>
df_long = pd.read_csv('outdir/20190816_O2__M9_growth_test_microscopy.csv',
comment='#')
df_long[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()_____no_output_____
</code>
Now the rest of the datasets taken with the laser system_____no_output_____
<code>
# Read the tidy-data frame
files = glob.glob(datadir + '/*IPTG*csv')# + mwc_files
df_micro = pd.concat(pd.read_csv(f, comment='#') for f in files if 'Oid' not in f)
## Remove data sets that are ignored because of problems with the data quality
## NOTE: These data sets are kept in the repository for transparency, but they
## failed at one of our quality criteria
## (see README.txt file in microscopy folder)
ignore_files = [x for x in os.listdir('../../image_analysis/ignore_datasets/')
if 'microscopy' in x]
# Extract data from these files
ignore_dates = [int(x.split('_')[0]) for x in ignore_files]
# Remove these dates
df_micro = df_micro[~df_micro['date'].isin(ignore_dates)]
# Keep only the O2 operator
df_micro = df_micro[df_micro.operator == 'O2']
df_micro[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()/Users/razo/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
This is separate from the ipykernel package so we can avoid doing imports until
</code>
Let's now look at the O2 $\Delta lacI$ strain data. For this we first have to extract the mean autofluorescence value. First let's process the new data._____no_output_____
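For reference, the cells below quantify the noise as the standard deviation over the mean of the background-subtracted intensities,

$$\text{noise} = \frac{\mathrm{STD}\left(I - \langle I_{auto} \rangle\right)}{\left\langle I - \langle I_{auto} \rangle \right\rangle},$$

where $\langle I_{auto} \rangle$ is the mean autofluorescence intensity; this simply restates what the code computes._____no_output_____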
<code>
# Define names for columns in dataframe
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize data frame to save the noise
df_noise_long = pd.DataFrame(columns=names)
# Extract the mean autofluorescence
I_auto = df_long[df_long.rbs == 'auto'].intensity.mean()
# Extract the strain fluorescence measurements
strain_df_long = df_long[df_long.rbs == 'delta']
# Group the delta strain data by IPTG measurement
df_long_group = strain_df_long.groupby('IPTG_uM')
for inducer, df_long_inducer in df_long_group:
    # Append the required info
strain_info = [20190624, 0, df_long_inducer.operator.unique()[0],
df_long_inducer.binding_energy.unique()[0],
df_long_inducer.rbs.unique()[0],
df_long_inducer.repressors.unique()[0],
(df_long_inducer.intensity - I_auto).mean(),
(df_long_inducer.intensity - I_auto).std(ddof=1)]
# Check if the values are negative for very small noise
if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
# Compute the noise
strain_info.append(strain_info[-1] / strain_info[-2])
        # Convert to a pandas series to attach to the data frame
strain_info = pd.Series(strain_info, index=names)
        # Append the info to the data frame
df_noise_long = df_noise_long.append(strain_info,
ignore_index=True)
df_noise_long.head()_____no_output_____# group by date and by IPTG concentration
df_group = df_micro.groupby(['date'])
# Define names for columns in data frame
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize data frame to save the noise
df_noise_delta = pd.DataFrame(columns=names)
for date, data in df_group:
# Extract the mean autofluorescence
I_auto = data[data.rbs == 'auto'].intensity.mean()
# Extract the strain fluorescence measurements
strain_data = data[data.rbs == 'delta']
# Group data by IPTG measurement
data_group = strain_data.groupby('IPTG_uM')
for inducer, data_inducer in data_group:
        # Append the required info
strain_info = [date, inducer, data_inducer.operator.unique()[0],
data_inducer.binding_energy.unique()[0],
data_inducer.rbs.unique()[0],
data_inducer.repressors.unique()[0],
(data_inducer.intensity - I_auto).mean(),
(data_inducer.intensity - I_auto).std(ddof=1)]
# Check if the values are negative for very small noise
if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
# Compute the noise
strain_info.append(strain_info[-1] / strain_info[-2])
# Convert to a pandas series to attach to the dataframe
strain_info = pd.Series(strain_info, index=names)
# Append to the info to the data frame
df_noise_delta = df_noise_delta.append(strain_info,
ignore_index=True)
df_noise_delta.head()_____no_output_____
</code>
It seems that the noise is essentially the same for both illumination systems, ≈ 0.4-0.5.
Let's look at the ECDF of single-cell fluorescence values. For all measurements to be comparable we will plot the fold-change distribution. What this means is that we will extract the mean autofluorescence value and we will normalize by the mean intensity of the $\Delta lacI$ strain._____no_output_____
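In other words, for each cell we compute

$$\text{fold-change} = \frac{I - \langle I_{auto} \rangle}{\langle I_{\Delta} \rangle - \langle I_{auto} \rangle},$$

where $\langle I_{\Delta} \rangle$ is the mean intensity of the $\Delta lacI$ strain (again just a restatement of the code below)._____no_output_____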
<code>
# group laser data by date
df_group = df_micro.groupby('date')
colors = sns.color_palette('Blues', n_colors=len(df_group))
# Loop through dates
for j, (g, d) in enumerate(df_group):
# Extract mean autofluorescence
auto = d.loc[d.rbs == 'auto', 'intensity'].mean()
# Extract mean delta
delta = d.loc[d.rbs == 'delta', 'intensity'].mean()
# Keep only delta data
data = d[d.rbs == 'delta']
fold_change = (data.intensity - auto) / (delta - auto)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='.', color=colors[j],
alpha=0.3, label='')
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='24 hour', ms=3)
# Add fake plot for legend
plt.plot([], [], marker='.', color=colors[-1],
alpha=0.3, label='8 hour', lw=0)
# Label x axis
plt.xlabel('fold-change')
# Add legend
plt.legend()
# Label y axis of left plot
plt.ylabel('ECDF')
# Change limit
plt.xlim(right=3)
plt.savefig('outdir/ecdf_comparison.png', bbox_inches='tight')_____no_output_____
</code>
There is no difference whatsoever. Maybe it is not the memory of LB, but the memory of having been in a lag phase for quite a while._____no_output_____## Comparison with theoretical prediction._____no_output_____Let's compare these datasets with the theoretical prediction we obtained from the MaxEnt approach.
First we need to read the Lagrange multipliers to reconstruct the distribution._____no_output_____
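For reference, a maximum-entropy distribution constrained on a set of moments $\langle m^{x} p^{y} \rangle$ takes the exponential-family form

$$P(m, p) \propto \exp\left( \sum_{(x, y)} \lambda_{(x, y)} \, m^{x} p^{y} \right),$$

with one Lagrange multiplier $\lambda_{(x, y)}$ per constrained moment; the exponent pairs $(x, y)$ are parsed from the CSV column names below._____no_output_____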
<code>
# Define directory for MaxEnt data
maxentdir = '../../../data/csv_maxEnt_dist/'
# Read resulting values for the multipliers.
df_maxEnt = pd.read_csv(maxentdir + 'MaxEnt_Lagrange_mult_protein.csv')
df_maxEnt.head()_____no_output_____
</code>
Now let's define the necessary objects to build the distribution from these constraints obtained with the MaxEnt method._____no_output_____
<code>
# Extract protein moments in constraints
prot_mom = [x for x in df_maxEnt.columns if 'm0' in x]
# Define index of moments to be used in the computation
moments = [tuple(map(int, re.findall(r'\d+', s))) for s in prot_mom]
# Define sample space
mRNA_space = np.array([0])
protein_space = np.arange(0, 1.9E4)
# Extract values to be used
df_sample = df_maxEnt[(df_maxEnt.operator == 'O1') &
(df_maxEnt.repressor == 0) &
(df_maxEnt.inducer_uM == 0)]
# Select the Lagrange multipliers
lagrange_sample = df_sample.loc[:, [col for col in df_sample.columns
if 'lambda' in col]].values[0]
# Compute distribution from Lagrange multipliers values
Pp_maxEnt = ccutils.maxent.maxEnt_from_lagrange(mRNA_space,
protein_space,
lagrange_sample,
exponents=moments).T[0]
mean_p = np.sum(protein_space * Pp_maxEnt)_____no_output_____
</code>
Now we can compare both distributions._____no_output_____
<code>
# Define binstep for plot, meaning how often to plot
# an entry
binstep = 10
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='20 hour', ms=3)
# Plot MaxEnt results
plt.plot(protein_space[0::binstep] / mean_p, np.cumsum(Pp_maxEnt)[0::binstep],
drawstyle='steps', label='MaxEnt', lw=2)
# Add legend
plt.legend()
# Label axis
plt.ylabel('CDF')
plt.xlabel('fold-change')
plt.savefig('outdir/maxent_comparison.png', bbox_inches='tight')_____no_output_____
</code>
| {
"repository": "RPGroup-PBoC/chann_cap",
"path": "src/image_analysis/20190816_O2_long_growth_test/20190816_data_comparison.ipynb",
"matched_keywords": [
"single-cell"
],
"stars": 2,
"size": 168972,
"hexsha": "cb7364b05ec91ce7a75a1aba449dccfcc0c0f231",
"max_line_length": 78972,
"avg_line_length": 162.0057526366,
"alphanum_fraction": 0.8556506403
} |
# Notebook from QRussia/basics-of-quantum-computing-translate
Path: bronze/.ipynb_checkpoints/B03_One_Bit-checkpoint.ipynb
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>_____no_output_____<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $_____no_output_____<h2> One Bit </h2>
[Watch Lecture](https://youtu.be/kn53Qvl-h28)
In daily life, we use the decimal number system. It is also called the base-10 system, because we have 10 digits:
$ 0,~1,~2,~3,~4,~5,~6,~7,~8, \mbox{ and } 9 $.
In computer science, on the other hand, the widely used system is binary, which has only two digits:
$ 0 $ and $ 1 $.
A bit (or binary digit) is the basic unit of information used in computer science and information theory.
It can also be seen as the smallest "useful" memory unit, which has two states named 0 and 1.
At any moment, a bit can be in either state 0 or state 1._____no_output_____<h3> Four operators </h3>
How many different operators can be defined on a single bit?
<i>An operator, depending on the current state of the bit, updates the state of the bit (the result may be the same state).</i>
We can apply four different operators to a single bit:
<ol>
<li> Identity: $ I(0) = 0 $ and $ I(1) = 1 $ </li>
<li> Negation: $ NOT(0) = 1 $ and $ NOT(1) = 0 $ </li>
<li> Constant (Zero): $ ZERO(0) = 0 $ and $ ZERO(1) = 0 $ </li>
<li> Constant (One): $ ONE(0) = 1 $ and $ ONE(1) = 1 $ </li>
</ol>
The first operator is called IDENTITY, because it does not change the content/value of the bit.
The second operator is named NOT, because it negates (flips) the value of bit.
<i>Remark that 0 and 1 also refer to Boolean values False and True, respectively, and, False is the negation of True, and True is the negation of False.</i>
The third (resp., fourth) operator returns a constant value 0 (resp., 1), whatever the input is._____no_output_____<h3> Table representation </h3>
We can represent the transitions of each operator by a table:
$
I = \begin{array}{lc|cc}
& & initial & states \\
& & \mathbf{0} & \mathbf{1} \\ \hline
final & \mathbf{0} & \mbox{goes-to} & \emptyset \\
states & \mathbf{1} & \emptyset & \mbox{goes-to} \end{array} ,
$
where
- the header (first row) represents the initial values, and
- the first column represents the final values.
We can also define the transitions numerically:
- we use 1 if there is a transition between two values, and,
- we use 0 if there is no transition between two values.
$
I = \begin{array}{lc|cc}
& & initial & states \\
& & \mathbf{0} & \mathbf{1} \\ \hline
final & \mathbf{0} & 1 & 0 \\
states & \mathbf{1} & 0 & 1 \end{array}
$_____no_output_____The values in <b>bold</b> are the initial and final values of the bits. The non-bold values represent the transitions.
<ul>
    <li> The top-left non-bold 1 represents the transition $ 0 \rightarrow 0 $. </li>
    <li> The bottom-right non-bold 1 represents the transition $ 1 \rightarrow 1 $. </li>
<li> The top-right non-bold 0 means that there is no transition from 1 to 0. </li>
<li> The bottom-left non-bold 0 means that there is no transition from 0 to 1. </li>
</ul>
The reader may think of the values 1 and 0 as representing the transitions as True (On) and False (Off), respectively.
Similarly, we can represent the other operators as below:
$
NOT = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 0 & 1 \\
states & \mathbf{1} & 1 & 0 \end{array}
~~~
ZERO = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 1 & 1 \\
states & \mathbf{1} & 0 & 0 \end{array}
~~~
ONE = \begin{array}{lc|cc} & & initial & states \\ & & \mathbf{0} & \mathbf{1} \\ \hline final & \mathbf{0} & 0 & 0 \\
states & \mathbf{1} & 1 & 1 \end{array}
.
$_____no_output_____<h3> Task 1 </h3>
Convince yourself of the correctness of each table._____no_output_____<h3> Reversibility and Irreversibility </h3>
After applying the Identity or NOT operator, we can easily determine the initial value by checking the final value.
<ul>
<li> In the case of Identity operator, we simply say the same value. </li>
<li> In the case of NOT operator, we simply say the other value, i.e., if the final value is 0 (resp., 1), then we say 1 (resp., 0). </li>
</ul>
However, we cannot know the initial value by checking the final value after applying the ZERO or ONE operator.
Based on this observation, we can classify the operators into two types: <i>Reversible</i> and <i>Irreversible</i>.
<ul>
<li> If we can recover the initial value(s) from the final value(s), then the operator is called reversible like Identity and NOT operators. </li>
<li> If we cannot know the initial value(s) from the final value(s), then the operator is called irreversible like ZERO and ONE operators. </li>
</ul>
<b> This classification is important, as the quantum evolution operators are reversible </b> (as long as the system is closed).
The Identity and NOT operators are two fundamental quantum operators._____no_output_____
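As a quick sanity check (a NumPy sketch, not part of the original lesson), each operator can be written as a $2 \times 2$ transition matrix, and reversibility corresponds exactly to the matrix being invertible:_____no_output_____
<code>
import numpy as np

# The four single-bit operators as transition matrices
# (columns = initial state 0/1, rows = final state 0/1)
operators = {
    "I":    np.array([[1, 0], [0, 1]]),
    "NOT":  np.array([[0, 1], [1, 0]]),
    "ZERO": np.array([[1, 1], [0, 0]]),
    "ONE":  np.array([[0, 0], [1, 1]]),
}

for name, op in operators.items():
    # a transition matrix is reversible iff it is invertible
    reversible = abs(np.linalg.det(op)) > 1e-9
    print(f"{name} is {'reversible' if reversible else 'irreversible'}")_____no_output_____
</code>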
| {
"repository": "QRussia/basics-of-quantum-computing-translate",
"path": "bronze/.ipynb_checkpoints/B03_One_Bit-checkpoint.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 9574,
"hexsha": "cb7367bc14d9aa7f44c71f6f03ca4993e4e7ef21",
"max_line_length": 309,
"avg_line_length": 44.738317757,
"alphanum_fraction": 0.5382285356
} |
# Notebook from lukexyz/insultswordfight
Path: notebooks/03_aoe2_civgen.ipynb
<code>
# default_exp core_____no_output_____
</code>
# Few-shot Learning with GPT-J
> API details._____no_output_____
<code>
# export
import os
import pandas as pd_____no_output_____#hide
from nbdev.showdoc import *
import toml
s = toml.load("../.streamlit/secrets.toml", _dict=dict)_____no_output_____
</code>
Using `GPT_J` model API from [Nlpcloud](https://nlpcloud.io/home/token)_____no_output_____
<code>
import nlpcloud
client = nlpcloud.Client("gpt-j", s['nlpcloud_token'], gpu=True)_____no_output_____
</code>
_____no_output_____## Aoe2 Civ Builder
https://ageofempires.fandom.com/wiki/Civilizations_(Age_of_Empires_II)_____no_output_____
<code>
# example API call
generation = client.generation("""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: New Zealand Maori""",
max_length=250,
length_no_input=True,
end_sequence="###",
remove_input=True)
print('Civilisation: New Zealand Maori\n ', generation["generated_text"])Civilisation: New Zealand Maori
Specialty: Armoured archers with pikes
Unique unit: Warrior
Unique technologies: Weapons (Archers do 80% damage to pikemen)
Wonder: Rangia
Civilization bonuses: Archers do 80% damage to pikemen
Team bonuses: Enemies of archers cannot use siege techniques.
###
def create_input_string(civname):
return f"""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: {civname}"""
def generate_civ(civname, client):
"""
Creates input string and sends to nlpcloud for few-shot learning
"""
print(f'🌐 Generating New Civ: {civname} \n')
input_str = create_input_string(civname)
generation = client.generation(input_str,
max_length=250,
length_no_input=True,
end_sequence='###',
remove_input=True)
civgen = generation["generated_text"].strip('\n')
print(f"🛡️ **{civname}**\n{civgen}")
return civgen
_____no_output_____c = generate_civ(civname='New Zealand Maori', client=client)Generating New Civ: New Zealand Maori
🛡️ New Zealand Maori
Specialty: Maori archers
Unique unit: Nga Moeroa
Unique technologies: Ngati Waarawake
Unique technologies: Taniwha
Wonder: Hei-O-Te-Po
Civilization bonuses: Maori units are 15% better off water.
Civilisation bonuses: Maori workers do 20% better than their European counterparts.
Civilization bonuses: Maori archers do 50% better damage with arrows.
Team bonus: Maori units are at their best on dry land.
###
c = generate_civ(civname='Fijians', client=client)🌐 Generating New Civ: Fijians
🛡️ **Fijians**
Specialty: Specialized attack, long range combat, and shipbuilding
Unique unit: War canoe
Unique technologies: War canoes (+2 attack, +20% range), Shipwrights (+70% build ship time)
Wonder: Tanna or Makatea Islands
Civilization bonuses: Taino can fight from land and sea. Taino can build shipwrights at +70% of normal rate.
Team bonus: Taino build shipwrights at +75% of normal rate. - +10% of time at sea to build ships.
###
</code>
_____no_output_____
<code>
c = generate_civ(civname='Canadians', client=client)🌐 Generating New Civ: Canadians
🛡️ **Canadians**
Specialty: Infantry archers
Unique unit: Longbowman
Unique technologies: Native American
Wonder: Champlain's Memorial
Civilization bonuses: Wood cutters work 30% faster.
Team bonus: Native Americans can live in empty houses.
Civilization bonuses: Infantry units move +2 Lines of Sight.
Civilization bonuses: All buildings built in the Plains are built as Stone.
Civilization bonuses: Hauler unit costs -15% wood.
Civilization bonus: Woodcutters build Walls for free
Civilization bonuses: Cavalry and Lumberjacks work 50% faster.
Team bonus: Cavalry cost -15% wood.
Civilization bonuses: Cavalry cost 25% less Gold (starting in the Feudal Age).
Civilization bonuses: Herdables have -20% Movement.
Civilization bonuses: Lumberjacks build Walls for free (starting in the Feudal Age)
Civilization bonuses: Can convert enemy herdables to Wood.
Civilization bonuses: Castles cost -30% Gold.
###
c = generate_civ(civname='European Union', client=client)🌐 Generating New Civ: European Union
🛡️ **European Union**
Specialty: Infantry, artillery and cavalry
Unique unit: Arbalet
Unique technologies: Feudalism (+1 population for peasants to work fields)
Wonder: Chartres Cathedral
Civilization bonuses: Woodcutters work 15% faster.
Civilization bonuses: Siege Workshops do +20% damage.
Civilization bonuses: Infantry units move 20% faster.
Civilization bonuses: Artillery units do +20% Damage.
Team Bonus: All cavalry units have +20% movement.
###
c = generate_civ(civname='Dutch', client=client)🌐 Generating New Civ: Dutch
🛡️ **Dutch**
Specialty: Infantry and siege
Unique unit: Fluiters (Musketmen +4 range)
Unique technologies: Watermolen (The Water Mill produces food when it is destroyed)
Unique technologies: Fletemen (Musketmen +4 attack range)
Ultimate technology: Hoist (Lowered by -5 techcost when next to a catapult)
Master technology: Fluiters (Musketmen +4 range)
Wonder: Watermolen
Civilization bonuses: Wood-Cutters and Carpenters work 45% faster.
Team bonus: Siege works cost -50% logs.
###
c = generate_civ(civname='Star Wars Death Star', client=client)🌐 Generating New Civ: Star Wars Death Star
🛡️ **Star Wars Death Star**
Specialty: Rocket launchers and star destroyers
Unique unit: Heavy weapons teams
Unique technologies: Trench defenses (Rocket turrets have +40% HP)
Unique technologies: Heavy weapons teams (Rocket turrets and star destroyers fire 75% faster)
Wonder: X-Wing starfighters
Civilization bonuses: Heavy weapons teams have +10% Accuracy.
#Civilization: Wurrrgh
Specialty: The space dog.
Unique unit: Wurrrrh
Hoardable: Wurrrh (The space dog always has a horde of space hamsters with it.)
Units: Space hamsters
Team bonus: Space hamsters are 10% more accurate for all attacks.
Specialist: Space hamsters
###
</code>
| {
"repository": "lukexyz/insultswordfight",
"path": "notebooks/03_aoe2_civgen.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 7,
"size": 462254,
"hexsha": "cb73d94554f4a2816161d00953424e7c2def5866",
"max_line_length": 357176,
"avg_line_length": 1097.9904988124,
"alphanum_fraction": 0.9550117468
} |
# Notebook from urviyi/artificial-intelligence
Path: Projects/4_HMM Tagger/HMM Tagger.ipynb
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!_____no_output_____<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>_____no_output_____<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>_____no_output_____### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger_____no_output_____
<code>
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
import helpers, tests
%autoreload 1The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution_____no_output_____
</code>
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```_____no_output_____
<code>
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
</code>
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
    stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
    __len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 5  # there are a total of five (word, tag) samples over both sentences
len(subset) == 2 # because there are two sentences
```
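Since `__iter__()` yields `(sentence_key, Sentence())` pairs, you can also traverse the corpus directly; a one-line sketch using the `data` object loaded above:
```python
# peek at the first (key, Sentence) pair produced by the iterator
for key, sentence in data:
    print(key, sentence.words, sentence.tags)
    break
```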
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>_____no_output_____
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the corresponding tags named `tags`._____no_output_____
<code>
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
</code>
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the list of unique tags via `Dataset.tagset`._____no_output_____
<code>
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
</code>
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset._____no_output_____
<code>
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(100):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
Sentence 3: ('Such', 'an', 'instrument', 'is', 'expected', 'to', 'be', 'especially', 'useful', 'if', 'it', 'could', 'be', 'used', 'to', 'measure', 'the', 'elasticity', 'of', 'heavy', 'pastes', 'such', 'as', 'printing', 'inks', ',', 'paints', ',', 'adhesives', ',', 'molten', 'plastics', ',', 'and', 'bread', 'dough', ',', 'for', 'the', 'elasticity', 'is', 'related', 'to', 'those', 'various', 'properties', 'termed', '``', 'length', "''", ',', '``', 'shortness', "''", ',', '``', 'spinnability', "''", ',', 'etc.', ',', 'which', 'are', 'usually', 'judged', 'by', 'subjective', 'methods', 'at', 'present', '.')
Labels 3: ('PRT', 'DET', 'NOUN', 'VERB', 'VERB', 'PRT', 'VERB', 'ADV', 'ADJ', 'ADP', 'PRON', 'VERB', 'VERB', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'ADJ', 'ADP', 'VERB', 'NOUN', '.', 'NOUN', '.', 'NOUN', '.', 'ADJ', 'NOUN', '.', 'CONJ', 'NOUN', 'NOUN', '.', 'ADP', 'DET', 'NOUN', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'VERB', '.', 'NOUN', '.', '.', '.', 'NOUN', '.', '.', '.', 'NOUN', '.', '.', 'ADV', '.', 'DET', 'VERB', 'ADV', 'VERB', 'ADP', 'ADJ', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 4: ('My', 'future', 'plans', 'are', 'to', 'become', 'a', 'language', 'teacher', '.')
Labels 4: ('DET', 'ADJ', 'NOUN', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'NOUN', '.')
Sentence 5: ('We', 'ran', 'east', 'for', 'about', 'half', 'a', 'mile', 'before', 'we', 'turned', 'back', 'to', 'the', 'road', ',', 'panting', 'from', 'the', 'effort', 'and', 'soaked', 'with', 'sweat', '.')
Labels 5: ('PRON', 'VERB', 'NOUN', 'ADP', 'ADV', 'PRT', 'DET', 'NOUN', 'ADP', 'PRON', 'VERB', 'ADV', 'ADP', 'DET', 'NOUN', '.', 'VERB', 'ADP', 'DET', 'NOUN', 'CONJ', 'VERB', 'ADP', 'NOUN', '.')
Sentence 6: ('After', 'television', ',', '``', 'La', 'Dolce', 'Vita', "''", 'seems', 'as', 'harmless', 'as', 'a', 'Gray', 'Line', 'tour', 'of', 'North', 'Beach', 'at', 'night', '.')
Labels 6: ('ADP', 'NOUN', '.', '.', 'X', 'X', 'X', '.', 'VERB', 'ADV', 'ADJ', 'ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 7: ('It', 'would', 'give', 'him', 'an', 'opportunity', 'to', 'take', 'the', 'measure', 'of', 'his', 'chief', 'adversary', 'in', 'the', 'cold', 'war', ',', 'to', 'try', 'to', 'probe', 'Mr.', "Khrushchev's", 'intentions', 'and', 'to', 'make', 'clear', 'his', 'own', 'views', '.')
Labels 7: ('PRON', 'VERB', 'VERB', 'PRON', 'DET', 'NOUN', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'PRT', 'VERB', 'PRT', 'VERB', 'NOUN', 'NOUN', 'NOUN', 'CONJ', 'PRT', 'VERB', 'ADJ', 'DET', 'ADJ', 'NOUN', '.')
Sentence 8: ('She', 'had', 'to', 'move', 'in', 'some', 'direction', '--', 'any', 'direction', 'that', 'would', 'take', 'her', 'away', 'from', 'this', 'evil', 'place', '.')
Labels 8: ('PRON', 'VERB', 'PRT', 'VERB', 'ADP', 'DET', 'NOUN', '.', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', 'PRON', 'ADV', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 9: ('English', 'philosopher', 'Samuel', "Alexander's", 'debt', 'to', 'Wordsworth', 'and', 'Meredith', 'is', 'a', 'recent', 'interesting', 'example', ',', 'as', 'also', 'A.', 'N.', "Whitehead's", 'understanding', 'of', 'the', 'English', 'romantics', ',', 'chiefly', 'Shelley', 'and', 'Wordsworth', '.')
Labels 9: ('ADJ', 'NOUN', 'NOUN', 'NOUN', 'NOUN', 'ADP', 'NOUN', 'CONJ', 'NOUN', 'VERB', 'DET', 'ADJ', 'ADJ', 'NOUN', '.', 'ADP', 'ADV', 'NOUN', 'NOUN', 'NOUN', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADV', 'NOUN', 'CONJ', 'NOUN', '.')
Sentence 10: ('So', 'we', 'are', 'faced', 'with', 'a', 'vast', 'network', 'of', 'amorphous', 'entities', 'perpetuating', 'themselves', 'in', 'whatever', 'manner', 'they', 'can', ',', 'without', 'regard', 'to', 'the', 'needs', 'of', 'society', ',', 'controlling', 'society', 'and', 'forcing', 'upon', 'it', 'a', 'regime', 'representing', 'only', 'the', "corporation's", 'needs', 'for', 'survival', '.')
Labels 10: ('ADV', 'PRON', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'VERB', 'PRON', 'ADP', 'DET', 'NOUN', 'PRON', 'VERB', '.', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', '.', 'VERB', 'NOUN', 'CONJ', 'VERB', 'ADP', 'PRON', 'DET', 'NOUN', 'VERB', 'ADV', 'DET', 'NOUN', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 11: ('First', ',', 'we', 'can', 'encourage', 'responsibility', 'by', 'establishing', 'as', 'conditions', 'for', 'assistance', 'on', 'a', 'substantial', 'and', 'sustained', 'scale', 'the', 'definition', 'of', 'objectives', 'and', 'the', 'assessment', 'of', 'costs', '.')
Labels 11: ('ADV', '.', 'PRON', 'VERB', 'VERB', 'NOUN', 'ADP', 'VERB', 'ADP', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'ADJ', 'CONJ', 'VERB', 'NOUN', 'DET', 'NOUN', 'ADP', 'NOUN', 'CONJ', 'DET', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 12: ('Could', 'it', 'just', 'be', ',', 'Theresa', 'wondered', ',', 'that', 'Anne', 'had', 'understood', 'only', 'too', 'well', ',', 'and', 'that', 'George', 'all', 'along', 'was', 'extraordinary', 'only', 'in', 'the', 'degree', 'to', 'which', 'he', 'was', 'dull', '?', '?')
Labels 12: ('VERB', 'PRON', 'ADV', 'VERB', '.', 'NOUN', 'VERB', '.', 'ADP', 'NOUN', 'VERB', 'VERB', 'ADV', 'ADV', 'ADV', '.', 'CONJ', 'ADP', 'NOUN', 'PRT', 'ADV', 'VERB', 'ADJ', 'ADV', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'PRON', 'VERB', 'ADJ', '.', '.')
Sentence 13: ('Since', 'Af', 'and', 'P', 'divides', 'Af', 'for', 'Af', ',', 'we', 'have', 'Af', '.')
Labels 13: ('ADP', 'NOUN', 'CONJ', 'NOUN', 'VERB', 'NOUN', 'ADP', 'NOUN', '.', 'PRON', 'VERB', 'NOUN', '.')
Sentence 14: ('1', '.')
Labels 14: ('NUM', '.')
Sentence 15: ('In', 'upper', 'teen', 'Jewish', 'life', ',', 'the', 'non-college', 'group', 'tends', 'to', 'have', 'a', 'sense', 'of', 'marginality', '.')
Labels 15: ('ADP', 'ADJ', 'NOUN', 'ADJ', 'NOUN', '.', 'DET', 'NOUN', 'NOUN', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 16: ('One', 'tempest', 'was', 'stirred', 'up', 'last', 'March', 'when', 'Udall', 'announced', 'that', 'an', 'eight-and-a-half-foot', 'bronze', 'statue', 'of', 'William', 'Jennings', 'Bryan', ',', 'sculpted', 'by', 'the', 'late', 'Gutzon', 'Borglum', ',', 'would', 'be', 'sent', '``', 'on', 'indefinite', 'loan', "''", 'to', 'Salem', ',', 'Illinois', ',', "Bryan's", 'birthplace', '.')
Labels 16: ('NUM', 'NOUN', 'VERB', 'VERB', 'PRT', 'ADJ', 'NOUN', 'ADV', 'NOUN', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', 'ADP', 'NOUN', 'NOUN', 'NOUN', '.', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', '.', 'VERB', 'VERB', 'VERB', '.', 'ADP', 'ADJ', 'NOUN', '.', 'ADP', 'NOUN', '.', 'NOUN', '.', 'NOUN', 'NOUN', '.')
Sentence 17: ('You', 'killed', 'him', ',', "didn't", 'you', "''", '?', '?')
Labels 17: ('PRON', 'VERB', 'PRON', '.', 'VERB', 'PRON', '.', '.', '.')
Sentence 18: ('In', 'the', 'first', 'case', 'the', 'fixed', 'elements', 'within', 'each', 'pencil', 'are', 'the', 'multiple', 'secant', 'and', 'the', 'line', 'joining', 'the', 'vertex', ',', 'P', ',', 'to', 'the', 'intersection', 'of', '**zg', 'and', 'the', 'plane', 'of', 'the', 'pencil', 'which', 'does', 'not', 'lie', 'on', 'the', 'multiple', 'secant', '.')
Labels 18: ('ADP', 'DET', 'ADJ', 'NOUN', 'DET', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', 'VERB', 'DET', 'ADJ', 'NOUN', 'CONJ', 'DET', 'NOUN', 'VERB', 'DET', 'NOUN', '.', 'NOUN', '.', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', 'CONJ', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'DET', 'VERB', 'ADV', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 19: ('Before', 'you', 'let', 'loose', 'a', 'howl', 'saying', 'we', 'announced', 'its', 'coming', ',', 'not', 'once', 'but', 'several', 'times', ',', 'indeed', 'we', 'did', '.')
Labels 19: ('ADP', 'PRON', 'VERB', 'ADJ', 'DET', 'NOUN', 'VERB', 'PRON', 'VERB', 'DET', 'NOUN', '.', 'ADV', 'ADV', 'CONJ', 'ADJ', 'NOUN', '.', 'ADV', 'PRON', 'VERB', '.')
Sentence 20: ('``', 'Either', 'that', 'or', 'a', 'veterinarian', "''", '.')
Labels 20: ('.', 'CONJ', 'DET', 'CONJ', 'DET', 'NOUN', '.', '.')
Sentence 21: ('Mr.', 'Kennan', 'takes', 'careful', 'account', 'of', 'every', 'mitigating', 'circumstance', 'in', 'recalling', 'the', 'historical', 'atmosphere', 'in', 'which', 'mistaken', 'decisions', 'were', 'taken', '.')
Labels 21: ('NOUN', 'NOUN', 'VERB', 'ADJ', 'NOUN', 'ADP', 'DET', 'VERB', 'NOUN', 'ADP', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'VERB', 'NOUN', 'VERB', 'VERB', '.')
Sentence 22: ('In', '1957', 'the', 'social-economic', 'approach', 'to', 'European', 'integration', 'was', 'capped', 'by', 'the', 'formation', 'among', '``', 'the', 'Six', "''", 'of', 'a', 'tariff-free', 'European', 'Common', 'Market', ',', 'and', 'Euratom', 'for', 'cooperation', 'in', 'the', 'development', 'of', 'atomic', 'energy', '.')
Labels 22: ('ADP', 'NUM', 'DET', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'VERB', 'VERB', 'ADP', 'DET', 'NOUN', 'ADP', '.', 'DET', 'NUM', '.', 'ADP', 'DET', 'ADJ', 'ADJ', 'ADJ', 'NOUN', '.', 'CONJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
Sentence 23: ('This', 'leads', 'one', 'to', 'conclude', ',', 'as', 'you', 'have', ',', 'that', 'there', 'is', 'inevitably', 'more', 'prestige', 'in', 'a', 'management', 'position', 'in', 'the', 'minds', 'of', 'our', 'people', "''", '.')
Labels 23: ('DET', 'VERB', 'NOUN', 'PRT', 'VERB', '.', 'ADP', 'PRON', 'VERB', '.', 'ADP', 'PRT', 'VERB', 'ADV', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', '.', '.')
Sentence 24: ('``', 'I', 'have', 'to', 'believe', 'it', "''", '.')
Labels 24: ('.', 'PRON', 'VERB', 'PRT', 'VERB', 'PRON', '.', '.')
Sentence 25: ('But', 'then', ',', 'when', 'you', 'stuck', 'things', 'into', 'the', 'holes', ',', 'why', "didn't", 'they', 'come', 'right', 'out', 'again', '?', '?')
Labels 25: ('CONJ', 'ADV', '.', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'ADV', 'VERB', 'PRON', 'VERB', 'ADV', 'PRT', 'ADV', '.', '.')
Sentence 26: ('But', 'no', 'art', 'at', 'all', 'was', 'born', 'of', 'the', 'art', 'effort', 'in', 'the', 'early', 'movies', '.')
Labels 26: ('CONJ', 'DET', 'NOUN', 'ADP', 'PRT', 'VERB', 'VERB', 'ADP', 'DET', 'NOUN', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 27: ('``', 'To', 'me', "you'll", 'always', 'be', 'the', 'girl', "o'", 'my', 'dreams', ',', "an'", 'the', 'sweetest', 'flower', 'that', 'grows', "''", '.')
Labels 27: ('.', 'ADP', 'PRON', 'PRT', 'ADV', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'CONJ', 'DET', 'ADJ', 'NOUN', 'DET', 'VERB', '.', '.')
Sentence 28: ('He', 'held', 'his', 'elbows', 'away', 'from', 'his', 'body', ',', 'and', 'the', 'little', 'sweet', 'potato', 'trilled', 'neatly', 'and', 'sweetly', 'as', 'he', 'tickled', 'its', 'tune-belly', '.')
Labels 28: ('PRON', 'VERB', 'DET', 'NOUN', 'ADV', 'ADP', 'DET', 'NOUN', '.', 'CONJ', 'DET', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'ADV', 'CONJ', 'ADV', 'ADP', 'PRON', 'VERB', 'DET', 'NOUN', '.')
Sentence 29: ('The', 'jury', 'also', 'commented', 'on', 'the', 'Fulton', "ordinary's", 'court', 'which', 'has', 'been', 'under', 'fire', 'for', 'its', 'practices', 'in', 'the', 'appointment', 'of', 'appraisers', ',', 'guardians', 'and', 'administrators', 'and', 'the', 'awarding', 'of', 'fees', 'and', 'compensation', '.')
Labels 29: ('DET', 'NOUN', 'ADV', 'VERB', 'ADP', 'DET', 'NOUN', 'NOUN', 'NOUN', 'DET', 'VERB', 'VERB', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', '.', 'NOUN', 'CONJ', 'NOUN', 'CONJ', 'DET', 'NOUN', 'ADP', 'NOUN', 'CONJ', 'NOUN', '.')
Sentence 30: ('I', "didn't", 'want', 'to', 'stir', 'things', 'up', '.')
Labels 30: ('PRON', 'VERB', 'VERB', 'PRT', 'VERB', 'NOUN', 'PRT', '.')
Sentence 31: ('Yet', 'there', 'was', 'some', 'precedent', 'for', 'it', '.')
Labels 31: ('ADV', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'PRON', '.')
Sentence 32: ('At', 'the', 'time', 'of', 'his', 'capture', 'Helion', 'had', 'on', 'his', 'person', 'a', 'sketchbook', 'he', 'had', 'bought', 'at', "Woolworth's", 'in', 'New', 'York', '.')
Labels 32: ('ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', 'VERB', 'ADP', 'DET', 'NOUN', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', 'ADP', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
Sentence 33: ('Nowhere', 'in', 'the', 'boat', 'do', 'the', 'frames', 'come', 'in', 'contact', 'with', 'the', 'plywood', 'planking', '.')
Labels 33: ('ADV', 'ADP', 'DET', 'NOUN', 'VERB', 'DET', 'NOUN', 'VERB', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', '.')
Sentence 34: ('Undoubtedly', ',', 'however', ',', 'the', 'significance', 'of', 'the', 'volume', 'is', 'greater', 'than', 'the', 'foregoing', 'paragraphs', 'suggest', '.')
Labels 34: ('ADV', '.', 'ADV', '.', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'VERB', 'ADJ', 'ADP', 'DET', 'VERB', 'NOUN', 'VERB', '.')
Sentence 35: ('Laguerre', 'Hanover', '(', '(', 'Tar', 'Heel-Lotus', 'Hanover', ')', ',', '2:30.3-:36.1', ';', ';')
Labels 35: ('NOUN', 'NOUN', '.', '.', 'NOUN', 'NOUN', 'NOUN', '.', '.', 'NUM', '.', '.')
Sentence 36: ('I', 'looked', 'from', 'her', 'to', 'him', '.')
Labels 36: ('PRON', 'VERB', 'ADP', 'PRON', 'ADP', 'PRON', '.')
Sentence 37: ('The', 'radioactivity', 'of', 'fallout', 'decays', 'rapidly', 'at', 'first', '.')
Labels 37: ('DET', 'NOUN', 'ADP', 'NOUN', 'VERB', 'ADV', 'ADP', 'ADV', '.')
Sentence 38: ('It', 'was', 'the', '7th', 'Cavalry', 'whose', 'troopers', 'were', 'charged', 'with', 'guarding', 'the', 'Imperial', 'Palace', 'of', 'the', 'Emperor', '.')
Labels 38: ('PRON', 'VERB', 'DET', 'ADJ', 'NOUN', 'DET', 'NOUN', 'VERB', 'VERB', 'ADP', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 39: ('He', 'said', ',', '``', 'We', 'had', 'a', 'good', 'time', 'tonight', ',', "didn't", 'we', ',', 'Earl', "''", '?', '?')
Labels 39: ('PRON', 'VERB', '.', '.', 'PRON', 'VERB', 'DET', 'ADJ', 'NOUN', 'NOUN', '.', 'VERB', 'PRON', '.', 'NOUN', '.', '.', '.')
Sentence 40: ('As', 'notable', 'examples', 'of', 'this', 'abuse', ',', 'he', 'quotes', 'passages', 'from', 'the', 'Examiner', ',', '``', 'that', 'Destroyer', 'of', 'all', 'things', "''", ',', 'and', 'The', 'Character', 'of', 'Richard', 'Steele', ',', 'which', 'he', 'here', 'attributes', 'to', 'Swift', '.')
Labels 40: ('ADP', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.', '.', 'DET', 'NOUN', 'ADP', 'PRT', 'NOUN', '.', '.', 'CONJ', 'DET', 'NOUN', 'ADP', 'NOUN', 'NOUN', '.', 'DET', 'PRON', 'ADV', 'VERB', 'ADP', 'NOUN', '.')
Sentence 41: ('The', 'truth', 'is', ',', 'though', ',', 'that', 'men', 'react', 'differently', 'to', 'different', 'treatment', '.')
Labels 41: ('DET', 'NOUN', 'VERB', '.', 'ADV', '.', 'ADP', 'NOUN', 'VERB', 'ADV', 'ADP', 'ADJ', 'NOUN', '.')
Sentence 42: ('Robert', 'E.', 'Lee', 'represented', 'the', 'dignity', 'needed', 'by', 'a', 'rebelling', 'confederacy', '.')
Labels 42: ('NOUN', 'NOUN', 'NOUN', 'VERB', 'DET', 'NOUN', 'VERB', 'ADP', 'DET', 'VERB', 'NOUN', '.')
Sentence 43: ('Heat', 'transfer', 'by', 'molecular', 'conduction', 'as', 'well', 'as', 'by', 'radiation', 'from', 'the', 'arc', 'column', '.')
Labels 43: ('NOUN', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'ADV', 'ADV', 'ADP', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', '.')
Sentence 44: ('Since', 'accurate', 'base', 'maps', 'are', 'necessary', 'for', 'any', 'planning', 'program', ',', 'the', 'first', 'step', 'taken', 'by', 'the', 'planning', 'division', 'to', 'implement', 'the', 'long-range', 'state', 'plan', 'has', 'been', 'to', 'prepare', 'two', 'series', 'of', 'base', 'maps', '--', 'one', 'at', 'a', 'scale', 'of', '1', 'inch', 'to', 'a', 'mile', ',', 'and', 'the', 'second', 'a', 'series', 'of', '26', 'sheets', 'at', 'a', 'scale', 'of', '1', 'inch', 'to', '2000', 'feet', ',', 'covering', 'the', 'entire', 'state', '.')
Labels 44: ('ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'ADJ', 'ADP', 'DET', 'VERB', 'NOUN', '.', 'DET', 'ADJ', 'NOUN', 'VERB', 'ADP', 'DET', 'VERB', 'NOUN', 'PRT', 'VERB', 'DET', 'NOUN', 'NOUN', 'NOUN', 'VERB', 'VERB', 'PRT', 'VERB', 'NUM', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.', 'NUM', 'ADP', 'DET', 'NOUN', 'ADP', 'NUM', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'CONJ', 'DET', 'ADJ', 'DET', 'NOUN', 'ADP', 'NUM', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NUM', 'NOUN', 'ADP', 'NUM', 'NOUN', '.', 'VERB', 'DET', 'ADJ', 'NOUN', '.')
Sentence 45: ('Hong', 'Kong', ',', 'India', 'and', 'Pakistan', 'have', 'been', 'limiting', 'exports', 'of', 'certain', 'types', 'of', 'textiles', 'to', 'Britain', 'for', 'several', 'years', 'under', 'the', '``', 'Lancashire', 'Pact', "''", '.')
Labels 45: ('NOUN', 'NOUN', '.', 'NOUN', 'CONJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'ADP', 'DET', '.', 'NOUN', 'NOUN', '.', '.')
Sentence 46: ('One', 'day', 'the', 'dogs', 'of', 'Ireland', 'will', 'do', 'that', 'too', 'and', 'perhaps', 'also', 'the', 'pigs', "''", '.')
Labels 46: ('NUM', 'NOUN', 'DET', 'NOUN', 'ADP', 'NOUN', 'VERB', 'VERB', 'DET', 'ADV', 'CONJ', 'ADV', 'ADV', 'DET', 'NOUN', '.', '.')
Sentence 47: ('The', 'big', 'fans', 'were', 'going', ',', 'drawing', 'from', 'the', 'large', 'room', 'the', 'remnants', 'of', 'stale', 'smoke', 'which', 'drifted', 'about', 'in', 'pale', 'strata', 'underneath', 'the', 'ceiling', '.')
Labels 47: ('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'DET', 'NOUN', 'ADP', 'ADJ', 'NOUN', 'DET', 'VERB', 'ADV', 'ADP', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 48: ('He', 'had', 'obtained', 'and', 'provisioned', 'a', 'veteran', 'ship', 'called', 'the', 'Discovery', 'and', 'had', 'recruited', 'a', 'crew', 'of', 'twenty-one', ',', 'the', 'largest', 'he', 'had', 'ever', 'commanded', '.')
Labels 48: ('PRON', 'VERB', 'VERB', 'CONJ', 'VERB', 'DET', 'ADJ', 'NOUN', 'VERB', 'DET', 'NOUN', 'CONJ', 'VERB', 'VERB', 'DET', 'NOUN', 'ADP', 'NUM', '.', 'DET', 'ADJ', 'PRON', 'VERB', 'ADV', 'VERB', '.')
Sentence 49: ('The', 'nuclei', 'of', 'these', 'fibers', ',', 'as', 'is', 'shown', 'in', 'Figures', '3', 'and', '4', ',', 'showed', 'remarkable', 'proliferation', 'and', 'were', 'closely', 'approximated', ',', 'forming', 'a', 'chainlike', 'structure', 'at', 'either', 'the', 'center', 'or', 'the', 'periphery', 'of', 'the', 'fiber', '.')
Labels 49: ('DET', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'VERB', 'VERB', 'ADP', 'NOUN', 'NUM', 'CONJ', 'NUM', '.', 'VERB', 'ADJ', 'NOUN', 'CONJ', 'VERB', 'ADV', 'VERB', '.', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'CONJ', 'DET', 'NOUN', 'CONJ', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 50: ('The', 'presence', 'of', 'normally', 'occurring', 'bronchial', 'artery-pulmonary', 'artery', 'anastomoses', 'was', 'first', 'noted', 'in', '1721', 'by', 'Ruysch', ',', 'and', 'thereafter', 'by', 'many', 'others', '.')
Labels 50: ('DET', 'NOUN', 'ADP', 'ADV', 'VERB', 'ADJ', 'ADJ', 'NOUN', 'NOUN', 'VERB', 'ADV', 'VERB', 'ADP', 'NUM', 'ADP', 'NOUN', '.', 'CONJ', 'ADV', 'ADP', 'ADJ', 'NOUN', '.')
Sentence 51: ('Much', 'information', 'has', 'been', 'gathered', 'relative', 'to', 'quantitative', 'sampling', 'and', 'assesment', 'techniques', '.')
Labels 51: ('ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADJ', 'ADP', 'ADJ', 'NOUN', 'CONJ', 'NOUN', 'NOUN', '.')
Sentence 52: ('But', 'most', 'disturbing', 'of', 'all', 'were', 'the', 'advisers', 'he', 'called', 'to', 'sit', 'with', 'him', 'in', 'the', 'Palace', ';', ';')
Labels 52: ('CONJ', 'ADV', 'ADJ', 'ADP', 'PRT', 'VERB', 'DET', 'NOUN', 'PRON', 'VERB', 'PRT', 'VERB', 'ADP', 'PRON', 'ADP', 'DET', 'NOUN', '.', '.')
Sentence 53: ('Precise',)
Labels 53: ('ADJ',)
Sentence 54: ('This', 'may', 'both', 'divert', 'the', 'attention', 'of', 'the', 'uninitiate', 'and', 'cause', 'confusion', 'for', 'the', 'more', 'knowledgeable', '.')
Labels 54: ('DET', 'VERB', 'DET', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'CONJ', 'VERB', 'NOUN', 'ADP', 'DET', 'ADV', 'ADJ', '.')
Sentence 55: ('6', '.')
Labels 55: ('NUM', '.')
Sentence 56: (')', 'Now', 'Af', 'is', 'a', 'diagonalizable', 'operator', 'which', 'is', 'also', 'nilpotent', '.')
Labels 56: ('.', 'ADV', 'NOUN', 'VERB', 'DET', 'ADJ', 'NOUN', 'DET', 'VERB', 'ADV', 'ADJ', '.')
Sentence 57: ('He', "don't", 'care', 'about', 'anybody', '.')
Labels 57: ('PRON', 'VERB', 'VERB', 'ADP', 'NOUN', '.')
Sentence 58: ('``', 'We', 'have', 'witnessed', 'in', 'this', 'campaign', 'the', 'effort', 'to', 'project', 'Mr.', 'Mitchell', 'as', 'the', 'image', 'of', 'a', 'unity', 'candidate', 'from', 'Washington', '.')
Labels 58: ('.', 'PRON', 'VERB', 'VERB', 'ADP', 'DET', 'NOUN', 'DET', 'NOUN', 'PRT', 'VERB', 'NOUN', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 59: ('This', 'was', 'the', 'crassest', 'kind', 'of', 'materialism', 'and', 'they', ',', 'the', 'Artists', ',', 'would', 'have', 'no', 'truck', 'with', 'it', '.')
Labels 59: ('DET', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'CONJ', 'PRON', '.', 'DET', 'NOUN', '.', 'VERB', 'VERB', 'DET', 'NOUN', 'ADP', 'PRON', '.')
Sentence 60: ("It's", 'easy', 'to', 'see', 'why', '.')
Labels 60: ('PRT', 'ADJ', 'PRT', 'VERB', 'ADV', '.')
Sentence 61: ('If', 'you', 'wish', 'to', 'budget', 'closely', 'on', 'transportation', ',', 'saving', 'your', 'extra', 'dollars', 'to', 'indulge', 'in', 'luxuries', ',', 'one', 'agency', 'lists', 'the', 'small', 'Fiat', '500', 'at', 'only', '$1.26', 'a', 'day', 'plus', '$.03', 'a', 'kilometer', 'and', 'the', 'Fiat', '2100', 'Station', 'Wagon', ',', 'seating', 'six', ',', 'at', 'just', '$1.10', 'a', 'day', 'and', '$.105', 'a', 'kilometer', '.')
Labels 61: ('ADP', 'PRON', 'VERB', 'PRT', 'VERB', 'ADV', 'ADP', 'NOUN', '.', 'VERB', 'DET', 'ADJ', 'NOUN', 'PRT', 'VERB', 'ADP', 'NOUN', '.', 'NUM', 'NOUN', 'VERB', 'DET', 'ADJ', 'NOUN', 'NUM', 'ADP', 'ADV', 'NOUN', 'DET', 'NOUN', 'CONJ', 'NOUN', 'DET', 'NOUN', 'CONJ', 'DET', 'NOUN', 'NUM', 'NOUN', 'NOUN', '.', 'VERB', 'NUM', '.', 'ADP', 'ADV', 'NOUN', 'DET', 'NOUN', 'CONJ', 'NOUN', 'DET', 'NOUN', '.')
Sentence 62: ('``', 'The', "kid's", 'froze', 'good', '.')
Labels 62: ('.', 'DET', 'PRT', 'VERB', 'ADV', '.')
Sentence 63: ('But', 'how', 'can', 'one', 'figure', 'symbolize', 'both', '?', '?')
Labels 63: ('CONJ', 'ADV', 'VERB', 'NUM', 'VERB', 'VERB', 'DET', '.', '.')
Sentence 64: ('Thus', 'when', 'Premier', 'Khrushchev', 'intimated', 'even', 'before', 'inauguration', 'that', 'he', 'hoped', 'for', 'an', 'early', 'meeting', 'with', 'the', 'new', 'President', ',', 'Mr.', 'Kennedy', 'was', 'confronted', 'with', 'a', 'delicate', 'problem', '.')
Labels 64: ('ADV', 'ADV', 'NOUN', 'NOUN', 'VERB', 'ADV', 'ADP', 'NOUN', 'ADP', 'PRON', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'NOUN', 'NOUN', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 65: ('These', 'are', 'as', 'follows', ':', '(', '1', ')', 'field', 'work', 'procedures', '.')
Labels 65: ('DET', 'VERB', 'ADV', 'VERB', '.', '.', 'NUM', '.', 'NOUN', 'NOUN', 'NOUN', '.')
Sentence 66: ('Felix', 'Kopstein', 'states', 'that', '``', 'when', 'the', 'snake', 'reaches', 'its', 'maturity', 'it', 'has', 'already', 'reached', 'about', 'its', 'maximal', 'length', "''", ',', 'but', 'goes', 'on', 'to', 'cite', 'the', 'reticulate', 'python', 'as', 'an', 'exception', ',', 'with', 'maximum', 'length', 'approximately', 'three', 'times', 'that', 'at', 'maturity', '.')
Labels 66: ('NOUN', 'NOUN', 'VERB', 'ADP', '.', 'ADV', 'DET', 'NOUN', 'VERB', 'DET', 'NOUN', 'PRON', 'VERB', 'ADV', 'VERB', 'ADV', 'DET', 'ADJ', 'NOUN', '.', '.', 'CONJ', 'VERB', 'PRT', 'PRT', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', 'ADV', 'NUM', 'NOUN', 'DET', 'ADP', 'NOUN', '.')
Sentence 67: ('He', 'went', 'around', 'the', 'corner', 'and', 'parked', ',', 'turning', 'off', 'his', 'lights', 'and', 'motor', '.')
Labels 67: ('PRON', 'VERB', 'ADP', 'DET', 'NOUN', 'CONJ', 'VERB', '.', 'VERB', 'PRT', 'DET', 'NOUN', 'CONJ', 'NOUN', '.')
Sentence 68: ('If', 'she', 'could', 'not', 'take', 'the', 'children', 'out', 'of', 'this', 'section', ',', 'at', 'least', 'she', 'could', 'take', 'other', 'children', 'out', 'of', 'their', 'countries', 'and', 'put', 'them', 'on', 'the', 'farms', '.')
Labels 68: ('ADP', 'PRON', 'VERB', 'ADV', 'VERB', 'DET', 'NOUN', 'ADP', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'ADJ', 'PRON', 'VERB', 'VERB', 'ADJ', 'NOUN', 'ADP', 'ADP', 'DET', 'NOUN', 'CONJ', 'VERB', 'PRON', 'ADP', 'DET', 'NOUN', '.')
Sentence 69: ('Under', 'its', 'plan', 'Du', 'Pont', 'would', 'retain', 'its', 'General', 'Motors', 'shares', 'but', 'be', 'required', 'to', 'pass', 'on', 'to', 'its', 'stockholders', 'the', 'right', 'to', 'vote', 'those', 'shares', '.')
Labels 69: ('ADP', 'DET', 'NOUN', 'NOUN', 'NOUN', 'VERB', 'VERB', 'DET', 'NOUN', 'NOUN', 'NOUN', 'CONJ', 'VERB', 'VERB', 'PRT', 'VERB', 'PRT', 'ADP', 'DET', 'NOUN', 'DET', 'NOUN', 'PRT', 'VERB', 'DET', 'NOUN', '.')
Sentence 70: ('his', 'eyes', 'were', 'black', 'and', 'deep-set', ',', 'and', 'expressionless', '.')
Labels 70: ('DET', 'NOUN', 'VERB', 'ADJ', 'CONJ', 'ADJ', '.', 'CONJ', 'ADJ', '.')
Sentence 71: ('It', 'is', 'one', 'of', 'the', 'very', 'few', ',', 'if', 'not', 'the', 'only', 'surviving', 'bridge', 'of', 'its', 'type', 'to', 'serve', 'a', 'main', 'artery', 'of', 'the', 'U.S.', 'highway', 'system', ',', 'thus', 'it', 'is', 'far', 'more', 'than', 'a', 'relic', 'of', 'the', 'horse', 'and', 'buggy', 'days', '.')
Labels 71: ('PRON', 'VERB', 'NUM', 'ADP', 'DET', 'ADV', 'ADJ', '.', 'ADP', 'ADV', 'DET', 'ADJ', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', 'PRT', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', 'NOUN', 'NOUN', '.', 'ADV', 'PRON', 'VERB', 'ADV', 'ADJ', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'CONJ', 'NOUN', 'NOUN', '.')
Sentence 72: ('The', 'promise', 'that', 'the', 'lion', 'and', 'the', 'lamb', 'will', 'lie', 'down', 'together', 'was', 'given', 'in', 'the', 'future', 'tense', '.')
Labels 72: ('DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'CONJ', 'DET', 'NOUN', 'VERB', 'VERB', 'PRT', 'ADV', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 73: ('``', 'But', "that's", 'what', 'he', 'told', 'me', '.')
Labels 73: ('.', 'CONJ', 'PRT', 'DET', 'PRON', 'VERB', 'PRON', '.')
Sentence 74: ('The', 'common', 'spices', ',', 'flavorings', ',', 'and', 'condiments', 'make', 'up', 'this', 'group', '.')
Labels 74: ('DET', 'ADJ', 'NOUN', '.', 'NOUN', '.', 'CONJ', 'NOUN', 'VERB', 'PRT', 'DET', 'NOUN', '.')
Sentence 75: ('The', 'values', 'and', 'talents', 'which', 'made', 'the', 'tile', 'and', 'the', 'dome', ',', 'the', 'rug', ',', 'the', 'poem', 'and', 'the', 'miniature', ',', 'continue', 'in', 'certain', 'social', 'institutions', 'which', 'rise', 'above', 'the', 'ordinary', 'life', 'of', 'this', 'city', ',', 'as', 'the', 'great', 'buildings', 'rise', 'above', 'blank', 'walls', 'and', 'dirty', 'lanes', '.')
Labels 75: ('DET', 'NOUN', 'CONJ', 'NOUN', 'DET', 'VERB', 'DET', 'NOUN', 'CONJ', 'DET', 'NOUN', '.', 'DET', 'NOUN', '.', 'DET', 'NOUN', 'CONJ', 'DET', 'NOUN', '.', 'VERB', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'DET', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'DET', 'ADJ', 'NOUN', 'VERB', 'ADP', 'ADJ', 'NOUN', 'CONJ', 'ADJ', 'NOUN', '.')
Sentence 76: ('But', 'do', 'the', 'plays', 'deal', 'with', 'the', 'same', 'facets', 'of', 'experience', 'religion', 'must', 'also', 'deal', 'with', '?', '?')
Labels 76: ('CONJ', 'VERB', 'DET', 'NOUN', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'NOUN', 'VERB', 'ADV', 'VERB', 'ADP', '.', '.')
Sentence 77: ('Called', 'Perennian', ',', 'to', 'indicate', 'its', 'lasting', ',', 'good', 'today', 'and', 'tomorrow', 'quality', ',', 'the', 'collection', 'truly', 'avoids', 'the', 'monotony', 'of', 'identical', 'pieces', '.')
Labels 77: ('VERB', 'ADJ', '.', 'PRT', 'VERB', 'DET', 'VERB', '.', 'ADJ', 'NOUN', 'CONJ', 'NOUN', 'NOUN', '.', 'DET', 'NOUN', 'ADV', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
Sentence 78: ('It', 'had', 'a', "gourmet's", 'corner', '(', 'instead', 'of', 'a', 'kitchen', ')', ',', 'a', 'breakfast', 'room', ',', 'a', 'luncheon', 'room', ',', 'a', 'dining', 'room', ',', 'a', 'sitting', 'room', ',', 'a', 'room', 'for', 'standing', 'up', ',', 'a', 'party', 'room', ',', 'dressing', 'rooms', 'for', 'everybody', ',', 'even', 'a', 'room', 'for', 'mud', '.')
Labels 78: ('PRON', 'VERB', 'DET', 'NOUN', 'NOUN', '.', 'ADV', 'ADP', 'DET', 'NOUN', '.', '.', 'DET', 'NOUN', 'NOUN', '.', 'DET', 'NOUN', 'NOUN', '.', 'DET', 'VERB', 'NOUN', '.', 'DET', 'VERB', 'NOUN', '.', 'DET', 'NOUN', 'ADP', 'VERB', 'PRT', '.', 'DET', 'NOUN', 'NOUN', '.', 'VERB', 'NOUN', 'ADP', 'NOUN', '.', 'ADV', 'DET', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 79: ('``', 'Few', 'crank', 'calls', "''", ',', 'McFeeley', 'said', '.')
Labels 79: ('.', 'ADJ', 'NOUN', 'NOUN', '.', '.', 'NOUN', 'VERB', '.')
Sentence 80: ('If', 'he', 'does', ',', "it's", 'still', 'better', 'than', 'an', 'even', 'chance', 'he', "won't", 'notice', 'the', 'transposition', 'of', 'the', 'numbers', ',', 'and', 'if', 'he', 'should', 'notice', 'it', ',', 'the', 'thing', 'can', 'be', 'passed', 'off', 'as', 'an', 'honest', 'mistake', '.')
Labels 80: ('ADP', 'PRON', 'VERB', '.', 'PRT', 'ADV', 'ADJ', 'ADP', 'DET', 'ADJ', 'NOUN', 'PRON', 'VERB', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'CONJ', 'ADP', 'PRON', 'VERB', 'VERB', 'PRON', '.', 'DET', 'NOUN', 'VERB', 'VERB', 'VERB', 'PRT', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 81: ('The', 'most', 'infamous', 'of', 'all', 'was', 'launched', 'by', 'the', 'explosion', 'of', 'the', 'island', 'of', 'Krakatoa', 'in', '1883', ';', ';')
Labels 81: ('DET', 'ADV', 'ADJ', 'ADP', 'PRT', 'VERB', 'VERB', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NUM', '.', '.')
Sentence 82: ('What', 'about', 'transfers', '?', '?')
Labels 82: ('DET', 'ADP', 'NOUN', '.', '.')
Sentence 83: ('This', 'angle', 'of', 'just', 'where', 'the', 'Orioles', 'can', 'look', 'for', 'improvement', 'this', 'year', 'is', 'an', 'interesting', 'one', '.')
Labels 83: ('DET', 'NOUN', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', 'ADP', 'NOUN', 'DET', 'NOUN', 'VERB', 'DET', 'ADJ', 'NUM', '.')
Sentence 84: (')',)
Labels 84: ('.',)
Sentence 85: ('Under', 'the', 'influence', 'of', 'marijuana', 'the', 'beatnik', 'comes', 'alive', 'within', 'and', 'experiences', 'a', 'wonderfully', 'enhanced', 'sense', 'of', 'self', 'as', 'if', 'he', 'had', 'discovered', 'the', 'open', 'sesame', 'to', 'the', 'universe', 'of', 'being', '.')
Labels 85: ('ADP', 'DET', 'NOUN', 'ADP', 'NOUN', 'DET', 'NOUN', 'VERB', 'ADJ', 'ADP', 'CONJ', 'VERB', 'DET', 'ADV', 'VERB', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'PRON', 'VERB', 'VERB', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'VERB', '.')
Sentence 86: ('She', 'asked', 'him', 'and', ',', 'laughing', ',', 'she', 'added', ',', '``', 'I', 'was', 'nervous', 'about', 'buying', 'a', 'book', 'with', 'a', 'title', 'like', 'that', ',', 'but', 'I', 'knew', "you'd", 'like', 'it', "''", '.')
Labels 86: ('PRON', 'VERB', 'PRON', 'CONJ', '.', 'VERB', '.', 'PRON', 'VERB', '.', '.', 'PRON', 'VERB', 'ADJ', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', '.', 'CONJ', 'PRON', 'VERB', 'PRT', 'VERB', 'PRON', '.', '.')
Sentence 87: ('At', 'the', 'same', 'moment', 'Wheeler', 'Fiske', 'fired', 'the', 'rifle', 'Mike', 'had', 'given', 'him', 'and', 'another', 'guerrilla', 'was', 'hit', '.')
Labels 87: ('ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', 'NOUN', 'VERB', 'DET', 'NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'CONJ', 'DET', 'NOUN', 'VERB', 'VERB', '.')
Sentence 88: ('``', 'And', 'what', 'makes', 'you', 'think', "you're", 'going', 'to', 'get', 'it', ',', 'pretty', 'boy', "''", '?', '?')
Labels 88: ('.', 'CONJ', 'DET', 'VERB', 'PRON', 'VERB', 'PRT', 'VERB', 'PRT', 'VERB', 'PRON', '.', 'ADJ', 'NOUN', '.', '.', '.')
Sentence 89: ('``', 'Certainly', ',', 'sir', "''", '.')
Labels 89: ('.', 'ADV', '.', 'NOUN', '.', '.')
Sentence 90: ('Later', ',', 'riding', 'in', 'for', 'some', 'lusty', 'enjoyment', 'of', 'the', 'liquor', 'and', 'professional', 'ladies', 'of', 'Cheyenne', ',', 'he', 'laid', 'claim', 'to', 'the', 'killing', 'with', 'the', 'vague', 'insinuations', 'he', 'made', '.')
Labels 90: ('ADV', '.', 'VERB', 'PRT', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'NOUN', '.', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', 'PRON', 'VERB', '.')
Sentence 91: ('It', 'was', 'said', 'that', 'the', 'Hetman', 'plotted', 'to', 'take', 'over', 'the', 'entire', 'Hearst', 'newspaper', 'empire', 'one', 'day', 'by', 'means', 'of', 'various', 'coups', ':', 'the', 'destruction', 'of', 'editors', 'who', 'tried', 'to', 'halt', 'his', 'course', ',', 'the', 'unfrocking', 'of', 'publishers', 'whose', 'mistakes', 'of', 'judgment', 'might', 'be', 'magnified', 'in', 'secret', 'reports', 'to', 'Mr.', 'Hearst', '.')
Labels 91: ('PRON', 'VERB', 'VERB', 'ADP', 'DET', 'NOUN', 'VERB', 'PRT', 'VERB', 'PRT', 'DET', 'ADJ', 'NOUN', 'NOUN', 'NOUN', 'NUM', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.', 'DET', 'NOUN', 'ADP', 'NOUN', 'PRON', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', '.', 'DET', 'NOUN', 'ADP', 'NOUN', 'DET', 'NOUN', 'ADP', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'NOUN', '.')
Sentence 92: ('Mama', 'always', 'felt', 'that', 'the', 'collection', 'symbolized', 'Mrs.', "Coolidge's", 'wish', 'for', 'a', 'little', 'girl', '.')
Labels 92: ('NOUN', 'ADV', 'VERB', 'ADP', 'DET', 'NOUN', 'VERB', 'NOUN', 'NOUN', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.')
Sentence 93: ('We', 'would', 'have', 'preferred', ',', 'however', ',', 'to', 'have', 'had', 'the', 'rest', 'of', 'the', 'orchestra', 'refrain', 'from', 'laughing', 'at', 'this', 'and', 'other', 'spots', 'on', 'the', 'recording', ',', 'since', 'it', 'mars', 'an', 'otherwise', 'sober', ',', 'if', 'not', 'lofty', ',', 'performance', '.')
Labels 93: ('PRON', 'VERB', 'VERB', 'VERB', '.', 'ADV', '.', 'PRT', 'VERB', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'VERB', 'ADP', 'VERB', 'ADP', 'DET', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'PRON', 'VERB', 'DET', 'ADV', 'ADJ', '.', 'ADP', 'ADV', 'ADJ', '.', 'NOUN', '.')
Sentence 94: ('Their', 'burgeoning', 'popularity', 'may', 'be', 'a', 'result', 'of', 'the', 'closing', 'of', 'the', '52nd', 'Street', 'burlesque', 'joints', ',', 'but', 'curiously', 'enough', 'their', 'atmosphere', 'is', 'almost', 'always', 'familial', '--', 'neighborhood', 'saloons', 'with', 'a', 'bit', 'of', 'epidermis', '.')
Labels 94: ('DET', 'VERB', 'NOUN', 'VERB', 'VERB', 'DET', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', 'NOUN', '.', 'CONJ', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'ADV', 'ADV', 'ADJ', '.', 'NOUN', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', '.')
Sentence 95: ('He', 'swung', 'around', ',', 'eyes', 'toward', 'the', 'bedroom', ',', 'some', 'fifteen', 'feet', 'away', '.')
Labels 95: ('PRON', 'VERB', 'ADV', '.', 'NOUN', 'ADP', 'DET', 'NOUN', '.', 'DET', 'NUM', 'NOUN', 'ADV', '.')
Sentence 96: ('It', 'is', 'possible', 'that', 'certain', 'mutational', 'forms', 'may', 'be', 'produced', 'such', 'as', 'antibiotic', 'resistant', 'strains', '.')
Labels 96: ('PRON', 'VERB', 'ADJ', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADJ', 'ADP', 'NOUN', 'ADJ', 'NOUN', '.')
Sentence 97: ('Few', 'passed', "''", '.')
Labels 97: ('ADJ', 'VERB', '.', '.')
Sentence 98: ('An', 'examination', 'of', 'some', 'forty', 'catalogs', 'of', 'schools', 'offering', 'courses', 'in', 'interior', 'design', ',', 'for', 'the', 'most', 'part', 'schools', 'accredited', 'by', 'membership', 'in', 'the', 'National', 'Association', 'of', 'Schools', 'of', 'Art', ',', 'and', 'a', 'further', '``', 'on', 'the', 'spot', "''", 'inspection', 'of', 'a', 'number', 'of', 'schools', ',', 'show', 'their', 'courses', 'adhere', 'pretty', 'closely', 'to', 'the', 'recommendations', '.')
Labels 98: ('DET', 'NOUN', 'ADP', 'DET', 'NUM', 'NOUN', 'ADP', 'NOUN', 'VERB', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.', 'ADP', 'DET', 'ADJ', 'NOUN', 'NOUN', 'VERB', 'ADP', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN', '.', 'CONJ', 'DET', 'ADJ', '.', 'ADP', 'DET', 'NOUN', '.', 'NOUN', 'ADP', 'DET', 'NOUN', 'ADP', 'NOUN', '.', 'VERB', 'DET', 'NOUN', 'VERB', 'ADV', 'ADV', 'ADP', 'DET', 'NOUN', '.')
Sentence 99: ('It', 'changes', 'the', 'answers', 'to', '``', 'Who', 'should', 'do', 'what', ',', 'and', 'where', "''", '?', '?')
Labels 99: ('PRON', 'VERB', 'DET', 'NOUN', 'ADP', '.', 'PRON', 'VERB', 'VERB', 'DET', '.', 'CONJ', 'ADV', '.', '.', '.')
Sentence 100: ('the', 'clay', 'he', 'used', 'plastically', 'to', 'suggest', 'soft', 'moving', 'flesh', ',', 'as', 'in', 'an', 'abdomen', ',', 'in', 'a', 'reclining', 'torso', ';', ';')
Labels 100: ('DET', 'NOUN', 'PRON', 'VERB', 'ADV', 'PRT', 'VERB', 'ADJ', 'VERB', 'NOUN', '.', 'ADP', 'ADP', 'DET', 'NOUN', '.', 'ADP', 'DET', 'VERB', 'NOUN', '.', '.')
</code>
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus._____no_output_____
<code>
# use Dataset.stream() to iterate over (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
</code>
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts._____no_output_____
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus._____no_output_____
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences._____no_output_____
<code>
def pair_counts(sequences_A, sequences_B):
    """Return a dictionary keyed to each unique value in the first sequence list
    that counts the number of occurrences of the corresponding value from the
    second sequences list.
    For example, if sequences_A is tags and sequences_B is the corresponding
    words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
    you should return a dictionary such that pair_counts[NOUN][time] == 1244
    """
    # TODO: Finish this function!
    # key by the value from the first sequence, count values from the second
    pair_count = {}
    for seq_a, seq_b in zip(sequences_A, sequences_B):
        for a, b in zip(seq_a, seq_b):
            if a not in pair_count:
                pair_count[a] = {}
            pair_count[a][b] = pair_count[a].get(b, 0) + 1
    return pair_count
# Calculate C(t_i, w_i): pass the tags first so the result is keyed by tag
emission_counts = pair_counts(data.Y, data.X)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably._____no_output_____
<code>
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple, defaultdict
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.training_set.X, data.training_set.Y)  # keyed by word, counting tags
mfc_table = {word: max(tags, key=tags.get) for word, tags in word_counts.items()}
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')_____no_output_____
</code>
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger._____no_output_____
<code>
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions_____no_output_____
</code>
### Example Decoding Sequences with MFC Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
</code>
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. _____no_output_____
<code>
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [(), (), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions_____no_output_____
</code>
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus._____no_output_____
<code>
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.02%
</code>
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$\hat{t}_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i)\, P(t_i|t_{i-1})$$
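Because the tagger also estimates the start and end distributions described above, the full quantity it maximizes for an $n$-word sentence factors as:
$$P(t_1|start)\,P(w_1|t_1)\left[\prod_{i=2}^{n} P(t_i|t_{i-1})\,P(w_i|t_i)\right]P(end|t_n)$$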
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information._____no_output_____
### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$_____no_output_____
<code>
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
    return Counter(sequences)
# TODO: call unigram_counts with a list of tag sequences from the training set
# stream() yields (word, tag) pairs across all sentences; collect the flat list of tags
tags = [tag for word, tag in data.training_set.stream()]
tag_unigrams = unigram_counts(tags)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')_____no_output_____
</code>
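Turning a count into the unigram probability defined above is a single division. As a quick sanity check, here is a sketch using the `tag_unigrams` and `data.training_set.N` values computed above (the exact figure depends on the random train/test split):
```python
# P(NOUN) = C(NOUN) / N, estimated on the training set
p_noun = tag_unigrams['NOUN'] / data.training_set.N
print("P(NOUN) = {:.3f}".format(p_noun))  # roughly 0.24 on the Brown corpus
```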
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
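For example, using the illustrative counts quoted in the docstrings above and below, if $C(NOUN, VERB) = 61582$ and $C(NOUN) = 275558$, then $P(VERB|NOUN) \approx 61582 / 275558 \approx 0.22$.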
_____no_output_____
<code>
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
    # TODO: Finish this function!
    return Counter(sequences)
# TODO: call bigram_counts with a list of tag sequences from the training set
# pair every two consecutive tags within each training sentence, so that all
# overlapping bigrams are counted and no pair crosses a sentence boundary
tag_pairs = [(seq[i], seq[i + 1]) for seq in data.training_set.Y for i in range(len(seq) - 1)]
tag_bigrams = bigram_counts(tag_pairs)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the bigram probabilities of a sequence starting with each tag._____no_output_____
<code>
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
    # TODO: Finish this function!
    return Counter(sequences)
# TODO: Calculate the count of each tag starting a sequence
starting_tags = [seq[0] for seq in data.training_set.Y]
tag_starts = starting_counts(starting_tags)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the bigram probabilities of a sequence ending with each tag._____no_output_____
<code>
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
    dictionary such that your_ending_counts[DET] == 18
"""
    # TODO: Finish this function!
    return Counter(sequences)
# TODO: Calculate the count of each tag ending a sequence
ending_tags = [seq[-1] for seq in data.training_set.Y]
tag_ends = ending_counts(ending_tags)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$_____no_output_____
<code>
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
tag_states = {}
for tag in data.training_set.tagset:
emission_prob = {word:emission_counts[tag][word]/tag_unigrams[tag] for word in emission_counts[tag]}
tag_emission = DiscreteDistribution(emission_prob)
tag_states[tag] = State(tag_emission, name=tag)
basic_model.add_states(tag_states[tag])
basic_model.add_transition(basic_model.start, tag_states[tag], tag_starts[tag]/len(data.training_set))
basic_model.add_transition(tag_states[tag], basic_model.end, tag_ends[tag]/len(data.training_set))
for tag1 in data.training_set.tagset:
for tag2 in data.training_set.tagset:
basic_model.add_transition(tag_states[tag1], tag_states[tag2], tag_bigrams[tag1,tag2]/tag_unigrams[tag1])
# The edges between states for the observed transition frequencies
# P(tag_i | tag_i-1) were added in the nested loops above.
show_model(basic_model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=True)
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')_____no_output_____hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')training accuracy basic hmm model: 97.53%
testing accuracy basic hmm model: 96.16%
</code>
### Example Decoding Sequences with the HMM Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")_____no_output_____
</code>
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>_____no_output_____
<code>
!!jupyter nbconvert *.ipynb_____no_output_____
</code>
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because splitting the same amount of data over a larger tagset leaves fewer samples per tag, and more tag combinations are never observed at all. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero pseudocount to every count so that unobserved events receive a non-zero probability (a minimal sketch is given after this list).
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
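For the Laplace smoothing bullet above, here is a minimal sketch, assuming the `Counter`-based `tag_bigrams` and `tag_unigrams` tables built earlier in this project; the helper name and the pseudocount `alpha` are illustrative choices, not part of the project template:
<code>
def smoothed_transition_prob(tag1, tag2, tag_bigrams, tag_unigrams,
                             tagset, alpha=0.01):
    """P(tag2 | tag1) with additive (Laplace) smoothing.

    Every bigram count is shifted up by `alpha`, so unseen bigrams get a
    small non-zero probability while the distribution over all values of
    `tag2` for a fixed `tag1` still sums to 1.
    """
    numerator = tag_bigrams.get((tag1, tag2), 0) + alpha
    denominator = tag_unigrams.get(tag1, 0) + alpha * len(tagset)
    return numerator / denominator_____no_output_____
</code>
Swapping this smoothed estimate in for the raw ratio used when adding transitions is the only change needed in the model-building loop.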
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data in the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets._____no_output_____
<code>
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]_____no_output_____
</code>
| {
"repository": "urviyi/artificial-intelligence",
"path": "Projects/4_HMM Tagger/HMM Tagger.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 109920,
"hexsha": "cb740baca0b7edcc8382a8953aecd89f505bf4f1",
"max_line_length": 21052,
"avg_line_length": 72.3157894737,
"alphanum_fraction": 0.6073144105
} |
# Notebook from MeganStalker/Advanced_Practical_Chemistry_Year_3
Path: Week_3/week_3.ipynb
# Week 3
## Introduction to Solid State _____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
from polypy.read import History
from polypy.msd import MSD
from polypy import plotting
def get_diffusion(file, atom):
    """Return the tabulated diffusion data for `atom` from a DL_POLY
    output file, split into its whitespace-separated fields."""
    with open(file) as f:
        in_table = False
        for line in f:
            # The diffusion coefficients appear in a table introduced by a
            # header line containing "atom" and "D".
            if "atom D " in line:
                in_table = True
            # Once inside the table, return the row for the requested atom.
            if in_table and str(atom) in line:
                return line.split()_____no_output_____
</code>
# Background #
Now that you are familiar with molecular dynamics, you are going to use it to tackle some real world problems.
The transport properties of a material underpin many of the properties exploited in modern technological applications. For example, solid oxide fuel cell (SOFC) materials, which are an alternative to batteries, depend on the movement of charge carriers through the solid electrolyte. Another example is nuclear fuel materials, which oxidise and degrade; this corrosive behaviour depends on the diffusion of oxygen into the lattice.
Due to the importance of the transport properties of these materials, scientists and engineers spend large amounts of their time trying to optimise these properties by using different stoichiometries, introducing defects and using different synthesis techniques._____no_output_____# Aim and Objectives #
The **Aim** of the next **five weeks** is to **investigate** the transport properties of a simple fluorite material - CaF$_2$.
The **first objective** is to **investigate** how the transport properties of CaF$_2$ are affected by temperature
The **second objective** is to **investigate** how the transport properties of CaF$_2$ are affected by structural defects (Schottky and Frenkel)
The **third objective** is to **investigate** how the transport properties of CaF$_2$ are affected by chemical dopants (e.g. different cations)
A rough breakdown looks as follows:
**Week 3**
- Molecular dynamics simulations of stoichiometric CaF$_2$
**Week 4**
- Molecular dynamics simulations of CaF$_2$ containing Schottky defects
**Week 5**
- Molecular dynamics simulations of CaF$_2$ containing Frenkel defects
**Week 6**
- Molecular dynamics simulations of CaF$_2$ containing various dopants
**Week 7**
- Molecular dynamics simulations of CaF$_2$ containing various dopants
By the end of these **five weeks** you will be able to:
- **Perform** molecular dynamics simulations at different temperatures
- **Manipulate** the input files
- **Adjust** the ensemble for the simulation
- **Examine** the volume and energy of different simulations
- **Apply** VMD to visualize the simulation cell and evaluate radial distribution functions
The **Aim** of this **week** (week 3) is to **investigate** the temperature-dependence of the transport properties of a simple fluorite material CaF$_2$ using molecular dynamics (MD).
The **first objective** is to **familiarise** yourself with the molecular simulation software package <code>DL_POLY</code>
The **second objective** is to **complete** a tutorial which demonstrates how to calculate diffusion coefficients
The **third objective** is to **complete** a tutorial which demonstrates how to **calculate** the activation energy barrier of F diffusion _____no_output_____## Introduction to DL_POLY
<code>DL_POLY</code> is a molecular dynamics (MD) program maintained by Daresbury laboratories. In contrast to <code>pylj</code>, <code>DL_POLY</code> is a three-dimensional MD code that is used worldwide by computational scientists for molecular simulation, but it should be noted that the theory is exactly the same and any understanding gained from <code>pylj</code> is completely applicable to <code>DL_POLY</code>.
For the next five weeks you will use <code>DL_POLY</code> to run short MD simulations on CaF$_2$. You first need to understand the input files required for <code>DL_POLY</code>.
<code>**CONTROL**</code>
This is the file that contains all of the simulation parameters, e.g. simulation temperature, pressure, number of steps etc.
<code>**CONFIG**</code>
This is the file that contains the structure - i.e. the atomic coordinates of each atom.
<code>**FIELD**</code>
This is the file that contains the force field or potential model e.g. Lennard-Jones. _____no_output_____# Exercise 1: Setting Up an MD Simulation#
First, we will use <code>METADISE</code> to produce <code>DL_POLY</code> input files.
Contained within the folder <code>Input/</code> you will find a file called <code>input.txt</code>.
This is the main file that you will interact with over the next five weeks and is the input file for <code>METADISE</code> which generates the 3 <code>DL_POLY</code> input files: <code>FIELD</code>, <code>CONTROL</code> and <code>CONFIG</code>.
Essentially it is easier to meddle with <code>input.txt</code> than it is to meddle with the 3 <code>DL_POLY</code> files every time you want to change something.
To run <code>METADISE</code> we will use the <code>subprocess</code> <code>python</code> module.
To use <code>subprocess</code>, specify the program you want to run and the directory in which to run it; you will need to ensure the file path is correct.
To **generate** the 3 <code>DL_POLY</code> input files: <code>FIELD</code>, <code>CONTROL</code> and <code>CONFIG</code>, **run** the cell below:
#### It is essential that the codes that were downloaded from [here](https://people.bath.ac.uk/chsscp/teach/adv.bho/progs.zip) are in the Codes/ folder in the parent directory, or this following cell will crash. _____no_output_____
<code>
subprocess.call('../Codes/metadise.exe', cwd='Input/')
os.rename('Input/control_o0001.dlp', 'Input/CONTROL')
os.rename('Input/config__o0001.dlp', 'Input/CONFIG')
os.rename('Input/field___o0001.dlp', 'Input/FIELD')_____no_output_____
</code>
Now you should have a <code>CONFIG</code>, <code>CONTROL</code> and <code>FIELD</code> file within the <code>Input/</code> directory.
In theory you could just call the <code>DL_POLY</code> program in this directory and your simulation would run.
However, we need to tweak the <code>CONTROL</code> file in order to set up our desired simulation.
1. **Make** a new subdirectory in the <code>week 3</code> directory named <code>"Example/"</code> and copy <code>CONFIG</code>, <code>CONTROL</code> and <code>FIELD</code> to that subdirectory.
2. Now **edit** the <code>CONTROL</code> file to change the following:
<code>Temperature 300 ---> Temperature 1500
Steps 5001 ---> Steps 40000
ensemble nve ---> ensemble npt hoover 0.1 0.5
trajectory nstraj= 1 istraj= 250 keytrj=0 ---> trajectory nstraj= 0 istraj= 100 keytrj=0</code>
3. Now your simulation is ready, **check** the structure before you run the simulation.
It is always good to **check** your structure before (<code>CONFIG</code>) and after (<code>REVCON</code>) the simulation.
You can view the <code>CONFIG</code> and <code>REVCON</code> files in three dimensions using the <code>VESTA</code> program. <code>VESTA</code> can generate nice pictures which will look very good in a lab report.
<center>
<br>
<img src="./figures/vesta.png\" width=\"400px\">
<i>Figure 1. Fluorite CaF$_2$ unit cell visualised in VESTA.</i>
<br>
</center>_____no_output_____# Exercise 2: Running an MD Simulation
Now we have <code>DL_POLY</code> input files, we will run an MD simulation using <code>DL_POLY</code>.
1. **Run** <code>DL_POLY</code> from within the notebook using the command below.
Keep in mind that this simulation will take 20 or so minutes so be patient.
If you are not comfortable with running things through this notebook then you can copy and paste the <code>dlpoly_classic.exe</code> executable into the <code>Example/</code> subdirectory and then **double click** the <code>.exe</code> file_____no_output_____
<code>
subprocess.call('../Codes/dlpoly_classic.exe', cwd='Example/')_____no_output_____
</code>
# Exercise 3: Inspecting an MD Simulation
Now we have run an MD simulation using <code>DL_POLY</code> we can analyse the data using <code>VESTA</code>.
Once <code>DL_POLY</code> has completed you will find several files relating to your simulation.
<code> **HISTORY** </code>
This file contains the configuration of your system at each step during the simulation, known as a _trajectory_. You can view this as a movie using <code>VMD</code>
<code> **STATIS** </code>
Contains the statistics at each step of the simulation.
<code> **OUTPUT** </code>
Contains various properties of the simulation.
<code> **REVCON** </code>
This is the configuration at the end of the simulation. Can be viewed in <code>VESTA</code>. **Check** to see how it has changed, compare it to the <code>CONFIG</code> file.
_____no_output_____# Exercise 4: Analysing the Diffusion Properties
Now we have inspected the final structure from the simulation, we can calculate the diffusion coefficient.
## Mean Squared Displacements - Calculating Diffusion Coefficients
As we have seen, molecules in liquids, gases and solids do not stay in the same place and move constantly. Think about a drop of dye in a glass of water; as time passes the dye distributes throughout the water. This process is called diffusion and is common throughout nature.
Using the dye as an example, the motion of a dye molecule is not simple. As it moves it is jostled by collisions with other molecules, preventing it from moving in a straight path. If the path is examined in close detail, it will be seen to be a good approximation to a _random walk_.
In mathematics, a random walk is a series of steps, each taken in a random direction. This was analysed by Albert Einstein in a study of _Brownian motion_ and he showed that the mean square of the distance travelled by a particle following a random walk is proportional to the time elapsed, as given by:
\begin{align}
\big \langle r^2 \big \rangle & = 6 D t + C
\end{align}
where $\big \langle r^2 \big \rangle$ is the mean squared distance, $t$ is time, $D$ is the diffusion coefficient and $C$ is a constant.
## What is the Mean Squared Displacement?
Going back to the example of the dye in water, let's assume for the sake of simplicity that we are in one dimension. Each step can either be forwards or backwards and we cannot predict which.
From a given starting position, what distance is our dye molecule likely to travel after 1000 steps? This can be determined simply by adding together the steps, taking into account the fact that steps backwards subtract from the total, while steps forward add to the total. Since both forward and backward steps are equally probable, we come to the surprising conclusion that the probable distance travelled sums up to zero.
By adding the square of the distance we will always be adding positive numbers to our total, which now increases linearly with time. Based upon equation 1 it should now be clear that a plot of $\big \langle r^2 \big \rangle$ vs time will produce a line, the gradient of which is equal to $6D$, giving us direct access to the diffusion coefficient of the system.
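To see this numerically, here is a minimal sketch (not part of the lab scripts) that simulates many three-dimensional random walkers with NumPy and confirms that the mean squared displacement grows linearly with the number of steps:
<code>
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 5000, 1000

# Each walker takes a step of +/-1 along each of the three axes.
steps = rng.choice([-1, 1], size=(n_walkers, n_steps, 3))
positions = np.cumsum(steps, axis=1)

# Mean squared displacement at each step, averaged over all walkers.
msd = np.sum(positions**2, axis=2).mean(axis=0)

# For this walk <r^2> = 3t, i.e. linear in the number of steps t.
print(msd[99] / 100, msd[999] / 1000)  # both close to 3.0_____no_output_____
</code>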
Let's explore this with an example.
1. **Run** a short <code>DL_POLY</code> simulation on the input files provided._____no_output_____You will run a small MSD program called <code>MSD.py</code> to analyse your simulation results.
First you need to **read** in the data. The <code>HISTORY</code> file contains a list of the atomic coordinates held by the atoms during the simulation.
2. **Run** the cell below to read the <code>HISTORY</code> file into the <code>Jupyter Notebook</code>_____no_output_____
<code>
## Provide the path to the simulation and the atom that you want data for.
data = History("Example/HISTORY", "F")_____no_output_____
</code>
<code>data</code> is a class object containing information about the trajectory.
More information can be found here https://polypy.readthedocs.io/en/latest/reading_data.html and here https://github.com/symmy596/Polypy/blob/master/polypy/read.py .
The next step is to calculate the MSD.
3. **Run** the cell below to calculate the MSD of the chosen atom throughout the course of the simulation_____no_output_____
<code>
# Run the MSD calculation
f_msd = MSD(data.trajectory, sweeps=2)
output = f_msd.msd()_____no_output_____
</code>
The MSD calculation function returns an object with information about the MSD calculation.
More information and a full tutorial on this functionality can be found here https://polypy.readthedocs.io/en/latest/msd_tutorial.html
4. **Run** the cell below to give plots of the MSD which have a nice linear relationship. _____no_output_____
<code>
ax = plotting.msd_plot(output)
plt.show()_____no_output_____print("Three Dimensional Diffusion Coefficient", output.xyz_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in X", output.x_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Y", output.y_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Z", output.z_diffusion_coefficient())_____no_output_____
</code>
# Exercise 5: The Effect of Simulation Length
Now we have calculated the diffusion coefficient, we can investigate the influence of simulation length on the diffusion coefficient.
It is important to consider the length of your simulation (the number of steps).
1. **Create** a new folder called <code>"Example_2/"</code>
2. **Copy** the <code>CONFIG</code>, <code>FIELD</code> and <code>CONTROL</code> files from your previous simulation
3. **Change** the number of steps to 10000
4. **Rerun** the simulation by **running** the cell below_____no_output_____
<code>
subprocess.call('../Codes/dlpoly_classic.exe', cwd='Example_2/')_____no_output_____
</code>
5. **Run** the cell below to calculate and plot the MSD of the chosen atom throughout the course of the simulation_____no_output_____
<code>
data = History("Example_2/HISTORY", "F")
# Run the MSD calculation
f_msd = MSD(data.trajectory, sweeps=2)
output = f_msd.msd()
ax = plotting.msd_plot(output)
plt.show()_____no_output_____print("Three Dimensional Diffusion Coefficient", output.xyz_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in X", output.x_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Y", output.y_diffusion_coefficient())
print("One Dimensional Diffusion Coefficient in Z", output.z_diffusion_coefficient())_____no_output_____
</code>
You will hopefully see that your MSD plot has become considerably less linear. This shows that your simulation has not run long enough and your results will be unreliable.
You will hopefully also see a change to the value of your diffusion coefficient.
**The length of your simulation is something that you should keep in mind for the next 5 weeks.** _____no_output_____# Exercise 6: Calculating the Activation Energy_____no_output_____Now we have investigated the influence of simulation length on the diffusion coefficient, we can calculate the activation energy for F diffusion by applying the Arrhenius equation.
To apply the Arrhenius equation, diffusion coefficients from a range of temperatures are required.
Common sense and chemical intuition suggest that the higher the temperature, the faster a given chemical reaction will proceed. Quantitatively, this relationship between the rate a reaction proceeds and the temperature is determined by the Arrhenius Equation.
At higher temperatures, molecules collide more often and with greater kinetic energy, so a larger fraction of collisions can overcome the activation energy. The activation energy is the minimum amount of energy required for a reaction to happen.
\begin{align}
k = A e^{-E_a / RT}
\end{align}
where $k$ is the rate coefficient, $A$ is a constant, $E_a$ is the activation energy, $R$ is the universal gas constant, and $T$ is the temperature (in kelvin)._____no_output_____# Exercise 7: Putting it All Together
Using what you have learned through the tutorials above, your task this week is to calculate the activation energy of F diffusion in CaF$_2$.
1. You will need to **select** a temperature range and carry out simulations at different temperatures within that range.
#### Questions to answer:
- In what temperature range is CaF$_2$ completely solid i.e. no diffusion?
- In what range is fluorine essentially liquid i.e. fluorine diffusion with no calcium diffusion?
- What is the melting temperature of CaF$_2$?
- Plot an Arrhenius plot and determine the activation energies in each temperature range - you will need to rearrange the equation (a sketch is given after this list).
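Below is a minimal sketch of the Arrhenius analysis, assuming you have already collected F diffusion coefficients (e.g. with the <code>get_diffusion</code> function defined at the start of this notebook); the temperatures and diffusion coefficients shown are placeholders, not real results:
<code>
import numpy as np
from scipy.stats import linregress

# Placeholder data - replace with your own simulation results.
temperatures = np.array([1100., 1200., 1300., 1400., 1500.])   # K
d_coeffs = np.array([1.2e-7, 4.5e-7, 1.3e-6, 3.1e-6, 6.4e-6])  # diffusion coefficients

# Rearranging D = A * exp(-Ea / (R T)) gives ln(D) = ln(A) - Ea / (R T),
# so a plot of ln(D) against 1/T is a straight line with slope -Ea / R.
fit = linregress(1.0 / temperatures, np.log(d_coeffs))
R = 8.314  # J mol^-1 K^-1
activation_energy = -fit.slope * R
print("Ea = {:.1f} kJ/mol".format(activation_energy / 1000))_____no_output_____
</code>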
You are encouraged to split the work up within your group and to learn how to view the simulation "movie" using VMD (ask a demonstrator). VMD is a fantastic program that allows you to visualise your simulation; included below is a video showing a short snippet of an MD simulation of CaF$_2$. A single F atom has been highlighted to show that diffusion is occurring. _____no_output_____
<code>
%%HTML
<div align="middle">
<video width="80%" controls>
<source src="./figures/VMD_example.mp4" type="video/mp4">
</video></div>_____no_output_____
</code>
Furthermore, VMD can also be used to generate images showing the entire trajectory of the simulation, e.g.
<center>
<br>
<img src="./figures/CaF2.png\" width=\"400px\">
<i>Figure 2. A figure showing all positions occupied by F during an MD simulation at 1500 K. F positions are shown in orange and Ca atoms are shown in green.</i>
<br>
</center>
_____no_output_____To save you time you can use the function declared at the start of this notebook to pull out a diffusion coefficient directly from the simulation output file. <code>MSD.py</code> is a small code to allow visualisation of the MSD plot but it is not necessary every time you want the diffusion coefficient.
It is up to you how you organise/create your directories but it is recommended that you start a new notebook.
Use the commands/functions used in this notebook to:
1. **Generate** your input files
2. **Run** <code>DL_POLY</code>
3. **Extract** the diffusion coefficient of F diffusion
Then write your own code to:
4. **Generate** an Arrhenius plot
5. **Calculate** the activation energies of F diffusion
If you finish early then feel free to start the week 4 exercises. _____no_output_____
| {
"repository": "MeganStalker/Advanced_Practical_Chemistry_Year_3",
"path": "Week_3/week_3.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": null,
"size": 24069,
"hexsha": "cb75a2e0d6e9838de4c09f26e41c14c6467d0885",
"max_line_length": 461,
"avg_line_length": 42.8274021352,
"alphanum_fraction": 0.6368773111
} |
# Notebook from gfeiden/MagneticUpperSco
Path: notes/.ipynb_checkpoints/convective_structure-checkpoint.ipynb
# Radiative Cores & Convective Envelopes
Analysis of how magnetic fields influence the extent of radiative cores and convective envelopes in young, pre-main-sequence stars.
Begin with some preliminaries._____no_output_____
<code>
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d_____no_output_____
</code>
Load a standard and magnetic isochrone with equivalent ages. Here, the adopted age is 10 Myr to look specifically at the predicted internal structure of stars in Upper Scorpius._____no_output_____
<code>
# read standard 10 Myr isochrone
iso_std = np.genfromtxt('../models/iso/std/dmestar_00010.0myr_z+0.00_a+0.00_phx.iso')
# read standard 5 Myr isochrone
iso_5my = np.genfromtxt('../models/iso/std/dmestar_00005.0myr_z+0.00_a+0.00_phx.iso')
# read magnetic isochrone
iso_mag = np.genfromtxt('../models/iso/mag/dmestar_00010.0myr_z+0.00_a+0.00_phx_magBeq.iso')_____no_output_____
</code>
The magnetic isochrone is known to begin at a lower mass than the standard isochrone and both isochrones have gaps where individual models failed to converge. Gaps need not occur at the same masses along each isochrone. To overcome these inconsistencies, we can interpolate both isochrones onto a pre-defined mass domain._____no_output_____
<code>
masses = np.arange(0.09, 1.70, 0.01) # new mass domain
# create an interpolation curve for a standard isochrone
icurve = interp1d(iso_std[:,0], iso_std, axis=0, kind='cubic')
# and transform to new mass domain
iso_std_eq = icurve(masses)
# create interpolation curve for standard 5 Myr isochrone
icurve = interp1d(iso_5my[:,0], iso_5my, axis=0, kind='linear')
# and transform to a new mass domain
iso_5my_eq = icurve(masses)
# create an interpolation curve for a magnetic isochrone
icurve = interp1d(iso_mag[:,0], iso_mag, axis=0, kind='cubic')
# and transform to new mass domain
iso_mag_eq = icurve(masses)_____no_output_____
</code>
Let's compare the interpolated isochrones to the original, just to be sure that the resulting isochrones are smooth._____no_output_____
<code>
plt.plot(10**iso_std[:, 1], iso_std[:, 3], '-', lw=4, color='red')
plt.plot(10**iso_std_eq[:, 1], iso_std_eq[:, 3], '--', lw=4, color='black')
plt.plot(10**iso_mag[:, 1], iso_mag[:, 3], '-', lw=4, color='blue')
plt.plot(10**iso_mag_eq[:, 1], iso_mag_eq[:, 3], '--', lw=4, color='black')
plt.grid()
plt.xlim(2500., 8000.)
plt.ylim(-2, 1.1)
plt.xlabel('$T_{\\rm eff}\ [K]$', fontsize=20)
plt.ylabel('$\\log(L / L_{\\odot})$', fontsize=20)_____no_output_____
</code>
The interpolation appears to have worked well as there are no egregious discrepancies between the real and interpolated isochrones.
We can now analyze the properties of the radiative cores and the convective envelopes. Beginning with the radiative core, we can examine how much of the total stellar mass is contained in the radiative core as a function of stellar properties._____no_output_____
<code>
# as a function of stellar mass
plt.plot(iso_std_eq[:, 0], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0],
'--', lw=3, color='#333333')
plt.plot(iso_5my_eq[:, 0], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0],
'-.', lw=3, color='#333333')
plt.plot(iso_mag_eq[:, 0], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0],
'-' , lw=4, color='#01a9db')
plt.grid()
plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20)
plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20)_____no_output_____# as a function of effective temperature
plt.plot(10**iso_std_eq[:, 1], 1.0 - iso_std_eq[:, -1]/iso_std_eq[:, 0],
'--', lw=3, color='#333333')
plt.plot(10**iso_5my_eq[:, 1], 1.0 - iso_5my_eq[:, -1]/iso_5my_eq[:, 0],
'-.', lw=3, color='#333333')
plt.plot(10**iso_mag_eq[:, 1], 1.0 - iso_mag_eq[:, -1]/iso_mag_eq[:, 0],
'-' , lw=4, color='#01a9db')
plt.grid()
plt.xlim(3000., 7000.)
plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20)
plt.ylabel('$M_{\\rm rad\ core}\ /\ M_{\\star}$', fontsize=20)_____no_output_____
</code>
Now let's look at the relative difference in radiative core mass as a function of these stellar properties._____no_output_____
<code>
# as a function of stellar mass (note, there is a minus sign switch b/c we tabulate
# convective envelope mass)
plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_std_eq[:, -1]),
'-' , lw=4, color='#01a9db')
plt.plot(iso_mag_eq[:, 0], (iso_mag_eq[:, -1] - iso_5my_eq[:, -1]),
'--' , lw=4, color='#01a9db')
plt.grid()
plt.xlabel('${\\rm Stellar Mass}\ [M_{\\odot}]$', fontsize=20)
plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20)_____no_output_____
</code>
Analysis_____no_output_____
<code>
# interpolate into the temperature domain
Teffs = np.log10(np.arange(3050., 7000., 50.))
icurve = interp1d(iso_std[:, 1], iso_std, axis=0, kind='linear')
iso_std_te = icurve(Teffs)
icurve = interp1d(iso_5my[:, 1], iso_5my, axis=0, kind='linear')
iso_5my_te = icurve(Teffs)
icurve = interp1d(iso_mag[:, 1], iso_mag, axis=0, kind='linear')
iso_mag_te = icurve(Teffs)
# as a function of effective temperature
# (note, there is a minus sign switch b/c we tabulate convective envelope mass)
#
# plotting: standard - magnetic where + implies
plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] -
iso_std_te[:, 0] + iso_std_te[:, -1]),
'-' , lw=4, color='#01a9db')
plt.plot(10**Teffs, (iso_mag_te[:, 0] - iso_mag_te[:, -1] -
iso_5my_te[:, 0] + iso_5my_te[:, -1]),
'--' , lw=4, color='#01a9db')
np.savetxt('../models/rad_core_comp.txt',
np.column_stack((iso_std_te, iso_mag_te)),
fmt="%10.6f")
np.savetxt('../models/rad_core_comp_dage.txt',
np.column_stack((iso_5my_te, iso_mag_te)),
fmt="%10.6f")
plt.grid()
plt.xlim(3000., 7000.)
plt.xlabel('${\\rm Effective Temperature}\ [K]$', fontsize=20)
plt.ylabel('$\\Delta M_{\\rm rad\ core}\ [M_{\\odot}]$', fontsize=20)_____no_output_____
</code>
Stars are fully convective below 3500 K, regardless of whether there is magnetic inhibition of convection. On the other extreme, stars hotter than about 6500 K are approaching ignition of the CN-cycle, which coincides with the disappearance of the outer convective envelope. However, delayed contraction means that stars of a given effective temperature have a higher mass in the magnetic case, which leads to a slight mass offset once the radiative core comprises nearly 100% of the star. Note that our use of the term "radiative core" is technically invalid in this regime due to the presence of a convective core. _____no_output_____
| {
"repository": "gfeiden/MagneticUpperSco",
"path": "notes/.ipynb_checkpoints/convective_structure-checkpoint.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 127108,
"hexsha": "cb7672ebe59ce31b26e75d58fc1aa0c6f1fdb472",
"max_line_length": 24178,
"avg_line_length": 327.5979381443,
"alphanum_fraction": 0.9197296787
} |
# Notebook from teo-milea/federated
Path: docs/tutorials/sparse_federated_learning.ipynb
##### Copyright 2021 The TensorFlow Federated Authors._____no_output_____
<code>
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License._____no_output_____
</code>
# Client-efficient large-model federated learning via `federated_select` and sparse aggregation
_____no_output_____<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/sparse_federated_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.20.0/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.20.0/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/sparse_federated_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>_____no_output_____
This tutorial shows how TFF can be used to train a very large model where each client device only downloads and updates a small part of the model, using
`tff.federated_select` and sparse aggregation. While this tutorial is fairly self-contained, the [`tff.federated_select` tutorial](https://www.tensorflow.org/federated/tutorials/federated_select) and [custom FL algorithms tutorial](https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm) provide good introductions to some of the techniques used here.
Concretely, in this tutorial we consider logistic regression for multi-label classification, predicting which "tags" are associated with a text string based on a bag-of-words feature representation. Importantly, communication and client-side computation costs are controlled by a fixed constant (`MAX_TOKENS_SELECTED_PER_CLIENT`), and *do not* scale with the overall vocabulary size, which could be extremely large in practical settings._____no_output_____
<code>
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()_____no_output_____import collections
import itertools
import numpy as np
from typing import Callable, List, Tuple
import tensorflow as tf
import tensorflow_federated as tff
tff.backends.native.set_local_python_execution_context()_____no_output_____
</code>
Each client will `federated_select` the rows of the model weights for at most this many unique tokens. This upper-bounds the size of the client's local model and the amount of server -> client (`federated_select`) and client -> server (`federated_aggregate`) communication performed.
This tutorial should still run correctly even if you set this as small as 1 (ensuring not all tokens from each client are selected) or to a large value, though model convergence may be affected._____no_output_____
<code>
MAX_TOKENS_SELECTED_PER_CLIENT = 6_____no_output_____
</code>
We also define a few constants for various types. For this colab, a **token** is an integer identifier for a particular word after parsing the dataset. _____no_output_____
<code>
# There are some constraints on types
# here that will require some explicit type conversions:
# - `tff.federated_select` requires int32
# - `tf.SparseTensor` requires int64 indices.
TOKEN_DTYPE = tf.int64
SELECT_KEY_DTYPE = tf.int32
# Type for counts of token occurences.
TOKEN_COUNT_DTYPE = tf.int32
# A sparse feature vector can be thought of as a map
# from TOKEN_DTYPE to FEATURE_DTYPE.
# Our features are {0, 1} indicators, so we could potentially
# use tf.int8 as an optimization.
FEATURE_DTYPE = tf.int32_____no_output_____
</code>
# Setting up the problem: Dataset and Model
We construct a tiny toy dataset for easy experimentation in this tutorial. However, the format of the dataset is compatible with [Federated StackOverflow](https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data), and
the [pre-processing](https://github.com/google-research/federated/blob/0a558bac8a724fc38175ff4f0ce46c7af3d24be2/utils/datasets/stackoverflow_tag_prediction.py) and [model architecture](https://github.com/google-research/federated/blob/49a43456aa5eaee3e1749855eed89c0087983541/utils/models/stackoverflow_lr_models.py) are adopted from the StackOverflow
tag prediction problem of [*Adaptive Federated Optimization*](https://arxiv.org/abs/2003.00295)._____no_output_____## Dataset parsing and pre-processing_____no_output_____
<code>
NUM_OOV_BUCKETS = 1
BatchType = collections.namedtuple('BatchType', ['tokens', 'tags'])
def build_to_ids_fn(word_vocab: List[str],
tag_vocab: List[str]) -> Callable[[tf.Tensor], tf.Tensor]:
"""Constructs a function mapping examples to sequences of token indices."""
word_table_values = np.arange(len(word_vocab), dtype=np.int64)
word_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(word_vocab, word_table_values),
num_oov_buckets=NUM_OOV_BUCKETS)
tag_table_values = np.arange(len(tag_vocab), dtype=np.int64)
tag_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(tag_vocab, tag_table_values),
num_oov_buckets=NUM_OOV_BUCKETS)
def to_ids(example):
"""Converts a Stack Overflow example to a bag-of-words/tags format."""
sentence = tf.strings.join([example['tokens'], example['title']],
separator=' ')
# We represent that label (output tags) densely.
raw_tags = example['tags']
tags = tf.strings.split(raw_tags, sep='|')
tags = tag_table.lookup(tags)
tags, _ = tf.unique(tags)
tags = tf.one_hot(tags, len(tag_vocab) + NUM_OOV_BUCKETS)
tags = tf.reduce_max(tags, axis=0)
# We represent the features as a SparseTensor of {0, 1}s.
words = tf.strings.split(sentence)
tokens = word_table.lookup(words)
tokens, _ = tf.unique(tokens)
# Note: We could choose to use the word counts as the feature vector
# instead of just {0, 1} values (see tf.unique_with_counts).
tokens = tf.reshape(tokens, shape=(tf.size(tokens), 1))
tokens_st = tf.SparseTensor(
tokens,
tf.ones(tf.size(tokens), dtype=FEATURE_DTYPE),
dense_shape=(len(word_vocab) + NUM_OOV_BUCKETS,))
tokens_st = tf.sparse.reorder(tokens_st)
return BatchType(tokens_st, tags)
return to_ids_____no_output_____def build_preprocess_fn(word_vocab, tag_vocab):
@tf.function
def preprocess_fn(dataset):
to_ids = build_to_ids_fn(word_vocab, tag_vocab)
# We *don't* shuffle in order to make this colab deterministic for
# easier testing and reproducibility.
# But real-world training should use `.shuffle()`.
return dataset.map(to_ids, num_parallel_calls=tf.data.experimental.AUTOTUNE)
return preprocess_fn_____no_output_____
</code>
## A tiny toy dataset
We construct a tiny toy dataset with a global vocabulary of 12 words and 3 clients. This tiny example is useful for testing edge cases (for example,
we have two clients with fewer than `MAX_TOKENS_SELECTED_PER_CLIENT = 6` distinct tokens, and one with more) and developing the code.
However, the real-world use cases of this approach would be global vocabularies of 10s of millions or more, with perhaps 1000s of distinct tokens appearing on each client. Because the format of the data is the same, the extension to more realistic testbed problems, e.g. the `tff.simulation.datasets.stackoverflow.load_data()` dataset, should be straightforward.
First, we define our word and tag vocabularies._____no_output_____
<code>
# Features
FRUIT_WORDS = ['apple', 'orange', 'pear', 'kiwi']
VEGETABLE_WORDS = ['carrot', 'broccoli', 'arugula', 'peas']
FISH_WORDS = ['trout', 'tuna', 'cod', 'salmon']
WORD_VOCAB = FRUIT_WORDS + VEGETABLE_WORDS + FISH_WORDS
# Labels
TAG_VOCAB = ['FRUIT', 'VEGETABLE', 'FISH']_____no_output_____
</code>
Now, we create 3 clients with small local datasets. If you are running this tutorial in colab, it may be useful to use the "mirror cell in tab" feature to pin this cell and its output in order to interpret/check the output of the functions developed below._____no_output_____
<code>
preprocess_fn = build_preprocess_fn(WORD_VOCAB, TAG_VOCAB)
def make_dataset(raw):
d = tf.data.Dataset.from_tensor_slices(
# Matches the StackOverflow formatting
collections.OrderedDict(
tokens=tf.constant([t[0] for t in raw]),
tags=tf.constant([t[1] for t in raw]),
title=['' for _ in raw]))
d = preprocess_fn(d)
return d
# 4 distinct tokens
CLIENT1_DATASET = make_dataset([
('apple orange apple orange', 'FRUIT'),
('carrot trout', 'VEGETABLE|FISH'),
('orange apple', 'FRUIT'),
('orange', 'ORANGE|CITRUS') # 2 OOV tag
])
# 6 distinct tokens
CLIENT2_DATASET = make_dataset([
('pear cod', 'FRUIT|FISH'),
('arugula peas', 'VEGETABLE'),
('kiwi pear', 'FRUIT'),
('sturgeon', 'FISH'), # OOV word
('sturgeon bass', 'FISH') # 2 OOV words
])
# A client with all possible words & tags (13 distinct tokens).
# With MAX_TOKENS_SELECTED_PER_CLIENT = 6, we won't download the model
# slices for all tokens that occur on this client.
CLIENT3_DATASET = make_dataset([
(' '.join(WORD_VOCAB + ['oovword']), '|'.join(TAG_VOCAB)),
# Make the OOV token and 'salmon' occur in the largest number
# of examples on this client:
('salmon oovword', 'FISH|OOVTAG')
])
print('Word vocab')
for i, word in enumerate(WORD_VOCAB):
print(f'{i:2d} {word}')
print('\nTag vocab')
for i, tag in enumerate(TAG_VOCAB):
print(f'{i:2d} {tag}')Word vocab
0 apple
1 orange
2 pear
3 kiwi
4 carrot
5 broccoli
6 arugula
7 peas
8 trout
9 tuna
10 cod
11 salmon
Tag vocab
0 FRUIT
1 VEGETABLE
2 FISH
</code>
Define constants for the raw numbers of input features (tokens/words) and labels (post tags). Our actual input/output spaces are `NUM_OOV_BUCKETS = 1` larger because we add an OOV token / tag._____no_output_____
<code>
NUM_WORDS = len(WORD_VOCAB)
NUM_TAGS = len(TAG_VOCAB)
WORD_VOCAB_SIZE = NUM_WORDS + NUM_OOV_BUCKETS
TAG_VOCAB_SIZE = NUM_TAGS + NUM_OOV_BUCKETS_____no_output_____
</code>
Create batched versions of the datasets, and individual batches, which will be useful in testing code as we go._____no_output_____
<code>
batched_dataset1 = CLIENT1_DATASET.batch(2)
batched_dataset2 = CLIENT2_DATASET.batch(3)
batched_dataset3 = CLIENT3_DATASET.batch(2)
batch1 = next(iter(batched_dataset1))
batch2 = next(iter(batched_dataset2))
batch3 = next(iter(batched_dataset3))_____no_output_____
</code>
## Define a model with sparse inputs
We use a simple independent logistic regression model for each tag._____no_output_____
<code>
def create_logistic_model(word_vocab_size: int, vocab_tags_size: int):
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(word_vocab_size,), sparse=True),
tf.keras.layers.Dense(
vocab_tags_size,
activation='sigmoid',
kernel_initializer=tf.keras.initializers.zeros,
# For simplicity, don't use a bias vector; this means the model
# is a single tensor, and we only need sparse aggregation of
# the per-token slices of the model. Generalizing to also handle
# other model weights that are fully updated
# (non-dense broadcast and aggregate) would be a good exercise.
use_bias=False),
])
return model_____no_output_____
</code>
Let's make sure it works, first by making predictions:_____no_output_____
<code>
model = create_logistic_model(WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
p = model.predict(batch1.tokens)
print(p)[[0.5 0.5 0.5 0.5]
[0.5 0.5 0.5 0.5]]
</code>
And some simple centralized training:_____no_output_____
<code>
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.001),
loss=tf.keras.losses.BinaryCrossentropy())
model.train_on_batch(batch1.tokens, batch1.tags)_____no_output_____
</code>
# Building blocks for the federated computation
We will implement a simple version of the [Federated Averaging](https://arxiv.org/abs/1602.05629) algorithm with the key difference that each device only downloads a relevant subset of the model, and only contributes updates to that subset.
We use `M` as shorthand for `MAX_TOKENS_SELECTED_PER_CLIENT`. At a high level, one round of training involves these steps:
1. Each participating client scans over its local dataset, parsing the input strings and mapping them to the correct tokens (int indexes). This requires access to the global (large) dictionary (this could potentially be avoided using [feature hashing](https://en.wikipedia.org/wiki/Feature_hashing) techniques). We then sparsely count how many times each token occurs. If `U` unique tokens occur on device, we choose the `num_actual_tokens = min(U, M)` most frequent tokens to train.
1. The clients use `federated_select` to retrieve the model coefficients for the `num_actual_tokens` selected tokens from the server. Each model slice is a tensor of shape `(TAG_VOCAB_SIZE, )`, so the total data transmitted to the client is at most of size `TAG_VOCAB_SIZE * M` (see note below).
1. The clients construct a mapping `global_token -> local_token` where the local token (int index) is the index of the global token in the list of selected tokens.
1. The clients use a "small" version of the global model that only has coefficients for at most `M` tokens, from the range `[0, num_actual_tokens)`. The `global -> local` mapping is used to initialize the dense parameters of this model from the selected model slices.
1. Clients train their local model using SGD on data preprocessed with the `global -> local` mapping.
1. Clients turn the parameters of their local model into `IndexedSlices` updates using the `local -> global` mapping to index the rows. The server aggregates these updates using a sparse sum aggregation.
1. The server takes the (dense) result of the above aggregation, divides it by the number of clients participating, and applies the resulting average update to the global model.
In this section we construct the building blocks for these steps, which will then be combined in a final `federated_computation` that captures the full logic of one training round.
> NOTE: The above description hides one technical detail: Both `federated_select` and the construction of the local model require statically known shapes, and so we cannot use the dynamic per-client `num_actual_tokens` size. Instead, we use the static value `M`, adding padding where needed. This does not impact the semantics of the algorithm._____no_output_____### Count client tokens and decide which model slices to `federated_select`
Each device needs to decide which "slices" of the model are relevant to its local training dataset. For our problem, we do this by (sparsely!) counting how many examples contain each token in the client training data set.
_____no_output_____
<code>
@tf.function
def token_count_fn(token_counts, batch):
"""Adds counts from `batch` to the running `token_counts` sum."""
# Sum across the batch dimension.
flat_tokens = tf.sparse.reduce_sum(
batch.tokens, axis=0, output_is_sparse=True)
flat_tokens = tf.cast(flat_tokens, dtype=TOKEN_COUNT_DTYPE)
return tf.sparse.add(token_counts, flat_tokens)_____no_output_____# Simple tests
# Create the initial zero token counts using empty tensors.
initial_token_counts = tf.SparseTensor(
indices=tf.zeros(shape=(0, 1), dtype=TOKEN_DTYPE),
values=tf.zeros(shape=(0,), dtype=TOKEN_COUNT_DTYPE),
dense_shape=(WORD_VOCAB_SIZE,))
client_token_counts = batched_dataset1.reduce(initial_token_counts,
token_count_fn)
tokens = tf.reshape(client_token_counts.indices, (-1,)).numpy()
print('tokens:', tokens)
np.testing.assert_array_equal(tokens, [0, 1, 4, 8])
# The count is the number of *examples* in which the token/word
# occurs, not the total number of occurences, since we still featurize
# multiple occurences in the same example as a "1".
counts = client_token_counts.values.numpy()
print('counts:', counts)
np.testing.assert_array_equal(counts, [2, 3, 1, 1])tokens: [0 1 4 8]
counts: [2 3 1 1]
</code>
We will select the model parameters corresponding to the `MAX_TOKENS_SELECTED_PER_CLIENT` most frequently occurring tokens on device. If
fewer than this many tokens occur on device, we pad the list to enable the use
of `federated_select`.
Note that other strategies are possibly better, for example, randomly selecting tokens (perhaps based on their occurrence probability); a sketch of one such sampler is given below. This would ensure that all slices of the model (for which the client has data) have some chance of being updated._____no_output_____
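As an aside, here is a minimal sketch (not used elsewhere in this tutorial) of such a count-proportional sampler, implemented without replacement via the Gumbel-top-k trick; `sample_tokens_by_count` is a hypothetical helper, not part of the TFF API:
<code>
def sample_tokens_by_count(tokens, counts, k):
  """Samples up to `k` tokens without replacement, with probability
  proportional to `counts`, using the Gumbel-top-k trick."""
  logits = tf.math.log(tf.cast(counts, tf.float32))
  # Adding i.i.d. Gumbel noise to the log-counts and keeping the top-k
  # is equivalent to sampling without replacement proportional to counts.
  gumbel = -tf.math.log(-tf.math.log(
      tf.random.uniform(tf.shape(logits), minval=1e-9, maxval=1.0)))
  k = tf.minimum(k, tf.size(tokens))
  _, indices = tf.math.top_k(logits + gumbel, k=k)
  return tf.gather(tokens, indices)_____no_output_____
</code>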
<code>
@tf.function
def keys_for_client(client_dataset, max_tokens_per_client):
"""Computes a set of max_tokens_per_client keys."""
initial_token_counts = tf.SparseTensor(
indices=tf.zeros((0, 1), dtype=TOKEN_DTYPE),
values=tf.zeros((0,), dtype=TOKEN_COUNT_DTYPE),
dense_shape=(WORD_VOCAB_SIZE,))
client_token_counts = client_dataset.reduce(initial_token_counts,
token_count_fn)
# Find the most-frequently occuring tokens
tokens = tf.reshape(client_token_counts.indices, shape=(-1,))
counts = client_token_counts.values
perm = tf.argsort(counts, direction='DESCENDING')
tokens = tf.gather(tokens, perm)
counts = tf.gather(counts, perm)
num_raw_tokens = tf.shape(tokens)[0]
actual_num_tokens = tf.minimum(max_tokens_per_client, num_raw_tokens)
selected_tokens = tokens[:actual_num_tokens]
paddings = [[0, max_tokens_per_client - tf.shape(selected_tokens)[0]]]
padded_tokens = tf.pad(selected_tokens, paddings=paddings)
# Make sure the type is statically determined
padded_tokens = tf.reshape(padded_tokens, shape=(max_tokens_per_client,))
# We will pass these tokens as keys into `federated_select`, which
# requires SELECT_KEY_DTYPE=tf.int32 keys.
padded_tokens = tf.cast(padded_tokens, dtype=SELECT_KEY_DTYPE)
return padded_tokens, actual_num_tokens_____no_output_____# Simple test
# Case 1: actual_num_tokens > max_tokens_per_client
selected_tokens, actual_num_tokens = keys_for_client(batched_dataset1, 3)
assert tf.size(selected_tokens) == 3
assert actual_num_tokens == 3
# Case 2: actual_num_tokens < max_tokens_per_client
selected_tokens, actual_num_tokens = keys_for_client(batched_dataset1, 10)
assert tf.size(selected_tokens) == 10
assert actual_num_tokens == 4_____no_output_____
</code>
### Map global tokens to local tokens
The above selection gives us a dense set of tokens in the range `[0, actual_num_tokens)` which we will use for the on-device model. However, the dataset we read has tokens from the much larger global vocabulary range `[0, WORD_VOCAB_SIZE)`.
Thus, we need to map the global tokens to their corresponding local tokens. The
local token ids are simply given by the indexes into the `selected_tokens` tensor computed in the previous step._____no_output_____
<code>
@tf.function
def map_to_local_token_ids(client_data, client_keys):
global_to_local = tf.lookup.StaticHashTable(
# Note int32 -> int64 maps are not supported
tf.lookup.KeyValueTensorInitializer(
keys=tf.cast(client_keys, dtype=TOKEN_DTYPE),
# Note we need to use tf.shape, not the static
# shape client_keys.shape[0]
values=tf.range(0, limit=tf.shape(client_keys)[0],
dtype=TOKEN_DTYPE)),
# We use -1 for tokens that were not selected, which can occur for clients
# with more than MAX_TOKENS_SELECTED_PER_CLIENT distinct tokens.
# We will simply remove these invalid indices from the batch below.
default_value=-1)
def to_local_ids(sparse_tokens):
indices_t = tf.transpose(sparse_tokens.indices)
batch_indices = indices_t[0] # First column
tokens = indices_t[1] # Second column
tokens = tf.map_fn(
lambda global_token_id: global_to_local.lookup(global_token_id), tokens)
# Remove tokens that aren't actually available (looked up as -1):
available_tokens = tokens >= 0
tokens = tokens[available_tokens]
batch_indices = batch_indices[available_tokens]
updated_indices = tf.transpose(
tf.concat([[batch_indices], [tokens]], axis=0))
st = tf.sparse.SparseTensor(
updated_indices,
tf.ones(tf.size(tokens), dtype=FEATURE_DTYPE),
dense_shape=sparse_tokens.dense_shape)
st = tf.sparse.reorder(st)
return st
return client_data.map(lambda b: BatchType(to_local_ids(b.tokens), b.tags))_____no_output_____# Simple test
client_keys, actual_num_tokens = keys_for_client(
batched_dataset3, MAX_TOKENS_SELECTED_PER_CLIENT)
client_keys = client_keys[:actual_num_tokens]
d = map_to_local_token_ids(batched_dataset3, client_keys)
batch = next(iter(d))
all_tokens = tf.gather(batch.tokens.indices, indices=1, axis=1)
# Confirm we have local indices in the range [0, MAX):
assert tf.math.reduce_max(all_tokens) < MAX_TOKENS_SELECTED_PER_CLIENT
assert tf.math.reduce_max(all_tokens) >= 0_____no_output_____
</code>
### Train the local (sub)model on each client
Note `federated_select` will return the selected slices as a `tf.data.Dataset` in the same order as the selection keys. So, we first define a utility function to take such a Dataset and convert it to a single dense tensor which can be used as the model weights of the client model._____no_output_____
<code>
@tf.function
def slices_dataset_to_tensor(slices_dataset):
"""Convert a dataset of slices to a tensor."""
# Use batching to gather all of the slices into a single tensor.
d = slices_dataset.batch(MAX_TOKENS_SELECTED_PER_CLIENT,
drop_remainder=False)
iter_d = iter(d)
tensor = next(iter_d)
# Make sure we have consumed everything
opt = iter_d.get_next_as_optional()
tf.Assert(tf.logical_not(opt.has_value()), data=[''], name='CHECK_EMPTY')
return tensor_____no_output_____# Simple test
weights = np.random.random(
size=(MAX_TOKENS_SELECTED_PER_CLIENT, TAG_VOCAB_SIZE)).astype(np.float32)
model_slices_as_dataset = tf.data.Dataset.from_tensor_slices(weights)
weights2 = slices_dataset_to_tensor(model_slices_as_dataset)
np.testing.assert_array_equal(weights, weights2)_____no_output_____
</code>
We now have all the components we need to define a simple local training loop which will run on each client._____no_output_____
<code>
@tf.function
def client_train_fn(model, client_optimizer,
model_slices_as_dataset, client_data,
client_keys, actual_num_tokens):
initial_model_weights = slices_dataset_to_tensor(model_slices_as_dataset)
assert len(model.trainable_variables) == 1
model.trainable_variables[0].assign(initial_model_weights)
# Only keep the "real" (unpadded) keys.
client_keys = client_keys[:actual_num_tokens]
client_data = map_to_local_token_ids(client_data, client_keys)
loss_fn = tf.keras.losses.BinaryCrossentropy()
for features, labels in client_data:
with tf.GradientTape() as tape:
predictions = model(features)
loss = loss_fn(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
client_optimizer.apply_gradients(zip(grads, model.trainable_variables))
model_weights_delta = model.trainable_weights[0] - initial_model_weights
model_weights_delta = tf.slice(model_weights_delta, begin=[0, 0],
size=[actual_num_tokens, -1])
return client_keys, model_weights_delta_____no_output_____# Simple test
# Note if you execute this cell a second time, you need to also re-execute
# the preceding cell to avoid "tf.function-decorated function tried to
# create variables on non-first call" errors.
on_device_model = create_logistic_model(MAX_TOKENS_SELECTED_PER_CLIENT,
TAG_VOCAB_SIZE)
client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
client_keys, actual_num_tokens = keys_for_client(
batched_dataset2, MAX_TOKENS_SELECTED_PER_CLIENT)
model_slices_as_dataset = tf.data.Dataset.from_tensor_slices(
np.zeros((MAX_TOKENS_SELECTED_PER_CLIENT, TAG_VOCAB_SIZE),
dtype=np.float32))
keys, delta = client_train_fn(
on_device_model,
client_optimizer,
model_slices_as_dataset,
client_data=batched_dataset3,
client_keys=client_keys,
actual_num_tokens=actual_num_tokens)
print(delta)_____no_output_____
</code>
### Aggregate IndexedSlices
We use `tff.federated_aggregate` to construct a federated sparse sum for `IndexedSlices`. This simple implementation has the constraint that the
`dense_shape` is known statically in advance. Note also that this sum is only *semi-sparse*, in the sense that the client -> server communication is sparse, but the server maintains a dense representation of the sum in `accumulate` and `merge`, and outputs this dense representation.
_____no_output_____
<code>
def federated_indexed_slices_sum(slice_indices, slice_values, dense_shape):
"""
Sums IndexedSlices@CLIENTS to a dense @SERVER Tensor.
Intermediate aggregation is performed by converting to a dense representation,
which may not be suitable for all applications.
Args:
slice_indices: An IndexedSlices.indices tensor @CLIENTS.
slice_values: An IndexedSlices.values tensor @CLIENTS.
dense_shape: A statically known dense shape.
Returns:
A dense tensor placed @SERVER representing the sum of the clients'
IndexedSlices.
"""
slices_dtype = slice_values.type_signature.member.dtype
zero = tff.tf_computation(
lambda: tf.zeros(dense_shape, dtype=slices_dtype))()
@tf.function
def accumulate_slices(dense, client_value):
indices, slices = client_value
# There is no built-in way to add `IndexedSlices`, but
# tf.convert_to_tensor is a quick way to convert to a dense representation
# so we can add them.
return dense + tf.convert_to_tensor(
tf.IndexedSlices(slices, indices, dense_shape))
return tff.federated_aggregate(
(slice_indices, slice_values),
zero=zero,
accumulate=tff.tf_computation(accumulate_slices),
merge=tff.tf_computation(lambda d1, d2: tf.add(d1, d2, name='merge')),
report=tff.tf_computation(lambda d: d))
_____no_output_____
</code>
Construct a minimal `federated_computation` as a test_____no_output_____
<code>
dense_shape = (6, 2)
indices_type = tff.TensorType(tf.int64, (None,))
values_type = tff.TensorType(tf.float32, (None, 2))
client_slice_type = tff.type_at_clients(
(indices_type, values_type))
@tff.federated_computation(client_slice_type)
def test_sum_indexed_slices(indices_values_at_client):
indices, values = indices_values_at_client
return federated_indexed_slices_sum(indices, values, dense_shape)
print(test_sum_indexed_slices.type_signature)
({<int64[?],float32[?,2]>}@CLIENTS -> float32[6,2]@SERVER)
x = tf.IndexedSlices(
values=np.array([[2., 2.1], [0., 0.1], [1., 1.1], [5., 5.1]],
dtype=np.float32),
indices=[2, 0, 1, 5],
dense_shape=dense_shape)
y = tf.IndexedSlices(
values=np.array([[0., 0.3], [3.1, 3.2]], dtype=np.float32),
indices=[1, 3],
dense_shape=dense_shape)
# Sum one.
result = test_sum_indexed_slices([(x.indices, x.values)])
np.testing.assert_array_equal(tf.convert_to_tensor(x), result)
# Sum two.
expected = [[0., 0.1], [1., 1.4], [2., 2.1], [3.1, 3.2], [0., 0.], [5., 5.1]]
result = test_sum_indexed_slices([(x.indices, x.values), (y.indices, y.values)])
np.testing.assert_array_almost_equal(expected, result)_____no_output_____
</code>
# Putting it all together in a `federated_computation`
We now use TFF to bind the components together into a `tff.federated_computation`.
_____no_output_____
<code>
DENSE_MODEL_SHAPE = (WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
client_data_type = tff.SequenceType(batched_dataset1.element_spec)
model_type = tff.TensorType(tf.float32, shape=DENSE_MODEL_SHAPE)_____no_output_____
</code>
We use a basic server training function based on Federated Averaging, applying the update with a server learning rate of 1.0. It is important that we apply an update (delta) to the model, rather than simply averaging client-supplied models: otherwise, if a given slice of the model was not trained by any client in a given round, its coefficients could be zeroed out. A toy numeric illustration follows the code below._____no_output_____
<code>
@tff.tf_computation
def server_update(current_model_weights, update_sum, num_clients):
average_update = update_sum / num_clients
return current_model_weights + average_update_____no_output_____
</code>
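As a toy illustration of the point above (plain TensorFlow, not part of the tutorial's pipeline): with a delta-based update, rows that no client touched keep their previous values.

```python
import tensorflow as tf

# Toy 4x2 model; suppose the single participating client only trained row 1.
current_model = tf.constant([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
update_sum = tf.constant([[0., 0.], [0.5, 0.5], [0., 0.], [0., 0.]])
num_clients = 1.0

# Delta-based server update: rows 0, 2 and 3 are unchanged.
print(current_model + update_sum / num_clients)
# Naively averaging full client models instead would pull the untouched
# rows toward the clients' local (stale or zero-initialized) values.
```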
We need a couple more `tff.tf_computation` components:_____no_output_____
<code>
# Function to select slices from the model weights in federated_select:
select_fn = tff.tf_computation(
lambda model_weights, index: tf.gather(model_weights, index))
# We need to wrap `client_train_fn` as a `tff.tf_computation`, making
# sure we do any operations that might construct `tf.Variable`s outside
# of the `tf.function` we are wrapping.
@tff.tf_computation
def client_train_fn_tff(model_slices_as_dataset, client_data, client_keys,
actual_num_tokens):
  # Note this is smaller than the global model, using
  # MAX_TOKENS_SELECTED_PER_CLIENT, which is much smaller than WORD_VOCAB_SIZE.
  # We would like a model of size `actual_num_tokens`, but we
# can't build the model dynamically, so we will slice off the padded
# weights at the end.
client_model = create_logistic_model(MAX_TOKENS_SELECTED_PER_CLIENT,
TAG_VOCAB_SIZE)
client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
return client_train_fn(client_model, client_optimizer,
model_slices_as_dataset, client_data, client_keys,
actual_num_tokens)
@tff.tf_computation
def keys_for_client_tff(client_data):
return keys_for_client(client_data, MAX_TOKENS_SELECTED_PER_CLIENT)_____no_output_____
</code>
We're now ready to put all the pieces together!_____no_output_____
<code>
@tff.federated_computation(
tff.type_at_server(model_type), tff.type_at_clients(client_data_type))
def sparse_model_update(server_model, client_data):
max_tokens = tff.federated_value(MAX_TOKENS_SELECTED_PER_CLIENT, tff.SERVER)
keys_at_clients, actual_num_tokens = tff.federated_map(
keys_for_client_tff, client_data)
model_slices = tff.federated_select(keys_at_clients, max_tokens, server_model,
select_fn)
update_keys, update_slices = tff.federated_map(
client_train_fn_tff,
(model_slices, client_data, keys_at_clients, actual_num_tokens))
dense_update_sum = federated_indexed_slices_sum(update_keys, update_slices,
DENSE_MODEL_SHAPE)
num_clients = tff.federated_sum(tff.federated_value(1.0, tff.CLIENTS))
updated_server_model = tff.federated_map(
server_update, (server_model, dense_update_sum, num_clients))
return updated_server_model
print(sparse_model_update.type_signature)
(<server_model=float32[13,4]@SERVER,client_data={<tokens=<indices=int64[?,2],values=int32[?],dense_shape=int64[2]>,tags=float32[?,4]>*}@CLIENTS> -> float32[13,4]@SERVER)
</code>
# Let's train a model!
Now that we have our training function, let's try it out._____no_output_____
<code>
server_model = create_logistic_model(WORD_VOCAB_SIZE, TAG_VOCAB_SIZE)
server_model.compile( # Compile to make evaluation easy.
optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0), # Unused
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.AUC(name='auc'),
tf.keras.metrics.Recall(top_k=2, name='recall_at_2'),
])
def evaluate(model, dataset, name):
metrics = model.evaluate(dataset, verbose=0)
metrics_str = ', '.join([f'{k}={v:.2f}' for k, v in
(zip(server_model.metrics_names, metrics))])
print(f'{name}: {metrics_str}')_____no_output_____print('Before training')
evaluate(server_model, batched_dataset1, 'Client 1')
evaluate(server_model, batched_dataset2, 'Client 2')
evaluate(server_model, batched_dataset3, 'Client 3')
model_weights = server_model.trainable_weights[0]
client_datasets = [batched_dataset1, batched_dataset2, batched_dataset3]
for _ in range(10): # Run 10 rounds of FedAvg
# We train on 1, 2, or 3 clients per round, selecting
# randomly.
cohort_size = np.random.randint(1, 4)
clients = np.random.choice([0, 1, 2], cohort_size, replace=False)
print('Training on clients', clients)
model_weights = sparse_model_update(
model_weights, [client_datasets[i] for i in clients])
server_model.set_weights([model_weights])
print('After training')
evaluate(server_model, batched_dataset1, 'Client 1')
evaluate(server_model, batched_dataset2, 'Client 2')
evaluate(server_model, batched_dataset3, 'Client 3')
Before training
Client 1: loss=0.69, precision=0.00, auc=0.50, recall_at_2=0.60
Client 2: loss=0.69, precision=0.00, auc=0.50, recall_at_2=0.50
Client 3: loss=0.69, precision=0.00, auc=0.50, recall_at_2=0.40
Training on clients [0 1]
Training on clients [0 2 1]
Training on clients [2 0]
Training on clients [1 0 2]
Training on clients [2]
Training on clients [2 0]
Training on clients [1 2 0]
Training on clients [0]
Training on clients [2]
Training on clients [1 2]
After training
Client 1: loss=0.67, precision=0.80, auc=0.91, recall_at_2=0.80
Client 2: loss=0.68, precision=0.67, auc=0.96, recall_at_2=1.00
Client 3: loss=0.65, precision=1.00, auc=0.93, recall_at_2=0.80
</code>
| {
"repository": "teo-milea/federated",
"path": "docs/tutorials/sparse_federated_learning.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": null,
"size": 54798,
"hexsha": "cb774327104690ece7a78874f486585c0f31efe3",
"max_line_length": 498,
"avg_line_length": 40.2926470588,
"alphanum_fraction": 0.5560239425
} |
# Notebook from volfi/CVND---Image-Captioning-Project
Path: 2_Training.ipynb
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model_____no_output_____<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** The encoder, which consists of a CNN (transfer learning was used here, as the resnet50 weights were loaded), was already given. I kept the decoder RNN simple to start with: it consists of one layer with 512 features. I used a standard value for the batch size (128) and set the vocab threshold to 4 in order to drop very uncommon words while still keeping a large enough vocabulary (about 10000 words) to describe very specific pictures. I set the embedding size to 512, which worked fine in my case but could probably be set lower and still yield good results.
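For illustration only (the graded implementation belongs in **model.py**; this is our sketch of a matching architecture, not the author's actual code), a one-layer LSTM decoder along these lines could look like:

```python
import torch
import torch.nn as nn

class DecoderRNNSketch(nn.Module):
    """Hypothetical one-layer LSTM decoder: embeds captions, prepends the
    CNN feature vector as the first time step, and scores the vocabulary."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Drop the <end> token so outputs align with the target captions.
        embeddings = self.embed(captions[:, :-1])
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)
```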
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** I left the transform unchanged because I think it already produces very good data augmentation. Random parts of the pictures are cropped and flipped horizontally with a 50-50 chance. It is always good to introduce this kind of randomness in your data to prevent overfitting.
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** I decided - as described above - to make all weights in the decoder trainable and only train the weights in the embedding layer of the encoder. Both hidden-size and embedding-size were set to 512, which I did not change during the different tests. I experimented more with vocab-threshold and learning rate.
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I selected the adam optimizer which is usually a good one to start with because it adjusts learning rate and momentum for each parameter individually._____no_output_____
<code>
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
import nltk
nltk.download('punkt')
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 128 # batch size
vocab_threshold = 4 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 10 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
print(vocab_size)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
#optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)
optimizer = torch.optim.Adam(params, lr=0.001, weight_decay=0)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=1.25s)
creating index...
</code>
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset._____no_output_____
<code>
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d-bs128-voc4_lr001_bftrue.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d-bs128-voc4_lr001_bftrue.pkl' % epoch))
# Close the training log file.
f.close()
Epoch [1/10], Step [100/3236], Loss: 3.5249, Perplexity: 33.9510
Epoch [1/10], Step [200/3236], Loss: 3.2077, Perplexity: 24.72255
Epoch [1/10], Step [300/3236], Loss: 3.0764, Perplexity: 21.6796
Epoch [1/10], Step [400/3236], Loss: 3.0551, Perplexity: 21.2224
Epoch [1/10], Step [500/3236], Loss: 2.9446, Perplexity: 19.0025
Epoch [1/10], Step [600/3236], Loss: 3.5423, Perplexity: 34.5453
Epoch [1/10], Step [700/3236], Loss: 3.1733, Perplexity: 23.8861
Epoch [1/10], Step [800/3236], Loss: 2.7361, Perplexity: 15.42696
Epoch [1/10], Step [900/3236], Loss: 2.4469, Perplexity: 11.5523
Epoch [1/10], Step [1000/3236], Loss: 2.6209, Perplexity: 13.7478
Epoch [1/10], Step [1100/3236], Loss: 2.4891, Perplexity: 12.0506
Epoch [1/10], Step [1200/3236], Loss: 2.6268, Perplexity: 13.8300
Epoch [1/10], Step [1300/3236], Loss: 2.4437, Perplexity: 11.5152
Epoch [1/10], Step [1400/3236], Loss: 2.3978, Perplexity: 10.9989
Epoch [1/10], Step [1500/3236], Loss: 2.5592, Perplexity: 12.9253
Epoch [1/10], Step [1600/3236], Loss: 2.5500, Perplexity: 12.8072
Epoch [1/10], Step [1700/3236], Loss: 2.3900, Perplexity: 10.9134
Epoch [1/10], Step [1800/3236], Loss: 2.3653, Perplexity: 10.6472
Epoch [1/10], Step [1900/3236], Loss: 2.3138, Perplexity: 10.1132
Epoch [1/10], Step [2000/3236], Loss: 2.5513, Perplexity: 12.8237
Epoch [1/10], Step [2100/3236], Loss: 2.2804, Perplexity: 9.78094
Epoch [1/10], Step [2200/3236], Loss: 2.2440, Perplexity: 9.43128
Epoch [1/10], Step [2300/3236], Loss: 2.3122, Perplexity: 10.0962
Epoch [1/10], Step [2400/3236], Loss: 2.4054, Perplexity: 11.0830
Epoch [1/10], Step [2500/3236], Loss: 2.1619, Perplexity: 8.68749
Epoch [1/10], Step [2600/3236], Loss: 2.1975, Perplexity: 9.00210
Epoch [1/10], Step [2700/3236], Loss: 2.3298, Perplexity: 10.2759
Epoch [1/10], Step [2800/3236], Loss: 2.2121, Perplexity: 9.13441
Epoch [1/10], Step [2900/3236], Loss: 2.1592, Perplexity: 8.66446
Epoch [1/10], Step [3000/3236], Loss: 2.2311, Perplexity: 9.31012
Epoch [1/10], Step [3100/3236], Loss: 2.1274, Perplexity: 8.39332
Epoch [1/10], Step [3200/3236], Loss: 2.1806, Perplexity: 8.85151
Epoch [2/10], Step [100/3236], Loss: 2.3409, Perplexity: 10.39070
Epoch [2/10], Step [200/3236], Loss: 2.2948, Perplexity: 9.92227
Epoch [2/10], Step [300/3236], Loss: 2.7835, Perplexity: 16.1760
Epoch [2/10], Step [400/3236], Loss: 2.3715, Perplexity: 10.7129
Epoch [2/10], Step [500/3236], Loss: 2.3021, Perplexity: 9.99476
Epoch [2/10], Step [600/3236], Loss: 2.1122, Perplexity: 8.26676
Epoch [2/10], Step [700/3236], Loss: 2.1197, Perplexity: 8.32835
Epoch [2/10], Step [800/3236], Loss: 2.0020, Perplexity: 7.40374
Epoch [2/10], Step [900/3236], Loss: 2.0606, Perplexity: 7.85075
Epoch [2/10], Step [1000/3236], Loss: 2.1091, Perplexity: 8.2410
Epoch [2/10], Step [1100/3236], Loss: 2.3499, Perplexity: 10.4846
Epoch [2/10], Step [1200/3236], Loss: 2.1320, Perplexity: 8.43157
Epoch [2/10], Step [1300/3236], Loss: 2.0885, Perplexity: 8.07287
Epoch [2/10], Step [1400/3236], Loss: 3.2426, Perplexity: 25.5999
Epoch [2/10], Step [1500/3236], Loss: 2.0226, Perplexity: 7.55827
Epoch [2/10], Step [1600/3236], Loss: 2.0741, Perplexity: 7.95726
Epoch [2/10], Step [1700/3236], Loss: 2.0529, Perplexity: 7.79064
Epoch [2/10], Step [1800/3236], Loss: 2.1569, Perplexity: 8.64391
Epoch [2/10], Step [1900/3236], Loss: 2.1514, Perplexity: 8.59655
Epoch [2/10], Step [2000/3236], Loss: 1.9809, Perplexity: 7.24951
Epoch [2/10], Step [2100/3236], Loss: 2.8798, Perplexity: 17.8111
Epoch [2/10], Step [2200/3236], Loss: 1.9769, Perplexity: 7.22021
Epoch [2/10], Step [2300/3236], Loss: 1.9406, Perplexity: 6.96276
Epoch [2/10], Step [2400/3236], Loss: 2.0572, Perplexity: 7.82436
Epoch [2/10], Step [2500/3236], Loss: 2.0030, Perplexity: 7.41145
Epoch [2/10], Step [2600/3236], Loss: 2.1025, Perplexity: 8.18675
Epoch [2/10], Step [2700/3236], Loss: 1.9915, Perplexity: 7.32642
Epoch [2/10], Step [2800/3236], Loss: 2.0553, Perplexity: 7.80948
Epoch [2/10], Step [2900/3236], Loss: 1.9969, Perplexity: 7.36627
Epoch [2/10], Step [3000/3236], Loss: 2.0922, Perplexity: 8.10312
Epoch [2/10], Step [3100/3236], Loss: 2.4134, Perplexity: 11.1719
Epoch [2/10], Step [3200/3236], Loss: 1.9474, Perplexity: 7.01012
Epoch [3/10], Step [100/3236], Loss: 1.9895, Perplexity: 7.312221
Epoch [3/10], Step [200/3236], Loss: 2.0218, Perplexity: 7.55207
Epoch [3/10], Step [300/3236], Loss: 2.2966, Perplexity: 9.94084
Epoch [3/10], Step [400/3236], Loss: 1.9364, Perplexity: 6.93401
Epoch [3/10], Step [500/3236], Loss: 1.9944, Perplexity: 7.34817
Epoch [3/10], Step [600/3236], Loss: 1.8719, Perplexity: 6.50051
Epoch [3/10], Step [700/3236], Loss: 1.9498, Perplexity: 7.02769
Epoch [3/10], Step [800/3236], Loss: 2.1352, Perplexity: 8.45898
Epoch [3/10], Step [900/3236], Loss: 1.8685, Perplexity: 6.47858
Epoch [3/10], Step [1000/3236], Loss: 1.8063, Perplexity: 6.0880
Epoch [3/10], Step [1100/3236], Loss: 2.1233, Perplexity: 8.35849
Epoch [3/10], Step [1200/3236], Loss: 2.1820, Perplexity: 8.86422
Epoch [3/10], Step [1300/3236], Loss: 2.0544, Perplexity: 7.80245
Epoch [3/10], Step [1400/3236], Loss: 2.6476, Perplexity: 14.1201
Epoch [3/10], Step [1500/3236], Loss: 1.9202, Perplexity: 6.82248
Epoch [3/10], Step [1600/3236], Loss: 1.9066, Perplexity: 6.73010
Epoch [3/10], Step [1700/3236], Loss: 1.9248, Perplexity: 6.85394
Epoch [3/10], Step [1800/3236], Loss: 3.6730, Perplexity: 39.3686
Epoch [3/10], Step [1900/3236], Loss: 2.0762, Perplexity: 7.97375
Epoch [3/10], Step [2000/3236], Loss: 1.9610, Perplexity: 7.10617
Epoch [3/10], Step [2100/3236], Loss: 1.9734, Perplexity: 7.19540
Epoch [3/10], Step [2200/3236], Loss: 2.0315, Perplexity: 7.62530
Epoch [3/10], Step [2300/3236], Loss: 2.0791, Perplexity: 7.99746
Epoch [3/10], Step [2400/3236], Loss: 2.5655, Perplexity: 13.0073
Epoch [3/10], Step [2500/3236], Loss: 1.7970, Perplexity: 6.03158
Epoch [3/10], Step [2600/3236], Loss: 1.8940, Perplexity: 6.64592
Epoch [3/10], Step [2700/3236], Loss: 1.8039, Perplexity: 6.07329
Epoch [3/10], Step [2800/3236], Loss: 2.0745, Perplexity: 7.96065
Epoch [3/10], Step [2900/3236], Loss: 1.8706, Perplexity: 6.49237
Epoch [3/10], Step [3000/3236], Loss: 1.9789, Perplexity: 7.23487
Epoch [3/10], Step [3100/3236], Loss: 2.5673, Perplexity: 13.0306
Epoch [3/10], Step [3200/3236], Loss: 2.0277, Perplexity: 7.59685
Epoch [4/10], Step [100/3236], Loss: 1.9070, Perplexity: 6.73283
Epoch [4/10], Step [200/3236], Loss: 2.2594, Perplexity: 9.57697
Epoch [4/10], Step [300/3236], Loss: 2.1634, Perplexity: 8.70108
Epoch [4/10], Step [372/3236], Loss: 1.9685, Perplexity: 7.15973
</code>
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset._____no_output_____
<code>
# (Optional) TODO: Validate your model._____no_output_____
</code>
| {
"repository": "volfi/CVND---Image-Captioning-Project",
"path": "2_Training.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 29367,
"hexsha": "cb784212e878861ec170cd3f01f3472ca7bc9ccd",
"max_line_length": 734,
"avg_line_length": 60.801242236,
"alphanum_fraction": 0.6270303402
} |
# Notebook from 0todd0000/fdr1d
Path: Appendix/ipynb/AppendixE.ipynb
# Appendix E: Validation of FDR’s control of false positive node proportion
This appendix contains RFT and FDR results (Fig. E1) from six experimental datasets and a total of eight different analyses (Table E1) that were conducted but were not included in the main manuscript. The datasets represent a variety of biomechanical modalities, experimental designs and tasks._____no_output________
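For readers who want to reproduce this kind of FDR threshold on their own data, below is a minimal sketch (our illustration, not code from this repository) of a Benjamini-Hochberg threshold for a 1D t field:

```python
import numpy as np
from scipy import stats

def bh_fdr_threshold(t_field, df, q=0.05):
    """Benjamini-Hochberg critical t value for a 1D t field (two-tailed).
    Returns np.inf if no node survives at rate q."""
    p = 2 * stats.t.sf(np.abs(t_field), df)   # node-wise two-tailed p-values
    p_sorted = np.sort(p)
    m = p.size
    crit = np.arange(1, m + 1) / m * q        # BH comparison line: (i/m) * q
    passed = p_sorted <= crit
    if not passed.any():
        return np.inf
    p_star = p_sorted[passed].max()           # largest p-value under the line
    return stats.t.isf(p_star / 2, df)        # convert back to a t threshold
```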
**Table E1**. Experimental datasets and analyses. J and Q are the sample size and number of time nodes, respectively. GRF = ground reaction force. EMG = electromyography.
| Dataset | Source | J | Q | Model | Task | Variables |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| A | Caravaggi et al., 2010 | 10 | 101 | Paired t-test | Walking | Plantar arch deformation |
| B | Dorn, Schache & Pandy, 2012 | 7 | 100 | Linear regression | Running/ sprinting | GRF |
| C | Pataky et al., 2008 | 59 | 101 | Linear regression | Walking | GRF |
| D | Neptune, Wright & Van Den Bogert, 1999 | 15 | 101 | Two sample t-test | Cutting movement | Kinematics, EMG |
| E | Pataky et al., 2014 | 10 | 101 | Paired t-test | Walking | Center of pressure |
| F | Caravaggi et al., 2010 | 19 | 101 | Two sample t-test | Walking | Plantar arch deformation |
| G | Pataky et al., 2008 | 20 | 101 | One sample t-test | Walking | GRF |
| H | Besier et al., 2009 | 40 | 100 | Two sample t-test | Walking, running | GRF, muscle forces |_____no_output________
| | |
|------|------|
| <img src="./figs/A.png" alt="FigA" width="300"/> | <img src="./figs/B.png" alt="FigB" width="300"/> |
| <img src="./figs/C.png" alt="FigC" width="300"/> | <img src="./figs/D.png" alt="FigD" width="300"/> |
| <img src="./figs/E.png" alt="FigE" width="300"/> | <img src="./figs/F.png" alt="FigF" width="300"/> |
| <img src="./figs/G.png" alt="FigG" width="300"/> | <img src="./figs/H.png" alt="FigH" width="300"/> |
**Figure E1**. Results from six datasets depicting two thresholds: false discovery rate (FDR) and random field theory (RFT). The null hypothesis is rejected if the t value traverses a threshold._____no_output_____## References
1. Besier TF, Fredericson M, Gold GE, Beaupré GS, Delp SL. 2009. Knee muscle forces during walking and running in patellofemoral pain patients and pain-free controls. Journal of Biomechanics 42:898–905. DOI: 10.1016/j.jbiomech.2009.01.032.
1. Caravaggi P, Pataky T, Günther M, Savage R, Crompton R. 2010. Dynamics of longitudinal arch support in relation to walking speed: Contribution of the plantar aponeurosis. Journal of Anatomy 217:254–261. DOI: 10.1111/j.1469-7580.2010.01261.x.
1. Dorn TW, Schache AG, Pandy MG. 2012. Muscular strategy shift in human running: dependence of running speed on hip and ankle muscle performance. Journal of Experimental Biology 215:1944–1956. DOI: 10.1242/jeb.064527.
1. Neptune RR, Wright IC, Van Den Bogert AJ. 1999. Muscle coordination and function during cutting movements. Medicine and Science in Sports and Exercise 31:294–302. DOI: 10.1097/00005768-199902000-00014.
1. Pataky TC, Caravaggi P, Savage R, Parker D, Goulermas JY, Sellers WI, Crompton RH. 2008. New insights into the plantar pressure correlates of walking speed using pedobarographic statistical parametric mapping (pSPM). Journal of Biomechanics 41:1987–1994. DOI: 10.1016/j.jbiomech.2008.03.034.
1. Pataky TC, Robinson MA, Vanrenterghem J, Savage R, Bates KT, Crompton RH. 2014. Vector field statistics for objective center-of-pressure trajectory analysis during gait, with evidence of scalar sensitivity to small coordinate system rotations. Gait and Posture 40:255–258. DOI: 10.1016/j.gaitpost.2014.01.023._____no_output_____
| {
"repository": "0todd0000/fdr1d",
"path": "Appendix/ipynb/AppendixE.ipynb",
"matched_keywords": [
"biology"
],
"stars": null,
"size": 4829,
"hexsha": "cb7842d14ce86b6fc130c948dfa1033b904ed23b",
"max_line_length": 318,
"avg_line_length": 49.7835051546,
"alphanum_fraction": 0.6156554152
} |
# Notebook from davemcg/scEiaD
Path: colab/cell_type_ML_labelling.ipynb
<a href="https://colab.research.google.com/github/davemcg/scEiaD/blob/master/colab/cell_type_ML_labelling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Auto Label Retinal Cell Types
## tldr
You can take your (retina) scRNA data and fairly quickly use the scEiaD ML model
to auto label your cell types. I say fairly quickly because it is *best* if you re-quantify your data with the same reference and counter (kallisto) that we use. You *could* try using your counts from cellranger/whatever....but uh...stuff might get weird.
_____no_output_____# Install scvi and kallisto-bustools_____no_output_____
<code>
import sys
import re
#if True, will install via pypi, else will install from source
stable = True
IN_COLAB = "google.colab" in sys.modules
if IN_COLAB and stable:
!pip install --quiet scvi-tools[tutorials]==0.9.0
#!pip install --quiet python==3.8 pandas numpy scikit-learn xgboost==1.3
!pip install --quiet kb-python
(pip install progress output trimmed; wheels were built for loompy, future, PyYAML, sinfo, umap-learn, numpy-groupies and pynndescent)
!pip install --quiet pandas numpy scikit-learn xgboost==1.3.1
(pip install progress output trimmed)
</code>
# Download our kallisto index
As our example set is mouse, we use the Gencode vM25 transcript reference.
The script that makes the idx and t2g file is [here](https://github.com/davemcg/scEiaD/raw/c3a9dd09a1a159b1f489065a3f23a753f35b83c9/src/build_idx_and_t2g_for_colab.sh). This is precomputed as it takes about 30 minutes and 32GB of memory.
There's one more wrinkle worth noting: as scEiaD was built across human, mouse, and macaque, unified gene names are required. We chose to use the *human* ensembl ID (e.g. CRX is ENSG00000105392) as the base gene naming system.
(Download links):
```
# Mouse
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv
# Human
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.v35.transcripts.idx
https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/v35.tr2gX.tsv
```
_____no_output_____
<code>
%%time
!wget -O idx.idx https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx
!wget -O t2g.txt https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv--2021-04-29 12:05:21-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/gencode.vM25.transcripts.idx
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2662625893 (2.5G) [application/octet-stream]
Saving to: ‘idx.idx’
idx.idx 100%[===================>] 2.48G 36.3MB/s in 35s
2021-04-29 12:05:58 (72.8 MB/s) - ‘idx.idx’ saved [2662625893/2662625893]
--2021-04-29 12:05:58-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/vM25.tr2gX.humanized.tsv
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22502749 (21M) [application/octet-stream]
Saving to: ‘t2g.txt’
t2g.txt 100%[===================>] 21.46M 90.2MB/s in 0.2s
2021-04-29 12:06:00 (90.2 MB/s) - ‘t2g.txt’ saved [22502749/22502749]
CPU times: user 377 ms, sys: 71.2 ms, total: 448 ms
Wall time: 39.2 s
</code>
# Quantify with kbtools (Kallisto - Bustools wrapper) in one easy step.
Going into the vagaries of turning a SRA deposit into a non-borked pair of fastq files is beyond the scope of this document. Plus I would swear a lot. So we just give an example set from a Human organoid retina 10x (version 2) experiment.
The Pachter Lab has a discussion of how/where to get public data here: https://colab.research.google.com/github/pachterlab/kallistobustools/blob/master/notebooks/data_download.ipynb
If you have your own 10X bam file, then 10X provides a very nice and simple tool to turn it into fastq file here: https://github.com/10XGenomics/bamtofastq
To reduce run-time we have taken the first five million reads from this fastq pair.
This will take ~3 minutes, depending on the internet speed between Google and our server.
You can also directly stream the file to improve wall-time, but I was getting periodic errors, so we are doing the simpler thing and downloading each fastq file here first.
_____no_output_____
<code>
%%time
!wget -O sample_1.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_1.head.fastq.gz
!wget -O sample_2.fastq.gz https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_2.head.fastq.gz
!kb count --overwrite --h5ad -i idx.idx -g t2g.txt -x DropSeq -o output --filter bustools -t 2 \
sample_1.fastq.gz \
sample_2.fastq.gz--2021-04-29 12:06:31-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_1.head.fastq.gz
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 103529059 (99M) [application/octet-stream]
Saving to: ‘sample_1.fastq.gz’
sample_1.fastq.gz 100%[===================>] 98.73M 90.3MB/s in 1.1s
2021-04-29 12:06:33 (90.3 MB/s) - ‘sample_1.fastq.gz’ saved [103529059/103529059]
--2021-04-29 12:06:33-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/colab/SRR11799731_2.head.fastq.gz
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 245302496 (234M) [application/octet-stream]
Saving to: ‘sample_2.fastq.gz’
sample_2.fastq.gz 100%[===================>] 233.94M 97.8MB/s in 2.4s
2021-04-29 12:06:35 (97.8 MB/s) - ‘sample_2.fastq.gz’ saved [245302496/245302496]
[2021-04-29 12:06:36,630] INFO Using index idx.idx to generate BUS file to output from
[2021-04-29 12:06:36,630] INFO sample_1.fastq.gz
[2021-04-29 12:06:36,630] INFO sample_2.fastq.gz
[2021-04-29 12:07:17,592] INFO Sorting BUS file output/output.bus to output/tmp/output.s.bus
[2021-04-29 12:07:20,908] INFO Whitelist not provided
[2021-04-29 12:07:20,908] INFO Generating whitelist output/whitelist.txt from BUS file output/tmp/output.s.bus
[2021-04-29 12:07:20,929] INFO Inspecting BUS file output/tmp/output.s.bus
[2021-04-29 12:07:21,695] INFO Correcting BUS records in output/tmp/output.s.bus to output/tmp/output.s.c.bus with whitelist output/whitelist.txt
[2021-04-29 12:07:21,900] INFO Sorting BUS file output/tmp/output.s.c.bus to output/output.unfiltered.bus
[2021-04-29 12:07:24,360] INFO Generating count matrix output/counts_unfiltered/cells_x_genes from BUS file output/output.unfiltered.bus
[2021-04-29 12:07:26,177] INFO Reading matrix output/counts_unfiltered/cells_x_genes.mtx
[2021-04-29 12:07:26,915] INFO Writing matrix to h5ad output/counts_unfiltered/adata.h5ad
[2021-04-29 12:07:27,075] INFO Filtering with bustools
[2021-04-29 12:07:27,075] INFO Generating whitelist output/filter_barcodes.txt from BUS file output/output.unfiltered.bus
[2021-04-29 12:07:27,088] INFO Correcting BUS records in output/output.unfiltered.bus to output/tmp/output.unfiltered.c.bus with whitelist output/filter_barcodes.txt
[2021-04-29 12:07:27,180] INFO Sorting BUS file output/tmp/output.unfiltered.c.bus to output/output.filtered.bus
[2021-04-29 12:07:29,651] INFO Generating count matrix output/counts_filtered/cells_x_genes from BUS file output/output.filtered.bus
[2021-04-29 12:07:31,353] INFO Reading matrix output/counts_filtered/cells_x_genes.mtx
[2021-04-29 12:07:32,041] INFO Writing matrix to h5ad output/counts_filtered/adata.h5ad
CPU times: user 368 ms, sys: 60.4 ms, total: 428 ms
Wall time: 1min
</code>
# Download models
(and our xgboost functions for cell type labelling)
The scVI model is the same one we use to create the data for plae.nei.nih.gov.
The xgboost model is a simplified version that *only* uses the scVI latent dims, drops the separate Early/Late RPC cell types, and collapses them all into "RPC"_____no_output_____
<code>
!wget -O scVI_scEiaD.tgz https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_03_17__scVI_scEiaD.tgz
!tar -xzf scVI_scEiaD.tgz
!wget -O celltype_ML_model.tar https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_cell_type_ML_all.tar
!tar -xf celltype_ML_model.tar
!wget -O celltype_predictor.py https://raw.githubusercontent.com/davemcg/scEiaD/master/src/cell_type_predictor.py
--2021-04-29 12:12:38-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_03_17__scVI_scEiaD.tgz
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12851811 (12M) [application/octet-stream]
Saving to: ‘scVI_scEiaD.tgz’
scVI_scEiaD.tgz 100%[===================>] 12.26M 36.9MB/s in 0.3s
2021-04-29 12:12:40 (36.9 MB/s) - ‘scVI_scEiaD.tgz’ saved [12851811/12851811]
--2021-04-29 12:12:40-- https://hpc.nih.gov/~mcgaugheyd/scEiaD/2021_03_17/2021_cell_type_ML_all.tar
Resolving hpc.nih.gov (hpc.nih.gov)... 128.231.2.150, 2607:f220:418:4801::2:96
Connecting to hpc.nih.gov (hpc.nih.gov)|128.231.2.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12359680 (12M) [application/octet-stream]
Saving to: ‘celltype_ML_model.tar’
celltype_ML_model.t 100%[===================>] 11.79M 39.5MB/s in 0.3s
2021-04-29 12:12:40 (39.5 MB/s) - ‘celltype_ML_model.tar’ saved [12359680/12359680]
--2021-04-29 12:12:40-- https://raw.githubusercontent.com/davemcg/scEiaD/master/src/cell_type_predictor.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11534 (11K) [text/plain]
Saving to: ‘celltype_predictor.py’
celltype_predictor. 100%[===================>] 11.26K --.-KB/s in 0s
2021-04-29 12:12:41 (117 MB/s) - ‘celltype_predictor.py’ saved [11534/11534]
</code>
# Python time_____no_output_____
<code>
import anndata
import sys
import os
import numpy as np
import pandas as pd
import random
import scanpy as sc
from scipy import sparse
import scvi
import torch
# 2 cores
sc.settings.n_jobs = 2
# set seeds
random.seed(234)
scvi.settings.seed = 234
# set some args
org = 'mouse'
n_epochs = 15
confidence = 0.5_____no_output_____
</code>
# Load adata
And process (mouse processing requires a bit more jiggling that can be skipped if you have human data)_____no_output_____
<code>
# load query data
adata_query = sc.read_h5ad('output/counts_filtered/adata.h5ad')
adata_query.layers["counts"] = adata_query.X.copy()
adata_query.layers["counts"] = sparse.csr_matrix(adata_query.layers["counts"])
# Set scVI model path
scVI_model_dir_path = 'scVIprojectionSO_scEiaD_model/n_features-5000__transform-counts__partition-universe__covariate-batch__method-scVIprojectionSO__dims-8/'
# Read in HVG genes used in scVI model
var_names = pd.read_csv(scVI_model_dir_path + '/var_names.csv', header = None)
# cut down query adata object to use just the var_names used in the scVI model training
if org.lower() == 'mouse':
adata_query.var_names = adata_query.var['gene_name']
n_missing_genes = sum(~var_names[0].isin(adata_query.var_names))
dummy_adata = anndata.AnnData(X=sparse.csr_matrix((adata_query.shape[0], n_missing_genes)))
dummy_adata.obs_names = adata_query.obs_names
dummy_adata.var_names = var_names[0][~var_names[0].isin(adata_query.var_names)]
adata_fixed = anndata.concat([adata_query, dummy_adata], axis=1)
adata_query_HVG = adata_fixed[:, var_names[0]]
_____no_output_____
</code>
# Run scVI (trained on scEiaD data)
Goal: get scEiaD batch corrected latent space for *your* data_____no_output_____
<code>
adata_query_HVG.obs['batch'] = 'New Data'
scvi.data.setup_anndata(adata_query_HVG, batch_key="batch")
vae_query = scvi.model.SCVI.load_query_data(
adata_query_HVG,
scVI_model_dir_path
)
# project scVI latent dims from scEiaD onto query data
vae_query.train(max_epochs=n_epochs, plan_kwargs=dict(weight_decay=0.0))
# get the latent dims into the adata
adata_query_HVG.obsm["X_scVI"] = vae_query.get_latent_representation()
Trying to set attribute `.obs` of view, copying.
</code>
# Get Cell Type predictions
(this xgboost model does NOT use the organism or Age information, but as those fields were often used by us, they got hard-coded in. So we will put dummy values in)._____no_output_____
<code>
# extract latent dimensions
obs=pd.DataFrame(adata_query_HVG.obs)
obsm=pd.DataFrame(adata_query_HVG.obsm["X_scVI"])
features = list(obsm.columns)
obsm.index = obs.index.values
obsm['Barcode'] = obsm.index
obsm['Age'] = 1000
obsm['organism'] = 'x'
# xgboost ML time
from celltype_predictor import *
CT_predictions = scEiaD_classifier_predict(inputMatrix=obsm,
labelIdCol='ID',
labelNameCol='CellType',
trainedModelFile= os.getcwd() + '/2021_cell_type_ML_all',
featureCols=features,
predProbThresh=confidence)
Loading Data...
Predicting Data...
19 samples Failed to meet classification threshold of 0.5
</code>
# What do we have?_____no_output_____
<code>
CT_predictions['CellType'].value_counts()_____no_output_____
</code>
| {
"repository": "davemcg/scEiaD",
"path": "colab/cell_type_ML_labelling.ipynb",
"matched_keywords": [
"Scanpy",
"scRNA",
"CellRanger"
],
"stars": 17,
"size": 30198,
"hexsha": "cb794bc0310fc103b5acc42b24f678bbf11bd41c",
"max_line_length": 268,
"avg_line_length": 43.7018813314,
"alphanum_fraction": 0.4997350818
} |
# Notebook from cilsya/coursera
Path: Machine _Learning_and_Reinforcement_Learning_in_Finance/03_Reinforcement_Learning_in_Finance/02_QLBS Model Implementation/dp_qlbs_oneset_m3_ex2_v3.ipynb
## The QLBS model for a European option
Welcome to your 2nd assignment in Reinforcement Learning in Finance. In this exercise you will arrive at an option price and the hedging portfolio via the standard toolkit of Dynamic Programming (DP).
QLBS model learns both the optimal option price and optimal hedge directly from trading data.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
- When encountering **```# dummy code - remove```** please replace this code with your own
**After this assignment you will:**
- Re-formulate option pricing and hedging method using the language of Markov Decision Processes (MDP)
- Set up forward simulation using Monte Carlo
- Expand optimal action (hedge) $a_t^\star(X_t)$ and optimal Q-function $Q_t^\star(X_t, a_t^\star)$ in basis functions with time-dependent coefficients
Let's get started!_____no_output_____## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter._____no_output_____
<code>
#import warnings
#warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from scipy.stats import norm
import random
import time
import matplotlib.pyplot as plt
import sys
sys.path.append("..")
import grading_____no_output_____### ONLY FOR GRADING. DO NOT EDIT ###
submissions=dict()
assignment_key="wLtf3SoiEeieSRL7rCBNJA"
all_parts=["15mYc", "h1P6Y", "q9QW7","s7MpJ","Pa177"]
### ONLY FOR GRADING. DO NOT EDIT ###_____no_output_____COURSERA_TOKEN = 'gF094cwtidz2YQpP' # the key provided to the Student under his/her email on submission page
COURSERA_EMAIL = '[email protected]' # the email_____no_output_____
</code>
## Parameters for MC simulation of stock prices_____no_output_____
<code>
S0 = 100 # initial stock price
mu = 0.05 # drift
sigma = 0.15 # volatility
r = 0.03 # risk-free rate
M = 1 # maturity
T = 24 # number of time steps
N_MC = 10000 # number of paths
delta_t = M / T # time interval
gamma = np.exp(- r * delta_t) # discount factor_____no_output_____
</code>
### Black-Scholes Simulation
Simulate $N_{MC}$ stock price sample paths with $T$ steps by the classical Black-Scholes formula.
$$dS_t=\mu S_tdt+\sigma S_tdW_t\quad\quad S_{t+1}=S_te^{\left(\mu-\frac{1}{2}\sigma^2\right)\Delta t+\sigma\sqrt{\Delta t}Z}$$
where $Z$ is a standard normal random variable.
Based on simulated stock price $S_t$ paths, compute state variable $X_t$ by the following relation.
$$X_t=-\left(\mu-\frac{1}{2}\sigma^2\right)t\Delta t+\log S_t$$
Also compute
$$\Delta S_t=S_{t+1}-e^{r\Delta t}S_t\quad\quad \Delta\hat{S}_t=\Delta S_t-\Delta\bar{S}_t\quad\quad t=0,...,T-1$$
where $\Delta\bar{S}_t$ is the sample mean of all values of $\Delta S_t$.
Plots of 5 stock price $S_t$ and state variable $X_t$ paths are shown below._____no_output_____
<code>
# make a dataset
starttime = time.time()
np.random.seed(42)
# stock price
S = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
S.loc[:,0] = S0
# standard normal random numbers
RN = pd.DataFrame(np.random.randn(N_MC,T), index=range(1, N_MC+1), columns=range(1, T+1))
for t in range(1, T+1):
S.loc[:,t] = S.loc[:,t-1] * np.exp((mu - 1/2 * sigma**2) * delta_t + sigma * np.sqrt(delta_t) * RN.loc[:,t])
delta_S = S.loc[:,1:T].values - np.exp(r * delta_t) * S.loc[:,0:T-1]
delta_S_hat = delta_S.apply(lambda x: x - np.mean(x), axis=0)
# state variable
X = - (mu - 1/2 * sigma**2) * np.arange(T+1) * delta_t + np.log(S) # delta_t here is due to their conventions
endtime = time.time()
print('\nTime Cost:', endtime - starttime, 'seconds')
Time Cost: 0.20280027389526367 seconds
# plot 10 paths
step_size = N_MC // 10
idx_plot = np.arange(step_size, N_MC, step_size)
plt.plot(S.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('Stock Price Sample Paths')
plt.show()
plt.plot(X.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.ylabel('State Variable')
plt.show()_____no_output_____
</code>
Define function *terminal_payoff* to compute the terminal payoff of a European put option.
$$H_T\left(S_T\right)=\max\left(K-S_T,0\right)$$_____no_output_____
<code>
def terminal_payoff(ST, K):
# ST final stock price
# K strike
payoff = max(K - ST, 0)
return payoff_____no_output_____type(delta_S)_____no_output_____
</code>
## Define spline basis functions _____no_output_____
<code>
import bspline
import bspline.splinelab as splinelab
X_min = np.min(np.min(X))
X_max = np.max(np.max(X))
print('X.shape = ', X.shape)
print('X_min, X_max = ', X_min, X_max)
p = 4 # order of spline (as-is; 3 = cubic, 4: B-spline?)
ncolloc = 12
tau = np.linspace(X_min,X_max,ncolloc) # These are the sites to which we would like to interpolate
# k is a knot vector that adds endpoints repeats as appropriate for a spline of order p
# To get meaningful results, one should have ncolloc >= p+1
k = splinelab.aptknt(tau, p)
# Spline basis of order p on knots k
basis = bspline.Bspline(k, p)
f = plt.figure()
# B = bspline.Bspline(k, p) # Spline basis functions
print('Number of points k = ', len(k))
basis.plot()
plt.savefig('Basis_functions.png', dpi=600)
X.shape = (10000, 25)
X_min, X_max = 4.024923524903037 5.190802775129617
Number of points k = 17
type(basis)_____no_output_____X.values.shape_____no_output_____
</code>
### Make data matrices with feature values
"Features" here are the values of basis functions at data points
The outputs are 3D arrays of dimensions num_tSteps x num_MC x num_basis_____no_output_____
<code>
num_t_steps = T + 1
num_basis = ncolloc # len(k) #
data_mat_t = np.zeros((num_t_steps, N_MC,num_basis ))
print('num_basis = ', num_basis)
print('dim data_mat_t = ', data_mat_t.shape)
t_0 = time.time()
# fill it
for i in np.arange(num_t_steps):
x = X.values[:,i]
data_mat_t[i,:,:] = np.array([ basis(el) for el in x ])
t_end = time.time()
print('Computational time:', t_end - t_0, 'seconds')
num_basis =  12
dim data_mat_t = (25, 10000, 12)
Computational time: 55.5485999584198 seconds
# save these data matrices for future re-use
np.save('data_mat_m=r_A_%d' % N_MC, data_mat_t)_____no_output_____
print(data_mat_t.shape) # shape num_steps x N_MC x num_basis
print(len(k))
(25, 10000, 12)
17
</code>
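The per-point Python loop above is what makes that cell take close to a minute. If the installed `bspline` package exposes a collocation-matrix helper (`Bspline.collmat`; treat its availability in your version as an assumption of this sketch), each time slice can be filled in a single call:
<code>
# Optional speed-up sketch (assumes basis.collmat exists in your bspline version):
# evaluate all num_basis functions at all N_MC sites of one time slice at once.
data_mat_fast = np.zeros((num_t_steps, N_MC, num_basis))
for i in np.arange(num_t_steps):
    data_mat_fast[i, :, :] = basis.collmat(X.values[:, i])
# If available, this should agree with the loop version:
# np.allclose(data_mat_fast, data_mat_t)_____no_output_____
</code>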
## Dynamic Programming solution for QLBS
The MDP problem in this case is to solve the following Bellman optimality equation for the action-value function.
$$Q_t^\star\left(x,a\right)=\mathbb{E}_t\left[R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\space|\space X_t=x,a_t=a\right],\space\space t=0,...,T-1,\quad\gamma=e^{-r\Delta t}$$
where $R_t\left(X_t,a_t,X_{t+1}\right)$ is the one-step time-dependent random reward and $a_t\left(X_t\right)$ is the action (hedge).
Detailed steps of solving this equation by Dynamic Programming are illustrated below._____no_output_____With this set of basis functions $\left\{\Phi_n\left(X_t^k\right)\right\}_{n=1}^N$, expand the optimal action (hedge) $a_t^\star\left(X_t\right)$ and optimal Q-function $Q_t^\star\left(X_t,a_t^\star\right)$ in basis functions with time-dependent coefficients.
$$a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}\quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$$
Coefficients $\phi_{nt}$ and $\omega_{nt}$ are computed recursively backward in time for $t=T−1,...,0$. _____no_output_____Coefficients for expansions of the optimal action $a_t^\star\left(X_t\right)$ are solved by
$$\phi_t=\mathbf A_t^{-1}\mathbf B_t$$
where $\mathbf A_t$ and $\mathbf B_t$ are matrix and vector respectively with elements given by
$$A_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)\left(\Delta\hat{S}_t^k\right)^2}\quad\quad B_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left[\hat\Pi_{t+1}^k\Delta\hat{S}_t^k+\frac{1}{2\gamma\lambda}\Delta S_t^k\right]}$$
$$\Delta S_t=S_{t+1} - e^{r\Delta t} S_t\quad\quad t=T-1,...,0$$
where $\Delta\hat{S}_t=\Delta S_t-\Delta\bar{S}_t$ and $\Delta\bar{S}_t$ is the sample mean of all values of $\Delta S_t$, as defined earlier.
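In matrix form (a restatement of the sums above; here $\boldsymbol\Phi$ denotes the $N_{MC}\times N$ matrix of basis values $\Phi_n\left(X_t^k\right)$, notation introduced just for this note):
$$\mathbf A_t=\boldsymbol\Phi^\top\mathrm{diag}\left(\left(\Delta\hat{S}_t\right)^2\right)\boldsymbol\Phi\quad\quad \mathbf B_t=\boldsymbol\Phi^\top\left[\hat\Pi_{t+1}\odot\Delta\hat{S}_t+\frac{1}{2\gamma\lambda}\Delta S_t\right]$$
where $\odot$ is the element-wise product over the $N_{MC}$ paths. This is exactly the form the implementations below compute with `np.dot`.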
Define functions *function_A* and *function_B* to compute the values of matrix $\mathbf A_t$ and vector $\mathbf B_t$._____no_output_____## Define the option strike and risk aversion parameter_____no_output_____
<code>
risk_lambda = 0.001 # risk aversion
K = 100 # option strike
# Note that we set coef=0 below in function function_B_vec. This corresponds to pure risk-based hedging_____no_output_____
</code>
### Part 1 Calculate coefficients $\phi_{nt}$ of the optimal action $a_t^\star\left(X_t\right)$
**Instructions:**
- implement function_A_vec() which computes $A_{nm}^{\left(t\right)}$ matrix
- implement function_B_vec() which computes $B_n^{\left(t\right)}$ column vector_____no_output_____
<code>
# functions to compute optimal hedges
def function_A_vec(t, delta_S_hat, data_mat, reg_param):
"""
function_A_vec - compute the matrix A_{nm} from Eq. (52) (with a regularization!)
Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article
Arguments:
t - time index, a scalar, an index into time axis of data_mat
    delta_S_hat - pandas.DataFrame of dimension N_MC x T of de-meaned price changes
    data_mat - np.array of basis-function values of dimension (T+1) x N_MC x num_basis
reg_param - a scalar, regularization parameter
Return:
- np.array, i.e. matrix A_{nm} of dimension num_basis x num_basis
"""
    ### START CODE HERE ### (≈ 5-6 lines of code)
    # store result in A_mat for grading
    #
    # We are solving Eq. (53) of the QLBS Q-Learner in the Black-Scholes-Merton
    # article (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3087076),
    # phi_t = (A_t)^{-1} B_t, and this function builds the A_t matrix of Eq. (52).
    X_mat = data_mat[t, :, :]        # basis values at time t: N_MC x num_basis
    num_basis_funcs = X_mat.shape[1]
    this_dS = delta_S_hat.loc[:, t]  # de-meaned Delta S_t for every path
    # square it and reshape to a column vector so it broadcasts across the basis
    # columns; .values.reshape avoids the deprecated pandas Series.reshape
    hat_dS2 = (this_dS ** 2).values.reshape(-1, 1)
    # A_t = Phi^T diag((Delta S_hat_t)^2) Phi, plus a ridge (regularization) term;
    # np.eye supplies the num_basis x num_basis identity matrix
    A_mat = np.dot(X_mat.T, X_mat * hat_dS2) + reg_param * np.eye(num_basis_funcs)
    ### END CODE HERE ###
    return A_mat
def function_B_vec(t,
Pi_hat,
delta_S_hat=delta_S_hat,
S=S,
data_mat=data_mat_t,
gamma=gamma,
risk_lambda=risk_lambda):
"""
function_B_vec - compute vector B_{n} from Eq. (52) QLBS Q-Learner in the Black-Scholes-Merton article
Arguments:
t - time index, a scalar, an index into time axis of delta_S_hat
Pi_hat - pandas.DataFrame of dimension N_MC x T of portfolio values
delta_S_hat - pandas.DataFrame of dimension N_MC x T
S - pandas.DataFrame of simulated stock prices of dimension N_MC x T
    data_mat - np.array of basis-function values of dimension (T+1) x N_MC x num_basis
gamma - one time-step discount factor $exp(-r \delta t)$
risk_lambda - risk aversion coefficient, a small positive number
Return:
np.array() of dimension num_basis x 1
"""
    # coef = 1.0/(2 * gamma * risk_lambda)
    # override it by zero to have pure risk hedge, i.e. drop the
    # (1/(2*gamma*lambda)) * Delta S_t term of Eq. (52) entirely
    ### START CODE HERE ### (≈ 5-6 lines of code)
    # store result in B_vec for grading
    # B_t of Eq. (52) with the drift term zeroed out:
    # B_n = sum_k Phi_n(X_t^k) * Pi_hat_{t+1}^k * Delta S_hat_t^k
    tmp = Pi_hat.loc[:,t+1] * delta_S_hat.loc[:, t]
    X_mat = data_mat[t, :, :]  # basis values at time t: N_MC x num_basis
    B_vec = np.dot(X_mat.T, tmp)
    ### END CODE HERE ###
    return B_vec_____no_output_____
### GRADED PART (DO NOT EDIT) ###
reg_param = 1e-3
np.random.seed(42)
A_mat = function_A_vec(T-1, delta_S_hat, data_mat_t, reg_param)
idx_row = np.random.randint(low=0, high=A_mat.shape[0], size=50)
np.random.seed(42)
idx_col = np.random.randint(low=0, high=A_mat.shape[1], size=50)
part_1 = list(A_mat[idx_row, idx_col])
try:
part1 = " ".join(map(repr, part_1))
except TypeError:
part1 = repr(part_1)
submissions[all_parts[0]]=part1
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions)
A_mat[idx_row, idx_col]
### GRADED PART (DO NOT EDIT) ###
### GRADED PART (DO NOT EDIT) ###
np.random.seed(42)
risk_lambda = 0.001
Pi = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K))
Pi_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi_hat.iloc[:,-1] = Pi.iloc[:,-1] - np.mean(Pi.iloc[:,-1])
B_vec = function_B_vec(T-1, Pi_hat, delta_S_hat, S, data_mat_t, gamma, risk_lambda)
part_2 = list(B_vec)
try:
part2 = " ".join(map(repr, part_2))
except TypeError:
part2 = repr(part_2)
submissions[all_parts[1]]=part2
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions)
B_vec
### GRADED PART (DO NOT EDIT) ###
Submission successful, please check on the coursera grader page for the status
</code>
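As an optional sanity check (an illustration added here, not part of the graded assignment): $\mathbf A_t$ should be symmetric and, thanks to the `reg_param` ridge term, positive definite, so the linear solve for $\phi_t$ is well posed._____no_output_____
<code>
# optional check: A_t from function_A_vec should be symmetric positive definite
A_chk = function_A_vec(T-1, delta_S_hat, data_mat_t, reg_param=1e-3)
assert np.allclose(A_chk, A_chk.T)
assert np.all(np.linalg.eigvalsh(A_chk) > 0)_____no_output_____
</code>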
## Compute optimal hedge and portfolio value_____no_output_____Call *function_A* and *function_B* for $t=T-1,...,0$ together with basis function $\Phi_n\left(X_t\right)$ to compute optimal action $a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}$ backward recursively with terminal condition $a_T^\star\left(X_T\right)=0$.
Once the optimal hedge $a_t^\star\left(X_t\right)$ is computed, the portfolio value $\Pi_t$ could also be computed backward recursively by
$$\Pi_t=\gamma\left[\Pi_{t+1}-a_t^\star\Delta S_t\right]\quad t=T-1,...,0$$
together with the terminal condition $\Pi_T=H_T\left(S_T\right)=\max\left(K-S_T,0\right)$ for a European put option.
Also compute $\hat{\Pi}_t=\Pi_t-\bar{\Pi}_t$, where $\bar{\Pi}_t$ is the sample mean of all values of $\Pi_t$.
Plots of a sample of the optimal hedge $a_t^\star$ and portfolio value $\Pi_t$ paths are shown below._____no_output_____
<code>
starttime = time.time()
# portfolio value
Pi = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi.iloc[:,-1] = S.iloc[:,-1].apply(lambda x: terminal_payoff(x, K))
Pi_hat = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Pi_hat.iloc[:,-1] = Pi.iloc[:,-1] - np.mean(Pi.iloc[:,-1])
# optimal hedge
a = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
a.iloc[:,-1] = 0
reg_param = 1e-3 # free parameter
for t in range(T-1, -1, -1):
A_mat = function_A_vec(t, delta_S_hat, data_mat_t, reg_param)
B_vec = function_B_vec(t, Pi_hat, delta_S_hat, S, data_mat_t, gamma, risk_lambda)
# print ('t = A_mat.shape = B_vec.shape = ', t, A_mat.shape, B_vec.shape)
# coefficients for expansions of the optimal action
phi = np.dot(np.linalg.inv(A_mat), B_vec)
a.loc[:,t] = np.dot(data_mat_t[t,:,:],phi)
Pi.loc[:,t] = gamma * (Pi.loc[:,t+1] - a.loc[:,t] * delta_S.loc[:,t])
Pi_hat.loc[:,t] = Pi.loc[:,t] - np.mean(Pi.loc[:,t])
a = a.astype('float')
Pi = Pi.astype('float')
Pi_hat = Pi_hat.astype('float')
endtime = time.time()
print('Computational time:', endtime - starttime, 'seconds')
# plot 10 paths
plt.plot(a.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('Optimal Hedge')
plt.show()
plt.plot(Pi.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('Portfolio Value')
plt.show()_____no_output_____
</code>
## Compute rewards for all paths_____no_output_____Once the optimal hedge $a_t^\star$ and portfolio value $\Pi_t$ are all computed, the reward function $R_t\left(X_t,a_t,X_{t+1}\right)$ could then be computed by
$$R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=0,...,T-1$$
with terminal condition $R_T=-\lambda Var\left[\Pi_T\right]$.
A plot of a sample of the reward function $R_t$ paths is shown below._____no_output_____
<code>
# Compute rewards for all paths
starttime = time.time()
# reward function
R = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
R.iloc[:,-1] = - risk_lambda * np.var(Pi.iloc[:,-1])
for t in range(T):
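    # np.var is taken across the Monte Carlo paths: the cross-sectional sample
    # variance serves as a proxy for the conditional variance Var[Pi_t | F_t]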
R.loc[1:,t] = gamma * a.loc[1:,t] * delta_S.loc[1:,t] - risk_lambda * np.var(Pi.loc[1:,t])
endtime = time.time()
print('\nTime Cost:', endtime - starttime, 'seconds')
# plot 10 paths
plt.plot(R.T.iloc[:, idx_plot])
plt.xlabel('Time Steps')
plt.title('Reward Function')
plt.show()
Time Cost: 0.1530001163482666 seconds
</code>
## Part 2: Compute the optimal Q-function with the DP approach
_____no_output_____Coefficients for expansions of the optimal Q-function $Q_t^\star\left(X_t,a_t^\star\right)$ are solved by
$$\omega_t=\mathbf C_t^{-1}\mathbf D_t$$
where $\mathbf C_t$ and $\mathbf D_t$ are matrix and vector respectively with elements given by
$$C_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)}\quad\quad D_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left(R_t\left(X_t,a_t^\star,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}$$_____no_output_____Define functions *function_C* and *function_D* to compute the values of matrix $\mathbf C_t$ and vector $\mathbf D_t$.
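In the matrix notation used earlier ($\boldsymbol\Phi$ the $N_{MC}\times N$ matrix of basis values), these are simply $\mathbf C_t=\boldsymbol\Phi^\top\boldsymbol\Phi$ and $\mathbf D_t=\boldsymbol\Phi^\top\left(R_t+\gamma\max_{a_{t+1}}Q_{t+1}^\star\right)$; in other words, $\omega_t$ is a (ridge-regularized) least-squares regression of the one-step Bellman target onto the basis functions.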
**Instructions:**
- implement function_C_vec() which computes $C_{nm}^{\left(t\right)}$ matrix
- implement function_D_vec() which computes $D_n^{\left(t\right)}$ column vector_____no_output_____
<code>
def function_C_vec(t, data_mat, reg_param):
"""
function_C_vec - calculate C_{nm} matrix from Eq. (56) (with a regularization!)
Eq. (56) in QLBS Q-Learner in the Black-Scholes-Merton article
Arguments:
t - time index, a scalar, an index into time axis of data_mat
    data_mat - np.array of values of basis functions of dimension (T+1) x N_MC x num_basis
reg_param - regularization parameter, a scalar
Return:
C_mat - np.array of dimension num_basis x num_basis
"""
    ### START CODE HERE ### (≈ 5-6 lines of code)
    # C_t of Eq. (56): the Gram matrix of the basis functions at time t,
    # C_nm = sum_k Phi_n(X_t^k) Phi_m(X_t^k), plus a small ridge term for stability
    X_mat = data_mat[t, :, :]
    num_basis_funcs = X_mat.shape[1]
    C_mat = np.dot(X_mat.T, X_mat) + reg_param * np.eye(num_basis_funcs)
### END CODE HERE ###
return C_mat
def function_D_vec(t, Q, R, data_mat, gamma=gamma):
"""
    function_D_vec - calculate the D_{n} vector from Eq. (56)
Eq. (56) in QLBS Q-Learner in the Black-Scholes-Merton article
Arguments:
t - time index, a scalar, an index into time axis of data_mat
Q - pandas.DataFrame of Q-function values of dimension N_MC x T
R - pandas.DataFrame of rewards of dimension N_MC x T
    data_mat - np.array of values of basis functions of dimension (T+1) x N_MC x num_basis
gamma - one time-step discount factor $exp(-r \delta t)$
Return:
D_vec - np.array of dimension num_basis x 1
"""
    ### START CODE HERE ### (≈ 5-6 lines of code)
    # D_t of Eq. (56): project the one-step Bellman target R_t + gamma * Q_{t+1}
    # onto the basis functions
    X_mat = data_mat[t, :, :]
    D_vec = np.dot(X_mat.T, R.loc[:,t] + gamma * Q.loc[:, t+1])
### END CODE HERE ###
    return D_vec_____no_output_____
### GRADED PART (DO NOT EDIT) ###
C_mat = function_C_vec(T-1, data_mat_t, reg_param)
np.random.seed(42)
idx_row = np.random.randint(low=0, high=C_mat.shape[0], size=50)
np.random.seed(42)
idx_col = np.random.randint(low=0, high=C_mat.shape[1], size=50)
part_3 = list(C_mat[idx_row, idx_col])
try:
part3 = " ".join(map(repr, part_3))
except TypeError:
part3 = repr(part_3)
submissions[all_parts[2]]=part3
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions)
C_mat[idx_row, idx_col]
### GRADED PART (DO NOT EDIT) ###
Submission successful, please check on the coursera grader page for the status
### GRADED PART (DO NOT EDIT) ###
Q = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1])
D_vec = function_D_vec(T-1, Q, R, data_mat_t,gamma)
part_4 = list(D_vec)
try:
part4 = " ".join(map(repr, part_4))
except TypeError:
part4 = repr(part_4)
submissions[all_parts[3]]=part4
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions)
D_vec
### GRADED PART (DO NOT EDIT) ###
Submission successful, please check on the coursera grader page for the status
</code>
Call *function_C* and *function_D* for $t=T-1,...,0$ together with basis function $\Phi_n\left(X_t\right)$ to compute optimal action Q-function $Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}$ backward recursively with terminal condition $Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]$._____no_output_____
<code>
starttime = time.time()
# Q function
Q = pd.DataFrame([], index=range(1, N_MC+1), columns=range(T+1))
Q.iloc[:,-1] = - Pi.iloc[:,-1] - risk_lambda * np.var(Pi.iloc[:,-1])
reg_param = 1e-3
for t in range(T-1, -1, -1):
######################
C_mat = function_C_vec(t,data_mat_t,reg_param)
D_vec = function_D_vec(t, Q,R,data_mat_t,gamma)
omega = np.dot(np.linalg.inv(C_mat), D_vec)
Q.loc[:,t] = np.dot(data_mat_t[t,:,:], omega)
Q = Q.astype('float')
endtime = time.time()
print('\nTime Cost:', endtime - starttime, 'seconds')
# plot 10 paths
plt.plot(Q.T.iloc[:, idx_plot])
plt.xlabel('Time Steps')
plt.title('Optimal Q-Function')
plt.show()
Time Cost: 0.16299986839294434 seconds
</code>
The QLBS option price is given by $C_t^{\left(QLBS\right)}\left(S_t,ask\right)=-Q_t\left(S_t,a_t^\star\right)$
_____no_output_____## Summary of the QLBS pricing and comparison with the BSM pricing _____no_output_____Compare the QLBS price to the European put price given by the Black-Scholes formula.
$$C_t^{\left(BS\right)}=Ke^{-r\left(T-t\right)}\mathcal N\left(-d_2\right)-S_t\mathcal N\left(-d_1\right)\quad\quad d_{1,2}=\frac{\ln\left(S_t/K\right)+\left(r\pm\frac{1}{2}\sigma^2\right)\left(T-t\right)}{\sigma\sqrt{T-t}}$$_____no_output_____
<code>
# The Black-Scholes prices
def bs_put(t, S0=S0, K=K, r=r, sigma=sigma, T=M):
d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
price = K * np.exp(-r * (T-t)) * norm.cdf(-d2) - S0 * norm.cdf(-d1)
return price
def bs_call(t, S0=S0, K=K, r=r, sigma=sigma, T=M):
d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)
price = S0 * norm.cdf(d1) - K * np.exp(-r * (T-t)) * norm.cdf(d2)
return price
_____no_output_____
</code>
## The DP solution for QLBS_____no_output_____
<code>
# QLBS option price
C_QLBS = - Q.copy()
print('-------------------------------------------')
print(' QLBS Option Pricing (DP solution) ')
print('-------------------------------------------\n')
print('%-25s' % ('Initial Stock Price:'), S0)
print('%-25s' % ('Drift of Stock:'), mu)
print('%-25s' % ('Volatility of Stock:'), sigma)
print('%-25s' % ('Risk-free Rate:'), r)
print('%-25s' % ('Risk aversion parameter: '), risk_lambda)
print('%-25s' % ('Strike:'), K)
print('%-25s' % ('Maturity:'), M)
print('%-26s %.4f' % ('\nQLBS Put Price: ', C_QLBS.iloc[0,0]))
print('%-26s %.4f' % ('\nBlack-Scholes Put Price:', bs_put(0)))
print('\n')
# plot 10 paths
plt.plot(C_QLBS.T.iloc[:,idx_plot])
plt.xlabel('Time Steps')
plt.title('QLBS Option Price')
plt.show()
-------------------------------------------
QLBS Option Pricing (DP solution)
-------------------------------------------
Initial Stock Price: 100
Drift of Stock: 0.05
Volatility of Stock: 0.15
Risk-free Rate: 0.03
Risk aversion parameter: 0.001
Strike: 100
Maturity: 1
QLBS Put Price: 4.9261
Black-Scholes Put Price: 4.5296
### GRADED PART (DO NOT EDIT) ###
part5 = str(C_QLBS.iloc[0,0])
submissions[all_parts[4]]=part5
grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:5],all_parts,submissions)
C_QLBS.iloc[0,0]
### GRADED PART (DO NOT EDIT) ###
Submission successful, please check on the coursera grader page for the status
</code>
### make a summary picture_____no_output_____
<code>
# plot: Simulated S_t and X_t values
# optimal hedge and portfolio values
# rewards and optimal Q-function
f, axarr = plt.subplots(3, 2)
f.subplots_adjust(hspace=.5)
f.set_figheight(8.0)
f.set_figwidth(8.0)
axarr[0, 0].plot(S.T.iloc[:,idx_plot])
axarr[0, 0].set_xlabel('Time Steps')
axarr[0, 0].set_title(r'Simulated stock price $S_t$')
axarr[0, 1].plot(X.T.iloc[:,idx_plot])
axarr[0, 1].set_xlabel('Time Steps')
axarr[0, 1].set_title(r'State variable $X_t$')
axarr[1, 0].plot(a.T.iloc[:,idx_plot])
axarr[1, 0].set_xlabel('Time Steps')
axarr[1, 0].set_title(r'Optimal action $a_t^{\star}$')
axarr[1, 1].plot(Pi.T.iloc[:,idx_plot])
axarr[1, 1].set_xlabel('Time Steps')
axarr[1, 1].set_title(r'Optimal portfolio $\Pi_t$')
axarr[2, 0].plot(R.T.iloc[:,idx_plot])
axarr[2, 0].set_xlabel('Time Steps')
axarr[2, 0].set_title(r'Rewards $R_t$')
axarr[2, 1].plot(Q.T.iloc[:,idx_plot])
axarr[2, 1].set_xlabel('Time Steps')
axarr[2, 1].set_title(r'Optimal DP Q-function $Q_t^{\star}$')
# plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu=r.png', dpi=600)
# plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu>r.png', dpi=600)
#plt.savefig('QLBS_DP_summary_graphs_ATM_option_mu>r.png', dpi=600)
plt.savefig('r.png', dpi=600)
plt.show()_____no_output_____
# plot convergence to the Black-Scholes values
# lam = 0.0001, Q = 4.1989 +/- 0.3612 # 4.378
# lam = 0.001: Q = 4.9004 +/- 0.1206 # Q=6.283
# lam = 0.005: Q = 8.0184 +/- 0.9484 # Q = 14.7489
# lam = 0.01: Q = 11.9158 +/- 2.2846 # Q = 25.33
lam_vals = np.array([0.0001, 0.001, 0.005, 0.01])
# Q_vals = np.array([3.77, 3.81, 4.57, 7.967,12.2051])
Q_vals = np.array([4.1989, 4.9004, 8.0184, 11.9158])
Q_std = np.array([0.3612,0.1206, 0.9484, 2.2846])
BS_price = bs_put(0)
# f, axarr = plt.subplots(1, 1)
fig, ax = plt.subplots(1, 1)
fig.subplots_adjust(hspace=.5)
fig.set_figheight(4.0)
fig.set_figwidth(4.0)
# ax.plot(lam_vals,Q_vals)
ax.errorbar(lam_vals, Q_vals, yerr=Q_std, fmt='o')
ax.set_xlabel('Risk aversion')
ax.set_ylabel('Optimal option price')
ax.set_title(r'Optimal option price vs risk aversion')
ax.axhline(y=BS_price,linewidth=2, color='r')
textstr = 'BS price = %2.2f'% (BS_price)
props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
# place a text box in upper left in axes coords
ax.text(0.05, 0.95, textstr, fontsize=11,transform=ax.transAxes, verticalalignment='top', bbox=props)
plt.savefig('Opt_price_vs_lambda_Markowitz.png')
plt.show()_____no_output_____
</code>
# Notebook from tAndreani/scATAC-benchmarking
Path: Extra/Buenrostro_2018/test_blacklist/SCRAT_buenrostro2018-blacklist-rm.ipynb
### Installation_____no_output_____`devtools::install_github("zji90/SCRATdatahg19")`
`source("https://raw.githubusercontent.com/zji90/SCRATdata/master/installcode.R")` _____no_output_____### Import packages_____no_output_____
<code>
library(devtools)
library(GenomicAlignments)
library(Rsamtools)
library(SCRATdatahg19)
library(SCRAT)
Loading required package: BiocGenerics
Loading required package: parallel
Attaching package: ‘BiocGenerics’
The following objects are masked from ‘package:parallel’:
clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,
clusterExport, clusterMap, parApply, parCapply, parLapply,
parLapplyLB, parRapply, parSapply, parSapplyLB
The following objects are masked from ‘package:stats’:
IQR, mad, sd, var, xtabs
The following objects are masked from ‘package:base’:
anyDuplicated, append, as.data.frame, basename, cbind, colMeans,
colnames, colSums, dirname, do.call, duplicated, eval, evalq,
Filter, Find, get, grep, grepl, intersect, is.unsorted, lapply,
lengths, Map, mapply, match, mget, order, paste, pmax, pmax.int,
pmin, pmin.int, Position, rank, rbind, Reduce, rowMeans, rownames,
rowSums, sapply, setdiff, sort, table, tapply, union, unique,
unsplit, which, which.max, which.min
Loading required package: S4Vectors
Loading required package: stats4
Attaching package: ‘S4Vectors’
The following object is masked from ‘package:base’:
expand.grid
Loading required package: IRanges
Loading required package: GenomeInfoDb
Loading required package: GenomicRanges
Loading required package: SummarizedExperiment
Loading required package: Biobase
Welcome to Bioconductor
Vignettes contain introductory material; view with
'browseVignettes()'. To cite Bioconductor, see
'citation("Biobase")', and for packages 'citation("pkgname")'.
Loading required package: DelayedArray
Loading required package: matrixStats
Attaching package: ‘matrixStats’
The following objects are masked from ‘package:Biobase’:
anyMissing, rowMedians
Loading required package: BiocParallel
Attaching package: ‘DelayedArray’
The following objects are masked from ‘package:matrixStats’:
colMaxs, colMins, colRanges, rowMaxs, rowMins, rowRanges
The following objects are masked from ‘package:base’:
aperm, apply
Loading required package: Biostrings
Loading required package: XVector
Attaching package: ‘Biostrings’
The following object is masked from ‘package:DelayedArray’:
type
The following object is masked from ‘package:base’:
strsplit
Loading required package: Rsamtools
Warning message:
“replacing previous import ‘DT::dataTableOutput’ by ‘shiny::dataTableOutput’ when loading ‘SCRAT’”
Warning message:
“replacing previous import ‘DT::renderDataTable’ by ‘shiny::renderDataTable’ when loading ‘SCRAT’”
Warning message:
“replacing previous import ‘mclust::em’ by ‘shiny::em’ when loading ‘SCRAT’”
</code>
### Obtain Feature Matrix_____no_output_____
<code>
start_time = Sys.time()_____no_output_____
metadata <- read.table('./input/metadata.tsv',
header = TRUE,
stringsAsFactors=FALSE,quote="",row.names=1)_____no_output_____SCRATsummary <- function (dir = "", genome, bamfile = NULL, singlepair = "automated",
removeblacklist = T, log2transform = T, adjustlen = T, featurelist = c("GENE",
"ENCL", "MOTIF_TRANSFAC", "MOTIF_JASPAR", "GSEA"), customfeature = NULL,
Genestarttype = "TSSup", Geneendtype = "TSSdown", Genestartbp = 3000,
Geneendbp = 1000, ENCLclunum = 2000, Motifflank = 100, GSEAterm = "c5.bp",
GSEAstarttype = "TSSup", GSEAendtype = "TSSdown", GSEAstartbp = 3000,
GSEAendbp = 1000)
{
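    # Overview of what this (lightly modified) copy of SCRAT's summary function does:
    # 1. read each BAM (single- or paired-end, auto-detected via testPairedEndBam)
    #    into a GRanges of fragments;
    # 2. optionally drop fragments overlapping the genome's blacklist regions;
    # 3. for every requested feature set (gene windows, ENCL clusters, TRANSFAC/
    #    JASPAR motif flanks, GSEA gene sets, custom BED), count overlapping
    #    fragments per feature, scale to counts per 10,000 reads, optionally
    #    log2(x+1)-transform, optionally divide by feature length (x 1e6),
    #    drop all-zero rows, and stack the results row-wise into `allres`.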
if (is.null(bamfile)) {
bamfile <- list.files(dir, pattern = ".bam$")
}
datapath <- system.file("extdata", package = paste0("SCRATdata",
genome))
bamdata <- list()
for (i in bamfile) {
filepath <- file.path(dir, i)
if (singlepair == "automated") {
bamfile <- BamFile(filepath)
tmpsingle <- readGAlignments(bamfile)
tmppair <- readGAlignmentPairs(bamfile)
pairendtf <- testPairedEndBam(bamfile)
if (pairendtf) {
tmp <- tmppair
startpos <- pmin(start(first(tmp)), start(last(tmp)))
endpos <- pmax(end(first(tmp)), end(last(tmp)))
id <- which(!is.na(as.character(seqnames(tmp))))
tmp <- GRanges(seqnames=as.character(seqnames(tmp))[id],IRanges(start=startpos[id],end=endpos[id]))
}
else {
tmp <- GRanges(tmpsingle)
}
}
else if (singlepair == "single") {
tmp <- GRanges(readGAlignments(filepath))
}
else if (singlepair == "pair") {
tmp <- readGAlignmentPairs(filepath)
startpos <- pmin(start(first(tmp)), start(last(tmp)))
endpos <- pmax(end(first(tmp)), end(last(tmp)))
id <- which(!is.na(as.character(seqnames(tmp))))
tmp <- GRanges(seqnames=as.character(seqnames(tmp))[id],IRanges(start=startpos[id],end=endpos[id]))
}
if (removeblacklist) {
load(paste0(datapath, "/gr/blacklist.rda"))
tmp <- tmp[-as.matrix(findOverlaps(tmp, gr))[, 1],
]
}
bamdata[[i]] <- tmp
}
bamsummary <- sapply(bamdata, length)
allres <- NULL
datapath <- system.file("extdata", package = paste0("SCRATdata",
genome))
if ("GENE" %in% featurelist) {
print("Processing GENE features")
load(paste0(datapath, "/gr/generegion.rda"))
if (Genestarttype == "TSSup") {
grstart <- ifelse(as.character(strand(gr)) == "+",
start(gr) - as.numeric(Genestartbp), end(gr) +
as.numeric(Genestartbp))
}
else if (Genestarttype == "TSSdown") {
grstart <- ifelse(as.character(strand(gr)) == "+",
start(gr) + as.numeric(Genestartbp), end(gr) -
as.numeric(Genestartbp))
}
else if (Genestarttype == "TESup") {
grstart <- ifelse(as.character(strand(gr)) == "+",
end(gr) - as.numeric(Genestartbp), start(gr) +
as.numeric(Genestartbp))
}
else if (Genestarttype == "TESdown") {
grstart <- ifelse(as.character(strand(gr)) == "+",
end(gr) + as.numeric(Genestartbp), start(gr) -
as.numeric(Genestartbp))
}
if (Geneendtype == "TSSup") {
grend <- ifelse(as.character(strand(gr)) == "+",
start(gr) - as.numeric(Geneendbp), end(gr) +
as.numeric(Geneendbp))
}
else if (Geneendtype == "TSSdown") {
grend <- ifelse(as.character(strand(gr)) == "+",
start(gr) + as.numeric(Geneendbp), end(gr) -
as.numeric(Geneendbp))
}
else if (Geneendtype == "TESup") {
grend <- ifelse(as.character(strand(gr)) == "+",
end(gr) - as.numeric(Geneendbp), start(gr) +
as.numeric(Geneendbp))
}
else if (Geneendtype == "TESdown") {
grend <- ifelse(as.character(strand(gr)) == "+",
end(gr) + as.numeric(Geneendbp), start(gr) -
as.numeric(Geneendbp))
}
ngr <- names(gr)
gr <- GRanges(seqnames = seqnames(gr), IRanges(start = pmin(grstart,
grend), end = pmax(grstart, grend)))
names(gr) <- ngr
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- end(gr) - start(gr) + 1
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
if ("ENCL" %in% featurelist) {
print("Processing ENCL features")
load(paste0(datapath, "/gr/ENCL", ENCLclunum, ".rda"))
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) - start(i) +
1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
if ("MOTIF_TRANSFAC" %in% featurelist) {
print("Processing MOTIF_TRANSFAC features")
load(paste0(datapath, "/gr/transfac1.rda"))
gr <- flank(gr, as.numeric(Motifflank), both = T)
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) - start(i) +
1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
load(paste0(datapath, "/gr/transfac2.rda"))
gr <- flank(gr, as.numeric(Motifflank), both = T)
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) - start(i) +
1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
if (genome %in% c("hg19", "hg38")) {
load(paste0(datapath, "/gr/transfac3.rda"))
gr <- flank(gr, as.numeric(Motifflank), both = T)
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) -
start(i) + 1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
}
if ("MOTIF_JASPAR" %in% featurelist) {
print("Processing MOTIF_JASPAR features")
load(paste0(datapath, "/gr/jaspar1.rda"))
gr <- flank(gr, as.numeric(Motifflank), both = T)
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) - start(i) +
1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
load(paste0(datapath, "/gr/jaspar2.rda"))
gr <- flank(gr, as.numeric(Motifflank), both = T)
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) - start(i) +
1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
if ("GSEA" %in% featurelist) {
print("Processing GSEA features")
for (i in GSEAterm) {
load(paste0(datapath, "/gr/GSEA", i, ".rda"))
allgr <- gr
for (sgrn in names(allgr)) {
gr <- allgr[[sgrn]]
if (GSEAstarttype == "TSSup") {
grstart <- ifelse(as.character(strand(gr)) ==
"+", start(gr) - as.numeric(GSEAstartbp),
end(gr) + as.numeric(GSEAstartbp))
}
else if (GSEAstarttype == "TSSdown") {
grstart <- ifelse(as.character(strand(gr)) ==
"+", start(gr) + as.numeric(GSEAstartbp),
end(gr) - as.numeric(GSEAstartbp))
}
else if (GSEAstarttype == "TESup") {
grstart <- ifelse(as.character(strand(gr)) ==
"+", end(gr) - as.numeric(GSEAstartbp), start(gr) +
as.numeric(GSEAstartbp))
}
else if (GSEAstarttype == "TESdown") {
grstart <- ifelse(as.character(strand(gr)) ==
"+", end(gr) + as.numeric(GSEAstartbp), start(gr) -
as.numeric(GSEAstartbp))
}
if (GSEAendtype == "TSSup") {
grend <- ifelse(as.character(strand(gr)) ==
"+", start(gr) - as.numeric(GSEAendbp), end(gr) +
as.numeric(GSEAendbp))
}
else if (GSEAendtype == "TSSdown") {
grend <- ifelse(as.character(strand(gr)) ==
"+", start(gr) + as.numeric(GSEAendbp), end(gr) -
as.numeric(GSEAendbp))
}
else if (GSEAendtype == "TESup") {
grend <- ifelse(as.character(strand(gr)) ==
"+", end(gr) - as.numeric(GSEAendbp), start(gr) +
as.numeric(GSEAendbp))
}
else if (GSEAendtype == "TESdown") {
grend <- ifelse(as.character(strand(gr)) ==
"+", end(gr) + as.numeric(GSEAendbp), start(gr) -
as.numeric(GSEAendbp))
}
ngr <- names(gr)
gr <- GRanges(seqnames = seqnames(gr), IRanges(start = pmin(grstart,
grend), end = pmax(grstart, grend)))
names(gr) <- ngr
allgr[[sgrn]] <- gr
}
gr <- allgr
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- sapply(gr, function(i) sum(end(i) -
start(i) + 1))
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
}
if ("Custom" %in% featurelist) {
print("Processing custom features")
gr <- read.table(customfeature, as.is = T, sep = "\t")
gr <- GRanges(seqnames = gr[, 1], IRanges(start = gr[,
2], end = gr[, 3]))
tmp <- sapply(bamdata, function(i) countOverlaps(gr,
i))
tmp <- sweep(tmp, 2, bamsummary, "/") * 10000
if (log2transform) {
tmp <- log2(tmp + 1)
}
if (adjustlen) {
grrange <- end(gr) - start(gr) + 1
tmp <- sweep(tmp, 1, grrange, "/") * 1e+06
}
tmp <- tmp[rowSums(tmp) > 0, , drop = F]
allres <- rbind(allres, tmp)
}
allres
}_____no_output_____
df_out <- SCRATsummary(dir = "./input/sc-bams_nodup/",
genome = "hg19",
featurelist="MOTIF_JASPAR",
log2transform = FALSE, adjustlen = FALSE, removeblacklist=FALSE)
[1] "Processing MOTIF_JASPAR features"
end_time <- Sys.time()_____no_output_____end_time - start_time_____no_output_____dim(df_out)
df_out[1:5,1:5]_____no_output_____colnames(df_out) = sapply(strsplit(colnames(df_out), "\\."),'[',1)
dim(df_out)
df_out[1:5,1:5]_____no_output_____if(! all(colnames(df_out) == rownames(metadata))){
df_out = df_out[,rownames(metadata)]
dim(df_out)
df_out[1:5,1:5]
}_____no_output_____dim(df_out)
df_out[1:5,1:5]_____no_output_____saveRDS(df_out, file = './output/feature_matrices/FM_SCRAT_buenrostro2018_no_blacklist.rds')_____no_output_____sessionInfo()_____no_output_____save.image(file = 'SCRAT_buenrostro2018.RData')_____no_output_____
</code>
# Notebook from NCBI-Hackathons/ncbi-cloud-tutorials
Path: BLAST tutorials/notebooks/GSD/GSD Rpb1_orthologs_in_1011_genomes.ipynb
# GSD: Rpb1 orthologs in 1011 genomes collection
This collects Rpb1 gene and protein sequences from a collection of natural isolates of sequenced yeast genomes from [Peter et al 2017](https://www.ncbi.nlm.nih.gov/pubmed/29643504), and then estimates the count of the heptad repeats. It builds directly on the notebook [here](GSD%20Rpb1_orthologs_in_PB_genomes.ipynb), which descends from [Searching for coding sequences in genomes using BLAST and Python](../Searching%20for%20coding%20sequences%20in%20genomes%20using%20BLAST%20and%20Python.ipynb). It also builds on the notebooks shown [here](https://nbviewer.jupyter.org/github/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb) and [here](https://github.com/fomightez/patmatch-binder).
Reference for sequence data:
[Genome evolution across 1,011 Saccharomyces cerevisiae isolates. Peter J, De Chiara M, Friedrich A, Yue JX, Pflieger D, Bergström A, Sigwalt A, Barre B, Freel K, Llored A, Cruaud C, Labadie K, Aury JM, Istace B, Lebrigand K, Barbry P, Engelen S, Lemainque A, Wincker P, Liti G, Schacherer J. Nature. 2018 Apr;556(7701):339-344. doi: 10.1038/s41586-018-0030-5. Epub 2018 Apr 11. PMID: 29643504](https://www.ncbi.nlm.nih.gov/pubmed/29643504)
-----_____no_output_____## Overview
_____no_output_____## Preparation
Get scripts and sequence data necessary.
**DO NOT 'RUN ALL'. AN INTERACTION IS NECESSARY AT CELL FIVE. AFTER THAT INTERACTION, THE REST BELOW IT CAN BE RUN.**
(Caveat: right now this is written for genes with no introns. Only a few hundred yeast genes have introns, and yeast is the organism in this example. Intron presence would only become important when trying to translate in late stages of this workflow.)_____no_output_____
<code>
gene_name = "RPB1"
size_expected = 5202
get_seq_from_link = False
link_to_FASTA_of_gene = "https://gist.githubusercontent.com/fomightez/f46b0624f1d8e3abb6ff908fc447e63b/raw/625eaba76bb54e16032f90c8812350441b753a0c/uz_S288C_YOR270C_VPH1_coding.fsa"
#**Possible future enhancement would be to add getting the FASTA of the gene from Yeastmine with just systematic id**_____no_output_____
</code>
Get the `blast_to_df` script by running this command._____no_output_____
<code>
import os
file_needed = "blast_to_df.py"
if not os.path.isfile(file_needed):
!curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/blast-utilities/blast_to_df.py
import pandas as pd_____no_output_____
</code>
**Now to get the entire collection or a subset of the 1011 genomes, the next cell will need to be edited.** I'll probably leave it with a small set for typical running purposes. However, to make it run fast, try the 'super-tiny' set with just two._____no_output_____
<code>
# Method to get ALL the genomes. TAKES A WHILE!!!
# (ca. 1 hour and 15 minutes to download alone? + Extracting is a while.)
# Easiest way to monitor extracting step is to open terminal, cd to
# `GENOMES_ASSEMBLED`, & use `ls | wc -l` to count files extracted.
#!curl -O http://1002genomes.u-strasbg.fr/files/1011Assemblies.tar.gz
#!tar xzf 1011Assemblies.tar.gz
#!rm 1011Assemblies.tar.gz
# Small development set
!curl -OL https://www.dropbox.com/s/f42tiygq9tr1545/medium_setGENOMES_ASSEMBLED.tar.gz
!tar xzf medium_setGENOMES_ASSEMBLED.tar.gz
# Tiny development set
#!curl -OL https://www.dropbox.com/s/txufq2jflkgip82/tiny_setGENOMES_ASSEMBLED.tar.gz
#!tar xzf tiny_setGENOMES_ASSEMBLED.tar.gz
#!mv tiny_setGENOMES_ASSEMBLED GENOMES_ASSEMBLED
#define directory with genomes
genomes_dirn = "GENOMES_ASSEMBLED" % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 1034 0 1034 0 0 819 0 --:--:-- 0:00:01 --:--:-- 0
100 183M 100 183M 0 0 23.6M 0 0:00:07 0:00:07 --:--:-- 52.4M
</code>
Before processing the list of all of them, fix one that has a file name mismatch with what its description lines contain.
Specifically, the assembly file name is `CDH.re.fa`, but the FASTA entries inside begin `CDH-3`.
It is a simple file name mismatch, so the next cell will change the file name to match._____no_output_____
<code>
import os
import sys
file_with_issues = "CDH.re.fa"
if os.path.isfile("GENOMES_ASSEMBLED/"+file_with_issues):
sys.stderr.write("\nFile with name non-matching entries ('{}') observed and"
" fixed.".format(file_with_issues))
!mv GENOMES_ASSEMBLED/CDH.re.fa GENOMES_ASSEMBLED/CDH_3.re.fa
#pause and then check if file with original name is there still because
# it means this was attempted too soon and need to start over.
import time
time.sleep(12) #12 seconds
if os.path.isfile("GENOMES_ASSEMBLED/"+file_with_issues):
sys.stderr.write("\n***PROBLEM. TRIED THIS CELL BEFORE FINISHED UPLOADING.\n"
"DELETE FILES ASSOCIATED AND START ALL OVER AGAIN WITH UPLOAD STEP***.")
else:
sys.stderr.write("\nFile '{}' not seen and so nothing done"
". Seems wrong.".format(file_with_issues))
sys.exit(1)
File with name non-matching entries ('CDH.re.fa') observed and fixed.
# Get SGD gene sequence in FASTA format to search for best matches in the genomes
import sys
gene_filen = gene_name + ".fsa"
if get_seq_from_link:
!curl -o {gene_filen} {link_to_FASTA_of_gene}
else:
!touch {gene_filen}
sys.stderr.write("\nEDIT THE FILE '{}' TO CONTAIN "
"YOUR GENE OF INTEREST (FASTA-FORMATTED)"
".".format(gene_filen))
sys.exit(0)
EDIT THE FILE 'RPB1.fsa' TO CONTAIN YOUR GENE OF INTEREST (FASTA-FORMATTED).
</code>
**I PUT CONTENTS OF FILE `S288C_YDL140C_RPO21_coding.fsa` downloaded from [here](https://www.yeastgenome.org/locus/S000002299/sequence) as 'RPB1.fsa'.**
Now you are prepared to run BLAST to search each of the sequenced genomes in the collection for the best match to a gene from the Saccharomyces cerevisiae strain S288C reference sequence._____no_output_____## Use BLAST to search the genomes for matches to the gene in the reference genome at SGD
SGD is the [Saccharomyces cerevisiae Genome Database site](http://yeastgenome.org) and the reference genome is from S288C.
This is going to go through each genome and make a database so it is searchable and then search for matches to the gene. The information on the best match will be collected. One use for that information will be collecting the corresponding sequences later.
Import the script that allows sending BLAST output to Python dataframes so that we can use it here._____no_output_____
<code>
from blast_to_df import blast_to_df_____no_output_____
# Make a list of all `genome.fa` files, excluding `genome.fa.nhr` and `genome.fa.nin` and `genome.fa.nsq`
# The excluding was only necessary because I had run some queries preliminarily in development. Normally, it would just be the `.re.fa` at the outset.
fn_to_check = "re.fa"
genomes = []
import os
import fnmatch
for file in os.listdir(genomes_dirn):
if fnmatch.fnmatch(file, '*'+fn_to_check):
if not file.endswith(".nhr") and not file.endswith(".nin") and not file.endswith(".nsq") :
# plus skip hidden files
if not file.startswith("._"):
genomes.append(file)
len(genomes)_____no_output_____
</code>
Using the trick of putting `%%capture` on the first line from [here](https://stackoverflow.com/a/23692951/8508004) to keep the BLAST output for the many sequences from filling up the cell.
(You can watch for files ending in `.nhr` appearing for all the FASTA files in `GENOMES_ASSEMBLED` to track progress.)_____no_output_____
<code>
%%time
%%capture
SGD_gene = gene_filen
dfs = []
for genome in genomes:
!makeblastdb -in {genomes_dirn}/{genome} -dbtype nucl
result = !blastn -query {SGD_gene} -db {genomes_dirn}/{genome} -outfmt "6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq" -task blastn
from blast_to_df import blast_to_df
blast_df = blast_to_df(result.n)
    dfs.append(blast_df.head(1))
CPU times: user 1.74 s, sys: 1.22 s, total: 2.97 s
Wall time: 1min 21s
# merge the dataframes in the list `dfs` into one dataframe
df = pd.concat(dfs)_____no_output_____#Save the df
filen_prefix = gene_name + "_orthologBLASTdf"
df.to_pickle(filen_prefix+".pkl")
df.to_csv(filen_prefix+'.tsv', sep='\t',index = False) _____no_output_____#df_____no_output_____
</code>
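If the session is restarted later, the saved dataframe can be reloaded without redoing the BLAST searches (a minimal sketch using the file name generated above):_____no_output_____
<code>
# reload the pickled BLAST results saved in the cell above
import pandas as pd
df = pd.read_pickle(gene_name + "_orthologBLASTdf.pkl")_____no_output_____
</code>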
Computationally check if any genomes missing from the BLAST results list?_____no_output_____
<code>
subjids = df.sseqid.tolist()
#print (subjids)
#print (subjids[0:10])
subjids = [x.split("-")[0] for x in subjids]
#print (subjids)
#print (subjids[0:10])
len_genome_fn_end = len(fn_to_check) + 1 # plus one to account for the period that will be
# between `fn_to_check` and `strain_id`, such as `SK1.genome.fa`
genome_ids = [x[:-len_genome_fn_end] for x in genomes]
#print (genome_ids[0:10])
a = set(genome_ids)
#print (a)
print ("initial:",len(a))
r = set(subjids)
print("results:",len(r))
print ("missing:",len(a-r))
if len(a-r):
print("\n")
print("ids missing:",a-r)
#a - r
initial: 48
results: 48
missing: 0
</code>
Sanity check: Report on how expected size compares to max size seen?_____no_output_____
<code>
size_seen = df.length.max(0)
print ("Expected size of gene:", size_expected)
print ("Most frequent size of matches:", df.length.mode()[0])
print ("Maximum size of matches:", df.length.max(0))Expected size of gene: 5202
Most frequent size of matches: 5202
Maximum size of matches: 5306
</code>
## Collect the identified, raw sequences
Get the expected size centered on the best match, plus a little flanking each because they might not exactly cover the entire open reading frame. (Although, the example here all look to be full size.)_____no_output_____
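For example (illustrative numbers only): a best match reported at positions 10,000–15,100 of a contig has midpoint 12,550; centering the 5,202 bp expected size on that midpoint and adding the 51 bp of 'fuzziness' allowed at each end means positions 9,898–15,202 get collected._____no_output_____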
<code>
# Get the script for extracting based on position (and install dependency pyfaidx)
import os
file_needed = "extract_subsequence_from_FASTA.py"
if not os.path.isfile(file_needed):
!curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/Extract_from_FASTA/extract_subsequence_from_FASTA.py
    !pip install pyfaidx
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload Upload Total Spent Left Speed
100 16964 100 16964 0 0 74078 0 --:--:-- --:--:-- --:--:-- 74078
Requirement already satisfied: pyfaidx in /srv/conda/lib/python3.7/site-packages (0.5.5.2)
Requirement already satisfied: setuptools>=0.7 in /srv/conda/lib/python3.7/site-packages (from pyfaidx) (40.8.0)
Requirement already satisfied: six in /srv/conda/lib/python3.7/site-packages (from pyfaidx) (1.12.0)
</code>
For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output.
For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be because the generated files only moved into the `raw` directory as last step of cell:
ls seq_extracted* | wc -l
(**NOTE: WHEN RUNNING WITH THE FULL SET, THIS CELL BELOW WILL REPORT AROUND A DOZEN `FileNotFoundError:`/Exceptions. HOWEVER, THEY DON'T CAUSE THE NOTEBOOK ITSELF TO CEASE TO RUN. SO DISREGARD THEM FOR THE TIME BEING.** )_____no_output_____
<code>
%%capture
size_expected = size_expected # use value from above, or alter at this point.
#size_expected = df.length.max(0) #bp length of SGD coding sequence; should be equivalent and that way not hardcoded?
extra_add_to_start = 51 #to allow for 'fuzziness' at starting end
extra_add_to_end = 51 #to allow for 'fuzziness' at far end
genome_fn_end = "re.fa"
def midpoint(items):
'''
takes a iterable of items and returns the midpoint (integer) of the first
and second values
'''
return int((int(items[0])+int(items[1]))/2)
#midpoint((1,100))
def determine_pos_to_get(match_start,match_end):
'''
Take the start and end of the matched region.
Calculate midpoint between those and then
center expected size on that to determine
preliminary start and preliminary end to get.
Add the extra basepairs to get at each end
to allow for fuzziness/differences of actual
gene ends for orthologs.
Return the final start and end positions to get.
'''
center_of_match = midpoint((match_start,match_end))
half_size_expected = int(size_expected/2.0)
if size_expected % 2 != 0:
half_size_expected += 1
start_pos = center_of_match - half_size_expected
end_pos = center_of_match + half_size_expected
start_pos -= extra_add_to_start
end_pos += extra_add_to_end
# Because of getting some flanking sequences to account for 'fuzziness', it
# is possible the start and end can exceed possible. 'End' is not a problem
# because the `extract_subsequence_from_FASTA.py` script will get as much as
# it from the indicated sequence if a larger than possible number is
# provided. However,'start' can become negative and because the region to
# extract is provided as a string the dash can become a problem. Dealing
# with it here by making sequence positive only.
# Additionally, because I rely on center of match to position where to get,
# part being cut-off due to absence on sequence fragment will shift center
# of match away from what is actually center of gene and to counter-balance
# add twice the amount to the other end. (Actually, I feel I should adjust
# the start end likewise if the sequence happens to be shorter than portion
# I would like to capture but I don't know length of involved hit yet and
# that would need to be added to allow that to happen!<--TO DO)
if start_pos < 0:
raw_amount_missing_at_start = abs(start_pos)# for counterbalancing; needs
# to be collected before `start_pos` adjusted
start_pos = 1
end_pos += 2 * raw_amount_missing_at_start
return start_pos, end_pos
# go through the dataframe using information on each to come up with sequence file,
# specific indentifier within sequence file, and the start and end to extract
# store these valaues as a list in a dictionary with the strain identifier as the key.
extracted_info = {}
start,end = 0,0
for row in df.itertuples():
#print (row.length)
start_to_get, end_to_get = determine_pos_to_get(row.sstart, row.send)
posns_to_get = "{}-{}".format(start_to_get, end_to_get)
record_id = row.sseqid
strain_id = row.sseqid.split("-")[0]
seq_fn = strain_id + "." + genome_fn_end
extracted_info[strain_id] = [seq_fn, record_id, posns_to_get]
# Use the dictionary to get the sequences
for id_ in extracted_info:
#%run extract_subsequence_from_FASTA.py {*extracted_info[id_]} #unpacking doesn't seem to work here in `%run`
%run extract_subsequence_from_FASTA.py {genomes_dirn}/{extracted_info[id_][0]} {extracted_info[id_][1]} {extracted_info[id_][2]}
#package up the retrieved sequences
archive_file_name = gene_name+"_raw_ortholog_seqs.tar.gz"
# make list of extracted files using fnmatch
fn_part_to_match = "seq_extracted"
collected_seq_files_list = []
import os
import sys
import fnmatch
for file in os.listdir('.'):
if fnmatch.fnmatch(file, fn_part_to_match+'*'):
#print (file)
collected_seq_files_list.append(file)
!tar czf {archive_file_name} {" ".join(collected_seq_files_list)} # use the list for archiving command
sys.stderr.write("\n\nCollected RAW sequences gathered and saved as "
"`{}`.".format(archive_file_name))
# move the collected raw sequences to a folder in preparation for
# extracting encoding sequence from original source below
!mkdir raw
!mv seq_extracted*.fa raw_____no_output_____
</code>
That archive should contain the "raw" sequence for each gene, even if the ends are a little different for each. At minimum the entire gene sequence needs to be there at this point; extra at each end is preferable at this point.
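A quick way to eyeball one of the collected raw sequences from inside the notebook (a minimal sketch; it uses Biopython, which the next section relies on anyway):_____no_output_____
<code>
# peek at the description line and length of the first collected raw sequence
from Bio import SeqIO
record = SeqIO.read("raw/" + collected_seq_files_list[0], "fasta")
print(record.description, len(record))_____no_output_____
</code>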
You should inspect them as soon as possible and adjust the extra sequence to add higher or lower depending on whether the ortholog genes vary more or less, respectively. The reason they don't need to be perfect yet though is because next we are going to extract the longest open reading frame, which presumably demarcates the entire gene. Then we can return to use that information to clean up the collected sequences to just be the coding sequence._____no_output_____## Collect protein translations of the genes and then clean up "raw" sequences to just be coding
We'll assume the longest translatable frame in the collected "raw" sequences encodes the protein sequence for the gene orthologs of interest. We'll base these steps on the [section '20.1.13 Identifying open reading frames'](http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc299) in the present version of the [Biopython Tutorial and Cookbook](http://biopython.org/DIST/docs/tutorial/Tutorial.html) (Last Update – 18 December 2018 (Biopython 1.73))._____no_output_____(First run the next cell to get a script needed for dealing with the strand during the translation and gathering of the encoding sequence.)_____no_output_____
<code>
import os
file_needed = "convert_fasta_to_reverse_complement.py"
if not os.path.isfile(file_needed):
    !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/ConvertSeq/convert_fasta_to_reverse_complement.py
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload Upload Total Spent Left Speed
100 8851 100 8851 0 0 28644 0 --:--:-- --:--:-- --:--:-- 28644
</code>
Now to perform the work described in the header to this section...
For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output.
For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be:
ls *_ortholog_gene.fa | wc -l_____no_output_____
<code>
%%capture
# find the featured open reading frame and collect presumed protein sequences
# Collect the corresponding encoding sequence from the original source
def len_ORF(items):
# orf is fourth item in the tuples
return len(items[3])
def find_orfs_with_trans(seq, trans_table, min_protein_length):
'''
adapted from the present section '20.1.13 Identifying open reading frames'
http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc299 in the
present version of the [Biopython Tutorial and Cookbook at
http://biopython.org/DIST/docs/tutorial/Tutorial.html
(Last Update – 18 December 2018 (Biopython 1.73)
Same as there except altered to sort on the length of the
open reading frame.
'''
answer = []
seq_len = len(seq)
for strand, nuc in [(+1, seq), (-1, seq.reverse_complement())]:
for frame in range(3):
trans = str(nuc[frame:].translate(trans_table))
trans_len = len(trans)
aa_start = 0
aa_end = 0
while aa_start < trans_len:
aa_end = trans.find("*", aa_start)
if aa_end == -1:
aa_end = trans_len
if aa_end-aa_start >= min_protein_length:
if strand == 1:
start = frame+aa_start*3
end = min(seq_len,frame+aa_end*3+3)
else:
start = seq_len-frame-aa_end*3-3
end = seq_len-frame-aa_start*3
answer.append((start, end, strand,
trans[aa_start:aa_end]))
aa_start = aa_end+1
answer.sort(key=len_ORF, reverse = True)
return answer
def generate_rcoutput_file_name(file_name,suffix_for_saving = "_rc"):
'''
from https://github.com/fomightez/sequencework/blob/master/ConvertSeq/convert_fasta_to_reverse_complement.py
Takes a file name as an argument and returns string for the name of the
output file. The generated name is based on the original file
name.
Specific example
=================
Calling function with
("sequence.fa", "_rc")
returns
"sequence_rc.fa"
'''
main_part_of_name, file_extension = os.path.splitext(
file_name) #from
#http://stackoverflow.com/questions/541390/extracting-extension-from-filename-in-python
if '.' in file_name: #I don't know if this is needed with the os.path.splitext method but I had it before so left it
return main_part_of_name + suffix_for_saving + file_extension
else:
return file_name + suffix_for_saving + ".fa"
def add_strand_to_description_line(file,strand="-1"):
'''
Takes a file and edits description line to add
strand info at end.
Saves the fixed file
'''
import sys
output_file_name = "temp.txt"
# prepare output file for saving so it will be open and ready
with open(output_file_name, 'w') as output_file:
# read in the input file
with open(file, 'r') as input_handler:
# prepare to give feeback later or allow skipping to certain start
lines_processed = 0
for line in input_handler:
lines_processed += 1
if line.startswith(">"):
new_line = line.strip() + "; {} strand\n".format(strand)
else:
new_line = line
# Send text to output
output_file.write(new_line)
# replace the original file with edited
!mv temp.txt {file}
# Feedback
sys.stderr.write("\nIn {}, strand noted.".format(file))
table = 1 #sets translation table to standard nuclear, see
# https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi
min_pro_len = 80 #cookbook had the standard `100`. Feel free to adjust.
prot_seqs_info = {} #collect as dictionary with strain_id as key. Values to
# be list with source id as first item and protein length as second and
# strand in source seq as third item, and start and end in source sequence as fourth and fifth,
# and file name of protein and gene as sixth and seventh.
# Example key and value pair: 'YPS138':['<source id>','<protein length>',-1,52,2626,'<gene file name>','<protein file name>']
gene_seqs_fn_list = []
prot_seqs_fn_list = []
from Bio import SeqIO
for raw_seq_filen in collected_seq_files_list:
#strain_id = raw_seq_filen[:-len_genome_fn_end] #if was dealing with source seq
strain_id = raw_seq_filen.split("-")[0].split("seq_extracted")[1]
record = SeqIO.read("raw/"+raw_seq_filen,"fasta")
raw_seq_source_fn = strain_id + "." + genome_fn_end
raw_seq_source_id = record.description.split(":")[0]
orf_list = find_orfs_with_trans(record.seq, table, min_pro_len)
orf_start, orf_end, strand, prot_seq = orf_list[0] #longest ORF seq for protein coding
location_raw_seq = record.description.rsplit(":",1)[1] #get to use in calculating
# the start and end position in original genome sequence.
raw_loc_parts = location_raw_seq.split("-")
start_from_raw_seq = int(raw_loc_parts[0])
end_from_raw_seq = int(raw_loc_parts[1])
length_extracted = len(record) #also to use in calculating relative original
#Fix negative value. (Somehow Biopython can report a negative value when hitting
# the end of the sequence without encountering a stop codon, and negatives mess up
# indexing later, it seems.)
if orf_start < 0:
orf_start = 0
# Trim back to the first Methionine, assumed to be the initiating MET.
# (THIS MIGHT BE A SOURCE OF EXTRA 'LEADING' RESIDUES IN SOME CASES & ARGUES
# FOR LIMITING THE AMOUNT OF FLANKING SEQUENCE ADDED TO ALLOW FOR FUZZINESS.)
try:
amt_resi_to_trim = prot_seq.index("M")
except ValueError:
sys.stderr.write("**ERROR**When searching for initiating methionine,\n"
"no Methionine found in the traslated protein sequence.**ERROR**")
sys.exit(1)
prot_seq = prot_seq[amt_resi_to_trim:]
len_seq_trimmed = amt_resi_to_trim * 3
# Calculate the adjusted start and end values for the untrimmed ORF
adj_start = start_from_raw_seq + orf_start
adj_end = end_from_raw_seq - (length_extracted - orf_end)
# Adjust for trimming for appropriate strand.
if strand == 1:
adj_start += len_seq_trimmed
#adj_end += 3 # turns out stop codon is part of numbering biopython returns
elif strand == -1:
adj_end -= len_seq_trimmed
#adj_start -= 3 # turns out stop codon is part of numbering biopython returns
else:
sys.stderr.write("**ERROR**No strand match option detected!**ERROR**")
sys.exit(1)
# Collect the sequence for the actual gene encoding region from
# the original sequence. This way the original numbers will
# be put in the file.
start_n_end_str = "{}-{}".format(adj_start,adj_end)
%run extract_subsequence_from_FASTA.py {genomes_dirn}/{raw_seq_source_fn} {raw_seq_source_id} {start_n_end_str}
# rename the extracted subsequence to a more distinguishing name and notify
g_output_file_name = strain_id +"_" + gene_name + "_ortholog_gene.fa"
!mv {raw_seq_filen} {g_output_file_name} # because the sequence saved happens to
# be same as raw sequence file saved previously, that name can be used to
# rename new file.
gene_seqs_fn_list.append(g_output_file_name)
sys.stderr.write("\n\nRenamed gene file to "
"`{}`.".format(g_output_file_name))
# Convert extracted sequence to reverse complement if translation was on negative strand.
if strand == -1:
%run convert_fasta_to_reverse_complement.py {g_output_file_name}
# replace original sequence file with the produced file
produced_fn = generate_rcoutput_file_name(g_output_file_name)
!mv {produced_fn} {g_output_file_name}
# add (after saved) onto the end of the description line for that `-1 strand`
# No way to do this in my current version of convert sequence. So editing descr line.
add_strand_to_description_line(g_output_file_name)
#When settled on actual protein encoding sequence, fill out
# description to use for saving the protein sequence.
prot_descr = (record.description.rsplit(":",1)[0]+ " "+ gene_name
+ "_ortholog"+ "| " +str(len(prot_seq)) + " aas | from "
+ raw_seq_source_id + " "
+ str(adj_start) + "-"+str(adj_end))
if strand == -1:
prot_descr += "; {} strand".format(strand)
# save the protein sequence as FASTA
chunk_size = 70 #<---amino acids per line to have in FASTA
prot_seq_chunks = [prot_seq[i:i+chunk_size] for i in range(
0, len(prot_seq),chunk_size)]
prot_seq_fa = ">" + prot_descr + "\n"+ "\n".join(prot_seq_chunks)
p_output_file_name = strain_id +"_" + gene_name + "_protein_ortholog.fa"
with open(p_output_file_name, 'w') as output:
output.write(prot_seq_fa)
prot_seqs_fn_list.append(p_output_file_name)
sys.stderr.write("\n\nProtein sequence saved as "
"`{}`.".format(p_output_file_name))
# at end store information in `prot_seqs_info` for later making a dataframe
# and then text table for saving summary
#'YPS138':['<source id>',<protein length>,-1,52,2626,'<gene file name>','<protein file name>']
prot_seqs_info[strain_id] = [raw_seq_source_id,len(prot_seq),strand,adj_start,adj_end,
g_output_file_name,p_output_file_name]
sys.stderr.write("\n******END OF A SET OF PROTEIN ORTHOLOG "
"AND ENCODING GENE********")_____no_output_____# use `prot_seqs_info` for saving a summary text table (first convert to dataframe?)
table_fn_prefix = gene_name + "_orthologs_table"
table_fn = table_fn_prefix + ".tsv"
pkl_table_fn = table_fn_prefix + ".pkl"
import pandas as pd
info_df = pd.DataFrame.from_dict(prot_seqs_info, orient='index',
columns=['descr_id', 'length', 'strand', 'start','end','gene_file','prot_file']) # based on
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html and
# note from Python 3.6 that `pd.DataFrame.from_items` is deprecated;
#"Please use DataFrame.from_dict"
info_df.to_pickle(pkl_table_fn)
info_df.to_csv(table_fn, sep='\t') # keep index is default
sys.stderr.write("Text file of associated details saved as '{}'.".format(table_fn))Text file of associated details saved as 'RPB1_orthologs_table.tsv'.# pack up archive of gene and protein sequences plus the table
seqs_list = gene_seqs_fn_list + prot_seqs_fn_list + [table_fn,pkl_table_fn]
archive_file_name = gene_name+"_ortholog_seqs.tar.gz"
!tar czf {archive_file_name} {" ".join(seqs_list)} # use the list for archiving command
sys.stderr.write("\nCollected gene and protein sequences"
" (plus table of details) gathered and saved as "
"`{}`.".format(archive_file_name))
Collected gene and protein sequences (plus table of details) gathered and saved as `RPB1_ortholog_seqs.tar.gz`.
</code>
Save the tarballed archive to your local machine._____no_output_____-----_____no_output_____## Estimate the count of the heptad repeats
Make a table of the estimate of heptad repeats for each orthologous protein sequence._____no_output_____
<code>
# get the 'patmatch results to dataframe' script
!curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/patmatch-utilities/patmatch_results_to_df.py
</code>
Using the trick of putting `%%capture` on the first line, from [here](https://stackoverflow.com/a/23692951/8508004), to keep the output from the `patmatch_results_to_df` function from filling up the cell._____no_output_____
<code>
%%time
%%capture
# Go through each protein sequence file and look for matches to heptad pattern
# LATER POSSIBLE IMPROVEMENT. Translate pasted gene sequence and add SGD REF S288C as first in list `prot_seqs_fn_list`. Because
# although this set of orthologs includes essentially S288C, other lists won't, and it is best to have a reference for comparing.
heptad_pattern = "[YF]SP[TG]SP[STAGN]" # will catch repeats#2 through #26 of S288C according to Corden, 2013 PMID: 24040939
from patmatch_results_to_df import patmatch_results_to_df
sum_dfs = []
raw_dfs = []
for prot_seq_fn in prot_seqs_fn_list:
!perl ../../patmatch_1.2/unjustify_fasta.pl {prot_seq_fn}
output = !perl ../../patmatch_1.2/patmatch.pl -p {heptad_pattern} {prot_seq_fn}.prepared
os.remove(os.path.join(prot_seq_fn+".prepared")) #delete file made for PatMatch
raw_pm_df = patmatch_results_to_df(output.n, pattern=heptad_pattern, name="CTD_heptad")
raw_pm_df.sort_values('hit_number', ascending=False, inplace=True)
sum_dfs.append(raw_pm_df.groupby('FASTA_id').head(1))
raw_dfs.append(raw_pm_df)
sum_pm_df = pd.concat(sum_dfs, ignore_index=True)
sum_pm_df.sort_values('hit_number', ascending=False, inplace=True)
sum_pm_df = sum_pm_df[['FASTA_id','hit_number']]
#make protein length into dictionary with ids as keys to map to FASTA_ids in
# order to add protein length as a column in summary table
length_info_by_id= dict(zip(info_df.descr_id,info_df.length))
sum_pm_df['prot_length'] = sum_pm_df['FASTA_id'].map(length_info_by_id)
sum_pm_df = sum_pm_df.reset_index(drop=True)
raw_pm_df = pd.concat(raw_dfs, ignore_index=True)
CPU times: user 1.54 s, sys: 917 ms, total: 2.45 s
Wall time: 55.9 s
</code>
Because of the use of `%%capture` to suppress output, a separate cell is needed to see the results summary. (Only showing parts here because there are a lot of rows and more useful information will be added below.)_____no_output_____
<code>
sum_pm_df.head() # don't show all yet since lots and want to make this dataframe more useful below_____no_output_____sum_pm_df.tail() # don't show all yet since lots and want to make this dataframe more useful below_____no_output_____
</code>
I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further._____no_output_____
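If you want to apply that offset, a minimal, non-destructive sketch (the `+2` convention is an assumption, as noted above, and `est_total_heptads` is a hypothetical name):_____no_output_____
<code>
# hypothetical adjusted count; assumes the '+2' convention of Corden, 2013
est_total_heptads = sum_pm_df['hit_number'] + 2
est_total_heptads.head()_____no_output_____
</code>
WHAT ONES MISSING NOW?_____no_output_____Computationally check if any genomes are missing from the list of orthologs?_____no_output_____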
<code>
subjids = df.sseqid.tolist()
#print (subjids)
#print (subjids[0:10])
subjids = [x.split("-")[0] for x in subjids]
#print (subjids)
#print (subjids[0:10])
len_genome_fn_end = len(fn_to_check) + 1 # plus one to account for the period that will be
# between `fn_to_check` and `strain_id`, such as `SK1.genome.fa`
genome_ids = [x[:-len_genome_fn_end] for x in genomes]
#print (genome_ids[0:10])
ortholg_ids = sum_pm_df.FASTA_id.tolist()
ortholg_ids = [x.split("-")[0] for x in ortholg_ids]
a = set(genome_ids)
#print (a)
print ("initial:",len(a))
r = set(subjids)
print("BLAST results:",len(r))
print ("missing from BLAST:",len(a-r))
if len(a-r):
#print("\n")
print("ids missing in BLAST results:",a-r)
#a - r
print ("\n\n=====POST-BLAST=======\n\n")
o = set(ortholg_ids)
print("orthologs extracted:",len(o))
print ("missing post-BLAST:",len(r-o))
if len(r-o):
print("\n")
print("ids lost post-BLAST:",r-o)
#r - o
print ("\n\n\n=====SUMMARY=======\n\n")
if len(a-r) and len(r-o):
print("\nAll missing in end:",(a-r) | (r-o))initial: 48
BLAST results: 48
missing from BLAST: 0
=====POST-BLAST=======
orthologs extracted: 48
missing post-BLAST: 0
=====SUMMARY=======
</code>
## Make the Summarizing Dataframe more informative
Add information on whether a stretch of 'N's is present, which makes the data suspect and fit to be filtered out. Distinguish between cases where it is in what corresponds to the last third of the protein vs. elsewhere, if possible. Also note whether a stop codon is present at the end of the encoding sequence, because cases lacking one probably should be filtered out as well.
Add information from the supplemental data table so possible patterns can be assessed more easily._____no_output_____#### Add information about N stretches and stop codon_____no_output_____
<code>
# Collect following information for each gene sequence:
# N stretch of at least two or more present in first 2/3 of gene sequence
# N stretch of at least two or more present in last 1/3 of gene sequence
# stop codon encoded at end of sequence?
import re
min_number_Ns_in_row_to_collect = 2
pattern_obj = re.compile("N{{{},}}".format(min_number_Ns_in_row_to_collect), re.I) # adapted from
# code worked out in `collapse_large_unknown_blocks_in_DNA_sequence.py`, which relied heavily on
# https://stackoverflow.com/a/250306/8508004
def longest_stretch2ormore_found(string, pattern_obj):
'''
Check if a string has stretches of Ns of length two or more.
If it does, return the length of longest stretch.
If it doesn't, return zero.
Based on https://stackoverflow.com/a/1155805/8508004 and
GSD Assessing_ambiguous_nts_in_nuclear_PB_genomes.ipynb
'''
longest_match = ''
for m in pattern_obj.finditer(string):
if len(m.group()) > len(longest_match):
longest_match = m.group()
if longest_match == '':
return 0
else:
return len(longest_match)
def chunk(xs, n):
'''Split the list, xs, into n chunks;
from http://wordaligned.org/articles/slicing-a-list-evenly-with-python'''
L = len(xs)
assert 0 < n <= L
s, r = divmod(L, n)
chunks = [xs[p:p+s] for p in range(0, L, s)]
chunks[n-1:] = [xs[-r-s:]]
return chunks
n_stretch_last_third_by_id = {}
n_stretch_first_two_thirds_by_id = {}
stop_codons = ['TAA','TAG','TGA']
stop_codon_presence_by_id = {}
for fn in gene_seqs_fn_list:
# read in sequence without using pyfaidx because small and not worth making indexing files
lines = []
with open(fn, 'r') as seqfile:
for line in seqfile:
lines.append(line.strip())
descr_line = lines[0]
seq = ''.join(lines[1:])
gene_seq_id = descr_line.split(":")[0].split(">")[1]#first line parsed for all in front of ":" and without caret
# determine first two-thirds and last third
chunks = chunk(seq,3)
assert len(chunks) == 3, ("The sequence must be split in three parts.")
first_two_thirds = chunks[0] + chunks[1]
last_third = chunks[-1]
# Examine each part
n_stretch_last_third_by_id[gene_seq_id] = longest_stretch2ormore_found(last_third,pattern_obj)
n_stretch_first_two_thirds_by_id[gene_seq_id] = longest_stretch2ormore_found(first_two_thirds,pattern_obj)
#print(gene_seq_id)
#print (seq[-3:] in stop_codons)
#stop_codon_presence_by_id[gene_seq_id] = seq[-3:] in stop_codons
stop_codon_presence_by_id[gene_seq_id] = "+" if seq[-3:] in stop_codons else "-"
# Add collected information to sum_pm_df
sum_pm_df['NstretchLAST_THIRD'] = sum_pm_df['FASTA_id'].map(n_stretch_last_third_by_id)
sum_pm_df['NstretchELSEWHERE'] = sum_pm_df['FASTA_id'].map(n_stretch_first_two_thirds_by_id)
sum_pm_df['stop_codon'] = sum_pm_df['FASTA_id'].map(stop_codon_presence_by_id)
# Safe to ignore any warnings about copy. I think because I swapped columns in and out
# of sum_pm_df earlier perhaps._____no_output_____
</code>
#### Add details on strains from the published supplemental information
This section is based on [this notebook entitled 'GSD: Add Supplemental data info to nt count data for 1011 cerevisiae collection'](https://github.com/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb)._____no_output_____
<code>
!curl -OL https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-018-0030-5/MediaObjects/41586_2018_30_MOESM3_ESM.xls
!pip install xlrd
import pandas as pd
#sum_pm_TEST_df = sum_pm_df.copy()
supp_df = pd.read_excel('41586_2018_30_MOESM3_ESM.xls', sheet_name=0, header=3, skipfooter=31)
supp_df['Standardized name'] = supp_df['Standardized name'].str.replace('SACE_','')
suppl_info_dict = supp_df.set_index('Standardized name').to_dict('index')
#Make new column with simplified strain_id tags to use for relating to supplemental table
def add_id_tags(fasta_fn):
return fasta_fn[:3]
sum_pm_df["id_tag"] = sum_pm_df['FASTA_id'].apply(add_id_tags)
ploidy_dict_by_id = {x:suppl_info_dict[x]['Ploidy'] for x in suppl_info_dict}
aneuploidies_dict_by_id = {x:suppl_info_dict[x]['Aneuploidies'] for x in suppl_info_dict}
eco_origin_dict_by_id = {x:suppl_info_dict[x]['Ecological origins'] for x in suppl_info_dict}
clade_dict_by_id = {x:suppl_info_dict[x]['Clades'] for x in suppl_info_dict}
sum_pm_df['Ploidy'] = sum_pm_df.id_tag.map(ploidy_dict_by_id) #Pandas docs has `Index.map` (uppercase `I`) but only lowercase works.
sum_pm_df['Aneuploidies'] = sum_pm_df.id_tag.map(aneuploidies_dict_by_id)
sum_pm_df['Ecological origin'] = sum_pm_df.id_tag.map(eco_origin_dict_by_id)
sum_pm_df['Clade'] = sum_pm_df.id_tag.map(clade_dict_by_id)
# remove the `id_tag` column added for relating details from supplemental to summary df
sum_pm_df = sum_pm_df.drop(columns='id_tag')_____no_output_____# use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE
#with pd.option_context('display.max_rows', None, 'display.max_columns', None):
# display(sum_pm_df)
sum_pm_df_____no_output_____
</code>
## Filter collected set to those that are 'complete'
For plotting and summarizing with a good set of information, best to remove any where the identified ortholog gene has stretches of 'N's or lacks a stop codon.
(Keep unfiltered dataframe around though.)_____no_output_____
<code>
sum_pm_UNFILTEREDdf = sum_pm_df.copy()
#subset to those where both columns for the Nstretch assessment are zero
sum_pm_df = sum_pm_df[(sum_pm_df[['NstretchLAST_THIRD','NstretchELSEWHERE']] == 0).all(axis=1)] # based on https://codereview.stackexchange.com/a/185390
#remove any where there isn't a stop codon
sum_pm_df = sum_pm_df.drop(sum_pm_df[sum_pm_df.stop_codon != '+'].index)_____no_output_____
</code>
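To see exactly which rows the filtering removed, a quick sketch using the two dataframes kept above (`removed_df` is a hypothetical name):_____no_output_____
<code>
# rows present before filtering but absent afterwards
removed_df = sum_pm_UNFILTEREDdf[~sum_pm_UNFILTEREDdf.FASTA_id.isin(sum_pm_df.FASTA_id)]
removed_df[['FASTA_id','NstretchLAST_THIRD','NstretchELSEWHERE','stop_codon']]_____no_output_____
</code>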
Computationally summarize result of filtering in comparison to previous steps:_____no_output_____
<code>
subjids = df.sseqid.tolist()
#print (subjids)
#print (subjids[0:10])
subjids = [x.split("-")[0] for x in subjids]
#print (subjids)
#print (subjids[0:10])
len_genome_fn_end = len(fn_to_check) + 1 # plus one to account for the period that will be
# between `fn_to_check` and `strain_id`, such as `SK1.genome.fa`
genome_ids = [x[:-len_genome_fn_end] for x in genomes]
#print (genome_ids[0:10])
ortholg_ids = sum_pm_UNFILTEREDdf.FASTA_id.tolist()
ortholg_ids = [x.split("-")[0] for x in ortholg_ids]
filtered_ids = sum_pm_df.FASTA_id.tolist()
filtered_ids =[x.split("-")[0] for x in filtered_ids]
a = set(genome_ids)
#print (a)
print ("initial:",len(a))
r = set(subjids)
print("BLAST results:",len(r))
print ("missing from BLAST:",len(a-r))
if len(a-r):
#print("\n")
print("ids missing in BLAST results:",a-r)
#a - r
print ("\n\n=====POST-BLAST=======\n\n")
o = set(ortholg_ids)
print("orthologs extracted:",len(o))
print ("missing post-BLAST:",len(r-o))
if len(r-o):
print("\n")
print("ids lost post-BLAST:",r-o)
#r - o
print ("\n\n\n=====PRE-FILTERING=======\n\n")
print("\nNumber before filtering:",len(sum_pm_UNFILTEREDdf))
if len(a-r) and len(r-o):
print("\nAll missing in unfiltered:",(a-r) | (r-o))
print ("\n\n\n=====POST-FILTERING SUMMARY=======\n\n")
f = set(filtered_ids)
print("\nNumber left in filtered set:",len(sum_pm_df))
print ("Number removed by filtering:",len(o-f))
if len(a-r) and len(r-o) and len(o-f):
print("\nAll missing in filtered:",(a-r) | (r-o) | (o-f))initial: 48
BLAST results: 48
missing from BLAST: 0
=====POST-BLAST=======
orthologs extracted: 48
missing post-BLAST: 0
=====PRE-FILTERING=======
Number before filtering: 48
=====POST-FILTERING SUMMARY=======
Number left in filtered set: 42
Number removed by filtering: 6
# use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
display(sum_pm_df)
#sum_pm_df_____no_output_____
</code>
#### Archive the 'Filtered' set of sequences
Above I saved all the gene and deduced protein sequences of the orthologs in a single archive. It might be useful to just have an archive of the 'filtered' set._____no_output_____
<code>
# pack up archive of gene and protein sequences for the 'filtered' set.
# Include the summary table too.
# This is different than the other sets I made because this 'filtering' was
# done using the dataframe and so I don't have the file associations. The file names
# though can be generated using the unfiltered file names for the genes and proteins
# and sorting which ones don't remain in the filtered set using 3-letter tags at
# the beginning of the entries in `FASTA_id` column to relate them.
# Use the `FASTA_id` column of sum_pm_df to make a list of tags that remain in filtered set
tags_remaining_in_filtered = [x[:3] for x in sum_pm_df.FASTA_id.tolist()]
# Go through the gene and protein sequence list and collect those where the first
# three letters match the tag
gene_seqs_FILTfn_list = [x for x in gene_seqs_fn_list if x[:3] in tags_remaining_in_filtered]
prot_seqs_FILTfn_list = [x for x in prot_seqs_fn_list if x[:3] in tags_remaining_in_filtered]
# Save the files in those two lists along with the sum_pm_df (as tabular data and pickled form)
patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary"
patmatchsum_fn = patmatchsum_fn_prefix + ".tsv"
pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl"
import pandas as pd
sum_pm_df.to_pickle(pklsum_patmatch_fn)
sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default
FILTEREDseqs_n_df_list = gene_seqs_FILTfn_list + prot_seqs_FILTfn_list + [patmatchsum_fn,pklsum_patmatch_fn]
archive_file_name = gene_name+"_ortholog_seqsFILTERED.tar.gz"
!tar czf {archive_file_name} {" ".join(FILTEREDseqs_n_df_list)} # use the list for archiving command
sys.stderr.write("\nCollected gene and protein sequences"
" (plus table of details) for 'FILTERED' set gathered and saved as "
"`{}`.".format(archive_file_name))
Collected gene and protein sequences (plus table of details) for 'FILTERED' set gathered and saved as `RPB1_ortholog_seqsFILTERED.tar.gz`.
</code>
Download the 'filtered' sequences to your local machine._____no_output_____## Summarizing with filtered set
Plot distribution._____no_output_____
<code>
%matplotlib inline
import math
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
#Want an image file of the figure saved?
saveplot = True
saveplot_fn_prefix = 'heptad_repeat_distribution'
#sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"]));
p= sns.countplot(sum_pm_df["hit_number"],
order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)),
color="C0", alpha= 0.93)
#palette="Blues"); # `order` to get those categories with zero
# counts to show up from https://stackoverflow.com/a/45359713/8508004
p.set_xlabel("heptad repeats")
#add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004
ncount = len(sum_pm_df)
for pat in p.patches:
x=pat.get_bbox().get_points()[:,0]
y=pat.get_bbox().get_points()[1,1]
# note that this check on the next line was necessary to add when I went back to cases where there's
# no counts for certain categories and so `y` was coming up `nan` for those and causing an error
# about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004
if not math.isnan(y):
p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333')
if saveplot:
fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004
fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight')
fig.savefig(saveplot_fn_prefix + '.svg');_____no_output_____
</code>
However, with the entire 1011 collection, those at the bottom cannot really be seen. The next plot shows this by limiting the y-axis to 103.
It should be possible to make a broken y-axis plot for this eventually, but not right now as there is no automagic way. So for now the two plots will need to be composited together outside; a rough sketch of one approach follows after the next plot.
(Note that adding the percent annotations makes the height of this plot look odd in the notebook cell for now.)_____no_output_____
<code>
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
#Want an image file of the figure saved?
saveplot = True
saveplot_fn_prefix = 'heptad_repeat_distributionLIMIT103'
#sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"]));
p= sns.countplot(sum_pm_df["hit_number"],
order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)),
color="C0", alpha= 0.93)
#palette="Blues"); # `order` to get those categories with zero
# counts to show up from https://stackoverflow.com/a/45359713/8508004
p.set_xlabel("heptad repeats")
plt.ylim(0, 103)
#add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004
ncount = len(sum_pm_df)
for pat in p.patches:
x=pat.get_bbox().get_points()[:,0]
y=pat.get_bbox().get_points()[1,1]
# note that this check on the next line was necessary to add when I went back to cases where there's
# no counts for certain categories and so `y` was coming up `nan` for those and causing error
# about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004
if not math.isnan(y):
p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333')
if saveplot:
fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004
fig.savefig(saveplot_fn_prefix + '.png')
fig.savefig(saveplot_fn_prefix + '.svg');_____no_output_____
</code>
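As noted above, there is no automatic broken y-axis; here is a minimal sketch of one way to approximate it with two stacked subplots. The upper window around the tallest bar is a placeholder assumption to adapt for the full 1011-strain run:_____no_output_____
<code>
import matplotlib.pyplot as plt

counts = sum_pm_df["hit_number"].value_counts().sort_index()
fig, (ax_top, ax_bot) = plt.subplots(2, 1, sharex=True, figsize=(8, 6),
                                     gridspec_kw={'height_ratios': [1, 3]})
for ax in (ax_top, ax_bot):
    ax.bar(counts.index, counts.values, color="C0", alpha=0.93)
ax_top.set_ylim(counts.max() - 5, counts.max() + 5) # placeholder window around the tallest bar
ax_bot.set_ylim(0, 103) # matches the limit used in the plot above
ax_top.spines['bottom'].set_visible(False) # hide the spines where the axis 'breaks'
ax_bot.spines['top'].set_visible(False)
ax_top.tick_params(bottom=False)
ax_bot.set_xlabel("heptad repeats")
plt.show()_____no_output_____
</code>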
<code>
%matplotlib inline
# above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic.
# Visualization
# This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts.
# For example, see `GC-clusters relative mito chromosome and feature` where I ran
# `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom`
# add the strain info for listing that without chr info & add species information for coloring on that
chromosome_id_prefix = "-"
def FASTA_id_to_strain(FAid):
'''
use FASTA_id column value to convert to strain_id
and then return the strain_id
'''
return FAid.split(chromosome_id_prefix)[0]
sum_pm_df_for_plot = sum_pm_df.copy()
sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain)
# sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips
# it is easier to add species column first and then use map instead of doing both at same with one `apply`
# of a function or both separately, both with `apply` of two different function.
# sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species)
sum_pm_df_for_plot['species'] = 'cerevisiae'
#Want an image file of the figure saved?
saveplot = True
saveplot_fn_prefix = 'heptad_repeats_by_strain'
import matplotlib.pyplot as plt
if len(sum_pm_df) > 60:
plt.figure(figsize=(8,232))
else:
plt.figure(figsize=(8,12))
import seaborn as sns
sns.set()
# Simple look - Comment out everything below to the next two lines to see it again.
p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b")
p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="Clade")
# NOTE CANNOT JUST USE ONE PLOT WITH `hue` by 'Clade' because several strains don't have Clades assigned in the supplemental data
# and so those would be left off. This overlays the two and doesn't cause artifacts when the size of the first marker is smaller.
p.set_xlabel("heptad repeats")
#p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below
# and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up
# needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept
#p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks
'''
xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004
for i in range(len(xticks)):
#print (i) # WAS FOR DEBUGGING
keep_ticks = [1,3,5] #hardcoding essentially again, but at least it works
if i not in keep_ticks:
xticks[i].set_visible(False)
'''
'''
# Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get
# those with highest number of repeats with combination I could come up with.
sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or
# tried `.astype('category')` get plotting of the 0.5 values too
sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when
# added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise
# it was putting the first rows on the left, which happened to be the 'higher' repeat values
#p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot?
p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98)
#p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D",
# size=10, alpha=.98) # not fond of essentially hardcoding the strain order but it makes more logical sense to have
# strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored
p.set_xlabel("heptad repeats")
sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df;
'''
if saveplot:
fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004
fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight')
fig.savefig(saveplot_fn_prefix + '.svg');_____no_output_____
</code>
(Hexagons are used for those without an assigned clade in [the supplemental data Table 1](https://www.nature.com/articles/s41586-018-0030-5) in the plot above.)
_____no_output_____
<code>
%matplotlib inline
# above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic.
# Visualization
# This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts.
# For example, see `GC-clusters relative mito chromosome and feature` where I ran
# `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom`
# add the strain info for listing that without chr info & add species information for coloring on that
chromosome_id_prefix = "-"
def FASTA_id_to_strain(FAid):
'''
use FASTA_id column value to convert to strain_id
and then return the strain_id
'''
return FAid.split(chromosome_id_prefix)[0]
sum_pm_df_for_plot = sum_pm_df.copy()
sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain)
# sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips
# it is easier to add species column first and then use map instead of doing both at same with one `apply`
# of a function or both separately, both with `apply` of two different function.
# sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species)
sum_pm_df_for_plot['species'] = 'cerevisiae'
#Want an image file of the figure saved?
saveplot = True
saveplot_fn_prefix = 'heptad_repeats_by_proteinlen'
import matplotlib.pyplot as plt
if len(sum_pm_df) > 60:
plt.figure(figsize=(8,232))
else:
plt.figure(figsize=(8,12))
import seaborn as sns
sns.set()
# Simple look - Comment out everything below to the next two lines to see it again.
#p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b")
p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="prot_length")
# NOTE CANNOT JUST USE ONE PLOT WITH `hue` by 'Clade' because several strains don't have Clades assigned in the supplemental data
# and so those would be left off. This overlays the two and doesn't cause artifacts when the size of the first marker is smaller.
p.set_xlabel("heptad repeats")
#p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below
# and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up
# needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept
#p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks
'''
xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004
for i in range(len(xticks)):
#print (i) # WAS FOR DEBUGGING
keep_ticks = [1,3,5] #hardcoding essentially again, but at least it works
if i not in keep_ticks:
xticks[i].set_visible(False)
'''
'''
# Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get
# those with highest number of repeats with combination I could come up with.
sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or
# tried `.astype('category')` get plotting of the 0.5 values too
sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when
# added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise
# it was putting the first rows on the left, which happened to be the 'higher' repeat values
#p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot?
p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98)
#p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D",
# size=10, alpha=.98) # not fond of essentially hardcoding the strain order but it makes more logical sense to have
# strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored
p.set_xlabel("heptad repeats")
sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df;
'''
if saveplot:
fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004
fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight')
fig.savefig(saveplot_fn_prefix + '.svg');_____no_output_____
</code>
## Make raw and summary data available for use elsewhere
All the raw data is there for each strain in `raw_pm_df`. For example, the next cell shows how to view the data associated with the summary table for isolate ADK_8:_____no_output_____
<code>
ADK_8_raw = raw_pm_df[raw_pm_df['FASTA_id'] == 'ADK_8-20587'].sort_values('hit_number', ascending=True).reset_index(drop=True)
ADK_8_raw _____no_output_____
</code>
The summary and raw data will be packaged up into one file in the cell below. One of the forms will be tabular text data ('.tsv') files that can be opened in any spreadsheet software._____no_output_____
<code>
# save summary and raw results for use elsewhere (or use `.pkl` files for reloading the pickled dataframe into Python/pandas)
patmatch_fn_prefix = gene_name + "_orthologs_patmatch_results"
patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary"
patmatchsumUNFILTERED_fn_prefix = gene_name + "_orthologs_patmatch_results_summaryUNFILTERED"
patmatch_fn = patmatch_fn_prefix + ".tsv"
pkl_patmatch_fn = patmatch_fn_prefix + ".pkl"
patmatchsumUNF_fn = patmatchsumUNFILTERED_fn_prefix + ".tsv"
pklsum_patmatchUNF_fn = patmatchsumUNFILTERED_fn_prefix + ".pkl"
patmatchsum_fn = patmatchsum_fn_prefix + ".tsv"
pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl"
import pandas as pd
sum_pm_df.to_pickle(pklsum_patmatch_fn)
sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default
sys.stderr.write("Text file of summary details after filtering saved as '{}'.".format(patmatchsum_fn))
sum_pm_UNFILTEREDdf.to_pickle(pklsum_patmatchUNF_fn)
sum_pm_UNFILTEREDdf.to_csv(patmatchsumUNF_fn, sep='\t') # keep index is default
sys.stderr.write("\nText file of summary details before filtering saved as '{}'.".format(patmatchsumUNF_fn))
raw_pm_df.to_pickle(pkl_patmatch_fn)
raw_pm_df.to_csv(patmatch_fn, sep='\t') # keep index is default
sys.stderr.write("\nText file of raw details saved as '{}'.".format(patmatchsum_fn))
# pack up archive dataframes
pm_dfs_list = [patmatch_fn,pkl_patmatch_fn,patmatchsumUNF_fn,pklsum_patmatchUNF_fn, patmatchsum_fn,pklsum_patmatch_fn]
archive_file_name = patmatch_fn_prefix+".tar.gz"
!tar czf {archive_file_name} {" ".join(pm_dfs_list)} # use the list for archiving command
sys.stderr.write("\nCollected pattern matching"
" results gathered and saved as "
"`{}`.".format(archive_file_name))Text file of summary details after filtering saved as 'RPB1_orthologs_patmatch_results_summary.tsv'.
Text file of summary details before filtering saved as 'RPB1_orthologs_patmatch_results_summaryFILTERED.tsv'.
Text file of raw details saved as 'RPB1_orthologs_patmatch_results_summary.tsv'.
Collected pattern matching results gathered and saved as `RPB1_orthologs_patmatch_results.tar.gz`.
</code>
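To reload the pickled dataframes later into Python/pandas, a minimal sketch (the file names follow the prefixes produced above for `gene_name = "RPB1"`):_____no_output_____
<code>
import pandas as pd

# reload the filtered summary and raw pattern-match dataframes saved above
sum_pm_df = pd.read_pickle("RPB1_orthologs_patmatch_results_summary.pkl")
raw_pm_df = pd.read_pickle("RPB1_orthologs_patmatch_results.pkl")_____no_output_____
</code>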
Download the tarballed archive of the files to your computer.
For now that archive doesn't include the figures generated from the plots because with a lot of strains they can get large. Download those if you want them. (Look for `saveplot_fn_prefix` settings in the code to help identify file names.)_____no_output_____----_____no_output_____
<code>
import time
# keep-alive loop: prints a dot every 8 minutes so the session stays active
def executeSomething():
    print ('.')
    time.sleep(480) # 60 seconds times 8 minutes
while True:
    executeSomething()
</code>
| {
"repository": "NCBI-Hackathons/ncbi-cloud-tutorials",
"path": "BLAST tutorials/notebooks/GSD/GSD Rpb1_orthologs_in_1011_genomes.ipynb",
"matched_keywords": [
"BioPython",
"evolution"
],
"stars": 11,
"size": 310780,
"hexsha": "cb7cd48ceada74091147e98b11980254432a9438",
"max_line_length": 87292,
"avg_line_length": 80.55469155,
"alphanum_fraction": 0.7114421777
} |
# Notebook from devVipin01/Machine-learning-AI-Templates
Path: Regression/RadiusNeighborsRegressor_MinMaxScaler_PolynomialFeatures.ipynb
# RadiusNeighborsRegressor with MinMaxScaler & Polynomial Features
_____no_output_____**This Code template is for the regression analysis using a RadiusNeighbors Regression and the feature rescaling technique MinMaxScaler along with Polynomial Features as a feature transformation technique in a pipeline** _____no_output_____### Required Packages_____no_output_____
<code>
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler,PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')_____no_output_____
</code>
### Initialization
Filepath of CSV file_____no_output_____
<code>
#filepath
file_path= ""_____no_output_____
</code>
List of features which are required for model training._____no_output_____
<code>
#x_values
features=[]_____no_output_____
</code>
Target feature for prediction._____no_output_____
<code>
#y_value
target=''_____no_output_____
</code>
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file using its storage path, and the head function to display the initial entries._____no_output_____
<code>
df=pd.read_csv(file_path) #reading file
df.head()#displaying initial entries_____no_output_____print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1])
Number of rows are : 400 ,and number of columns are : 9
df.columns.tolist()
_____no_output_____
</code>
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below has functions which remove null values, if any exist, and convert string-class data in the dataset by encoding it into integer classes.
_____no_output_____
<code>
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)_____no_output_____
</code>
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns._____no_output_____
<code>
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()_____no_output_____correlation = df[df.columns[1:]].corr()[target][:]
correlation_____no_output_____
</code>
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y._____no_output_____
<code>
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target] _____no_output_____
</code>
Calling preprocessing functions on the feature and target set._____no_output_____
<code>
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()_____no_output_____
</code>
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data._____no_output_____
<code>
#we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing data splitting_____no_output_____
</code>
### Data Scaling
**Used MinMaxScaler**
* Transform features by scaling each feature to a given range.
* This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
### Feature Transformation
**PolynomialFeatures :**
* Generate polynomial and interaction features.
* Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.
## Model
**RadiusNeighborsRegressor**
RadiusNeighborsRegressor implements learning based on the neighbors within a fixed radius $r$ of the query point, where $r$ is a floating-point value specified by the user.
**Tuning parameters :-**
* **radius:** Range of parameter space to use by default for radius_neighbors queries.
* **algorithm:** Algorithm used to compute the nearest neighbors:
* **leaf_size:** Leaf size passed to BallTree or KDTree.
* **p:** Power parameter for the Minkowski metric.
* **metric:** the distance metric to use for the tree.
* **outlier_label:** label for outlier samples
* **weights:** weight function used in prediction._____no_output_____
<code>
#training the RadiusNeighborsRegressor
model = make_pipeline(MinMaxScaler(),PolynomialFeatures(),RadiusNeighborsRegressor(radius=1.5))
model.fit(X_train,y_train)_____no_output_____
</code>
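The radius of 1.5 above is just a starting point; one way to tune it is a grid search over the pipeline. A minimal sketch follows — the candidate radii are arbitrary assumptions, and note that too small a radius can leave some test points with no neighbors (producing NaN predictions):_____no_output_____
<code>
from sklearn.model_selection import GridSearchCV

# 'radiusneighborsregressor__radius' follows make_pipeline's auto-generated step names;
# the candidate radii below are arbitrary starting guesses
param_grid = {'radiusneighborsregressor__radius': [1.0, 1.5, 2.0, 3.0]}
search = GridSearchCV(
    make_pipeline(MinMaxScaler(), PolynomialFeatures(), RadiusNeighborsRegressor()),
    param_grid, cv=5, scoring='r2')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)_____no_output_____
</code>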
#### Model Accuracy
For a regressor, the score() method returns the coefficient of determination ($R^2$) of the prediction on the given test data and labels._____no_output_____
<code>
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))Accuracy score 71.19 %
#prediction on testing set
prediction=model.predict(X_test)_____no_output_____
</code>
### Model evaluation
**r2_score:** The r2_score function computes the coefficient of determination, i.e. the proportion of the variability in the target that is explained by our model.
**MAE:** The mean absolute error function calculates the amount of total error (the average absolute distance between the real data and the predicted data) of our model.
**MSE:** The mean squared error function squares the errors (penalizing the model for large errors) of our model._____no_output_____
<code>
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
Mean Absolute Error: 0.05762369892537518
Mean Squared Error: 0.006661892378988193
Root Mean Squared Error: 0.08162041643478789
print("R-squared score : ",r2_score(y_test,prediction))R-squared score : 0.7119485295771226
#plotting actual and predicted
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('RadiusNeighborsRegressor', 'REAL'))
plt.show()
_____no_output_____
</code>
### Prediction Plot
We plot the actual target values for the first twenty test records, with the record number on the x-axis and the target on the y-axis, and then overlay the model's predictions for the same records._____no_output_____
<code>
plt.figure(figsize=(10,6))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()_____no_output_____
</code>
#### Creator: Vipin Kumar , Github: [Profile](https://github.com/devVipin01)_____no_output_____
| {
"repository": "devVipin01/Machine-learning-AI-Templates",
"path": "Regression/RadiusNeighborsRegressor_MinMaxScaler_PolynomialFeatures.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 133925,
"hexsha": "cb7d47d2ec8fcd64afebd4b209df752259b7bbad",
"max_line_length": 50448,
"avg_line_length": 167.40625,
"alphanum_fraction": 0.8819712526
} |
# Notebook from jouterleys/NSCI801-QuantNeuro
Path: NSCI801_Reproducibility.ipynb
# NSCI 801 - Quantitative Neuroscience
## Reproducibility, reliability, validity
Gunnar Blohm_____no_output_____### Outline
* statistical considerations
* multiple comparisons
* exploratory analyses vs hypothesis testing
* Open Science
* general steps toward transparency
* pre-registration / registered report
* Open science vs. patents_____no_output_____### Multiple comparisons
In [2009, Bennett et al.](https://teenspecies.github.io/pdfs/NeuralCorrelates.pdf) studied the brain of a salmon using fMRI and found significant activation despite the salmon being dead... (Ig Nobel Prize 2012)
Why did they find this?_____no_output_____They imaged 140 volumes (samples) of the brain and ran a standard preprocessing pipeline, including spatial realignment, co-registration of functional and anatomical volumes, and 8mm full-width at half maximum (FWHM) Gaussian smoothing.
They computed voxel-wise statistics.
<img style="float: center; width:750px;" src="stuff/salmon.png">_____no_output_____This is a prime example of what's known as the **multiple comparison problem**!
“the problem that occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values” (Wikipedia)
* problem that arises when implementing a large number of statistical tests in the same experiment
* the more tests we do, the higher probability of obtaining, at least, one test with statistical significance_____no_output_____### Probability(false positive) = f(number comparisons)
If you repeat a statistical test over and over again, the false positive ($FP$) rate ($P$) evolves as follows:
$$P(FP)=1-(1-\alpha)^N$$
* $\alpha$ is the confidence level for each individual test (e.g. 0.05)
* $N$ is the number of comparisons
Let's see how this works..._____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
plt.style.use('dark_background')_____no_output_____
</code>
Let's create some random data..._____no_output_____
<code>
rvs = stats.norm.rvs(loc=0, scale=10, size=1000)
sns.displot(rvs)_____no_output_____
</code>
Now let's run a t-test to see if it's different from 0_____no_output_____
<code>
statistic, pvalue = stats.ttest_1samp(rvs, 0)
print(pvalue)
0.8239422967948438
</code>
Now let's do this many times for different samples, e.g. different voxels of our salmon..._____no_output_____
<code>
def t_test_function(alp, N):
"""computes t-test statistics on N random samples and returns number of significant tests"""
counter = 0
for i in range(N):
rvs = stats.norm.rvs(loc=0, scale=10, size=1000)
statistic, pvalue = stats.ttest_1samp(rvs, 0)
if pvalue <= alp:
counter = counter + 1
print(counter)
return counter
N = 100
counter = t_test_function(0.05, N)
print("The false positve rate was", counter/N*100, "%")4
The false positve rate was 4.0 %
</code>
Well, we wanted a $\alpha=0.05$, so what's the problem?
The problem is that we have hugely increased the likelihood of finding something significant by chance! (**p-hacking**)
Take the above example:
* running 100 independent tests with $\alpha=0.05$ resulted in a few positives
* well, that's good, right? Now we can see if there is a story here we can publish...
* dead salmon!
* remember, our data was just noise!!! There was NO signal!
This is why we have corrections for multiple comparisons that adjust the p-value so that the **overall chance** to find a false positive stays at $\alpha$!
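For instance, the classic Bonferroni correction effectively divides $\alpha$ by $N$, and false-discovery-rate methods are a less conservative alternative. A minimal sketch re-running the same null simulation as above (assumes the `statsmodels` package is available):_____no_output_____
<code>
from statsmodels.stats.multitest import multipletests
import numpy as np
from scipy import stats

# re-run the 100 null t-tests, but keep the p-values for correction
pvals = np.array([stats.ttest_1samp(stats.norm.rvs(loc=0, scale=10, size=1000), 0).pvalue
                  for _ in range(100)])
reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method='bonferroni')
reject_fdr, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
print("uncorrected positives:", (pvals <= 0.05).sum())
print("Bonferroni positives:", reject_bonf.sum())
print("FDR (Benjamini-Hochberg) positives:", reject_fdr.sum())_____no_output_____
</code>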
Why does this matter?_____no_output_____### Exploratory analyses vs hypothesis testing
Why do we distinguish between them?_____no_output_____<img style="float: center; width:750px;" src="stuff/ExploreConfirm1.png">_____no_output_____But in science, confirmatory analyses that are hypothesis-driven are often much more valued.
There is a temptation to frame *exploratory* analyses and *confirmatory*...
**This leads to disaster!!!**
* science is not solid
* replication crisis (psychology, social science, medicine, marketing, economics, sports science, etc, etc...)
* shaken trust in science
<img style="float: center; width:750px;" src="stuff/crisis.jpeg">
([Baker 2016](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970))_____no_output_____### Quick excursion: survivorship bias
"Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility." (Wikipedia)
<img style="float: center; width:750px;" src="stuff/SurvivorshipBias.png">_____no_output_____**How does survivorship bias affect neuroscience?**
Think about it..._____no_output_____E.g.
* people select neurons to analyze
* profs say it's absolutely achievable to become a prof
Just keep it in mind..._____no_output_____### Open science - transparency
Open science can hugely help increasing transparency in many different ways so that findings and data can be evaluated for what they are:
* publish data acquisition protocol and code: increases data reproducibility & credibility
* publish data: data get second, third, etc... lives
* publish data processing / analyses: increases reproducibility of results
* publish figures code and stats: increases reproducibility and credibility of conclusions
* pre-register hypotheses and analyses: ensures *confirmatory* analyses are not *exploratory* (HARKing)
For more info, see NSCI800 lectures about Open Science: [OS1](http://www.compneurosci.com/NSCI800/OpenScienceI.pdf), [OS2](http://www.compneurosci.com/NSCI800/OpenScienceII.pdf)_____no_output_____### Pre-registration / registered reports
<img style="float:right; width:500px;" src="stuff/RR.png">
* IPA guarantees publication
* If original methods are followed
* Main conclusions need to come from originally proposed analyses
* Does not prevent exploratory analyses
* Need to be labeled as such
[https://Cos.io/rr](https://Cos.io/rr)_____no_output_____Please follow **Stage 1** instructions of [the registered report intrustions from eNeuro](https://www.eneuro.org/sites/default/files/additional_assets/pdf/eNeuro%20Registered%20Reports%20Author%20Guidelines.pdf) for the course evaluation...
Questions???_____no_output_____### Open science vs. patents
The goal of Open Science is to share all aspects of research with the public!
* because knowledge should be freely available
* because the public paid for the science to happen in the first place
However, this prevents patenting scientific results!
* this is good for science, because patents obstruct research
* prevents full privatization of research: research driven by companies is biased by private interest_____no_output_____
* more people contribute
* wider adoption
* e.g. Github = Microsoft, Android = Google, etc
* better for society
* e.g. nonprofit pharma_____no_output_____**Why are patents still a thing?**
Well, some people think it's an outdated and morally corrupt concept.
* goal: maximum profit
* enabler: capitalism
* victims: general public
Think about it and decide for yourself what to do with your research!!!_____no_output_____
<img style="float:center; width:750px;" src="stuff/empower.jpg">_____no_output_____
| {
"repository": "jouterleys/NSCI801-QuantNeuro",
"path": "NSCI801_Reproducibility.ipynb",
"matched_keywords": [
"Salmon",
"neuroscience"
],
"stars": 102,
"size": 21860,
"hexsha": "cb817ee55eb3ac69d9b8c800ab518ed4dc8b023f",
"max_line_length": 8000,
"avg_line_length": 42.119460501,
"alphanum_fraction": 0.7171546203
} |