# Notebook from stefanpeidli/cellphonedb
Path: scanpy_cellphonedb.ipynb
<code>
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as pl
import scanpy as sc
import cellphonedb as cphdb
# Original API works for python as well, it's just not really nice
import sys
sys.path.insert(0, './cellphonedb/src/api_endpoints/terminal_api/method_terminal_api_endpoints/')
from method_terminal_commands import statistical_analysis_____no_output_____
</code>
# Dev_____no_output_____## Original method_____no_output_____
<code>
# you need to download these from cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat+'test_meta.txt'
countfile = dat+'test_counts.txt'
statistical_analysis(meta_filename=metafile, counts_filename=countfile)[ ][APP][04/11/20-17:12:40][WARNING] Latest local available version is `v2.0.0`, using it
[ ][APP][04/11/20-17:12:40][WARNING] User selected downloaded database `v2.0.0` is available, using it
[ ][CORE][04/11/20-17:12:40][INFO] Initializing SqlAlchemy CellPhoneDB Core
[ ][CORE][04/11/20-17:12:40][INFO] Using custom database at C:\Users\Stefan\.cpdb\releases\v2.0.0\cellphone.db
[ ][APP][04/11/20-17:12:40][INFO] Launching Method cpdb_statistical_analysis_local_method_launcher
[ ][APP][04/11/20-17:12:40][INFO] Launching Method _set_paths
[ ][APP][04/11/20-17:12:40][WARNING] Output directory (C:\Users\Stefan\Documents\Github_Clones\cellphonedb/out) exist and is not empty. Result can overwrite old results
[ ][APP][04/11/20-17:12:40][INFO] Launching Method _load_meta_counts
[ ][CORE][04/11/20-17:12:40][INFO] Launching Method cpdb_statistical_analysis_launcher
[ ][CORE][04/11/20-17:12:40][INFO] Launching Method _counts_validations
[ ][CORE][04/11/20-17:12:40][INFO] [Cluster Statistical Analysis Simple] Threshold:0.1 Iterations:1000 Debug-seed:-1 Threads:4 Precision:3
[ ][CORE][04/11/20-17:12:40][INFO] Running Simple Prefilters
[ ][CORE][04/11/20-17:12:40][INFO] Running Real Simple Analysis
[ ][CORE][04/11/20-17:12:40][INFO] Running Statistical Analysis
[ ][CORE][04/11/20-17:13:27][INFO] Building Pvalues result
[ ][CORE][04/11/20-17:13:29][INFO] Building Simple results
[ ][CORE][04/11/20-17:13:29][INFO] [Cluster Statistical Analysis Complex] Threshold:0.1 Iterations:1000 Debug-seed:-1 Threads:4 Precision:3
[ ][CORE][04/11/20-17:13:29][INFO] Running Complex Prefilters
[ ][CORE][04/11/20-17:13:31][INFO] Running Real Complex Analysis
[ ][CORE][04/11/20-17:13:32][INFO] Running Statistical Analysis
[ ][CORE][04/11/20-17:14:38][INFO] Building Pvalues result
[ ][CORE][04/11/20-17:14:40][INFO] Building Complex results
pd.read_csv('./out/pvalues.csv')_____no_output_____
</code>
## scanpy API test with official cellphonedb example data_____no_output_____
<code>
# you need to download these from cellphonedb website / github and replace the path accordingly
dat = 'C:/Users/Stefan/Downloads/cellphonedb_example_data/example_data/'
metafile = dat+'test_meta.txt'
countfile = dat+'test_counts.txt'_____no_output_____bdata=sc.AnnData(pd.read_csv(countfile, sep='\t',index_col=0).values.T, obs=pd.read_csv(metafile, sep='\t',index_col=0), var=pd.DataFrame(index=pd.read_csv(countfile, sep='\t',index_col=0).index.values))_____no_output_____outs=cphdb.statistical_analysis_scanpy(bdata, bdata.var_names, bdata.obs_names, 'cell_type')[ ][APP][04/11/20-17:14:43][WARNING] Latest local available version is `v2.0.0`, using it
[ ][APP][04/11/20-17:14:43][WARNING] User selected downloaded database `v2.0.0` is available, using it
[ ][CORE][04/11/20-17:14:43][INFO] Initializing SqlAlchemy CellPhoneDB Core
[ ][CORE][04/11/20-17:14:43][INFO] Using custom database at C:\Users\Stefan\.cpdb\releases\v2.0.0\cellphone.db
[ ][APP][04/11/20-17:14:43][INFO] Launching Method cpdb_statistical_analysis_local_method_launcher_scanpy
[ ][APP][04/11/20-17:14:43][INFO] Launching Method _set_paths
[ ][APP][04/11/20-17:14:43][WARNING] Output directory (C:\Users\Stefan\Documents\Github_Clones\cellphonedb/out) exist and is not empty. Result can overwrite old results
[ ][CORE][04/11/20-17:14:43][INFO] Launching Method cpdb_statistical_analysis_launcher
[ ][CORE][04/11/20-17:14:43][INFO] Launching Method _counts_validations
[ ][CORE][04/11/20-17:14:43][INFO] [Cluster Statistical Analysis Simple] Threshold:0.1 Iterations:1000 Debug-seed:-1 Threads:4 Precision:3
[ ][CORE][04/11/20-17:14:43][INFO] Running Simple Prefilters
[ ][CORE][04/11/20-17:14:43][INFO] Running Real Simple Analysis
[ ][CORE][04/11/20-17:14:43][INFO] Running Statistical Analysis
[ ][CORE][04/11/20-17:15:32][INFO] Building Pvalues result
[ ][CORE][04/11/20-17:15:35][INFO] Building Simple results
[ ][CORE][04/11/20-17:15:35][INFO] [Cluster Statistical Analysis Complex] Threshold:0.1 Iterations:1000 Debug-seed:-1 Threads:4 Precision:3
[ ][CORE][04/11/20-17:15:35][INFO] Running Complex Prefilters
[ ][CORE][04/11/20-17:15:37][INFO] Running Real Complex Analysis
[ ][CORE][04/11/20-17:15:38][INFO] Running Statistical Analysis
[ ][CORE][04/11/20-17:16:38][INFO] Building Pvalues result
[ ][CORE][04/11/20-17:16:39][INFO] Building Complex results
outs['pvalues']_____no_output_____# the output is also saved to
bdata.uns['cellphonedb_output']_____no_output_____bdata_____no_output_____
</code>
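As a quick sanity check (an addition, not part of the original notebook), the table returned by the scanpy wrapper can be compared with the `pvalues.csv` file written by the original file-based method above. Because the analysis is permutation-based, individual p-values may differ slightly between the two runs, but the set of tested interactions should match.
<code>
# Hedged sanity check (not in the original notebook): compare the scanpy-API result
# with the file produced by the original method. outs['pvalues'] is assumed to be a
# pandas DataFrame, as suggested by the cell output above.
file_pvalues = pd.read_csv('./out/pvalues.csv')
scanpy_pvalues = outs['pvalues']
print('file-based result shape:', file_pvalues.shape)
print('scanpy API result shape:', scanpy_pvalues.shape)
</code>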
| {
"repository": "stefanpeidli/cellphonedb",
"path": "scanpy_cellphonedb.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": null,
"size": 63150,
"hexsha": "d000f1ce0f008b8f64f705810da78b9e62f26064",
"max_line_length": 212,
"avg_line_length": 45.3989935298,
"alphanum_fraction": 0.3578463975
} |
# Notebook from innawendell/European_Comedy
Path: Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb
## The Analysis of The Evolution of The Russian Comedy. Part 3._____no_output_____In this analysis, we will explore the evolution of the Russian five-act comedy in verse based on the following features:
- The coefficient of dialogue vivacity;
- The percentage of scenes with split verse lines;
- The percentage of scenes with split rhymes;
- The percentage of open scenes;
- The percentage of scenes with split verse lines and rhymes.
We will tackle the following tasks:
1. We will describe the features.
2. We will explore feature correlations.
3. We will check the features for normality using Shapiro-Wilk normality test. This will help us determine whether parametric vs. non-parametric statistical tests are more appropriate. If the features are not normally distributed, we will use non-parametric tests.
4. In our previous analysis of Sperantov's data, we discovered that instead of four periods of the Russian five-act tragedy in verse proposed by Sperantov, we can only be confident in the existence of two periods, where 1795 is the cut-off year. Therefore, we propose the following periods for the Russian verse comedy:
- Period One (from 1775 to 1794)
- Period Two (from 1795 to 1849).
5. We will run statistical tests to determine whether these two periods are statistically different.
6. We will create visualizations for each feature.
7. We will run descriptive statistics for each feature._____no_output_____
<code>
import pandas as pd
import numpy as np
import json
from os import listdir
from scipy.stats import shapiro
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns_____no_output_____def make_plot(feature, title):
mean, std, median = summary(feature)
plt.figure(figsize=(10, 7))
plt.title(title, fontsize=17)
sns.distplot(feature, kde=False)
mean_line = plt.axvline(mean,
color='black',
linestyle='solid',
linewidth=2); M1 = 'Mean';
median_line = plt.axvline(median,
color='green',linestyle='dashdot',
linewidth=2); M2='Median'
std_line = plt.axvline(mean + std,
color='black',
linestyle='dashed',
linewidth=2); M3 = 'Standard deviation';
plt.axvline(mean - std,
color='black',
linestyle='dashed',
linewidth=2)
plt.legend([mean_line, median_line, std_line], [M1, M2, M3])
plt.show()_____no_output_____def small_sample_mann_whitney_u_test(series_one, series_two):
values_one = series_one.sort_values().tolist()
values_two = series_two.sort_values().tolist()
# pool both samples into one data frame and rank the combined values
result_df = pd.DataFrame(values_one + values_two, columns=['combined']).sort_values(by='combined')
# ties (if any) receive the average of their ranks; the exact test assumes no ties
result_df['ranks'] = result_df['combined'].rank(method='average')
# make a dictionary where keys are values and values are ranks
val_to_rank = dict(zip(result_df['combined'].values, result_df['ranks'].values))
sum_ranks_one = np.sum([val_to_rank[num] for num in values_one])
sum_ranks_two = np.sum([val_to_rank[num] for num in values_two])
# number in sample one and two
n_one = len(values_one)
n_two = len(values_two)
# calculate the mann whitney u statistic which is the smaller of the u_one and u_two
u_one = ((n_one * n_two) + (n_one * (n_one + 1) / 2)) - sum_ranks_one
u_two = ((n_one * n_two) + (n_two * (n_two + 1) / 2)) - sum_ranks_two
# add a quality check
assert u_one + u_two == n_one * n_two
u_statistic = np.min([u_one, u_two])
return u_statistic_____no_output_____def summary(feature):
mean = feature.mean()
std = feature.std()
median = feature.median()
return mean, std, median_____no_output_____# updated boundaries
def determine_period(row):
if row <= 1794:
period = 1
else:
period = 2
return period_____no_output_____
</code>
## Part 1. Feature Descriptions_____no_output_____For the Russian corpus of five-act comedies, we generated additional features inspired by Iarkho. So far, we have had no understanding of how these features evolved over time or whether they could differentiate literary periods.
The features include the following:
1. **The Coefficient of Dialogue Vivacity**, i.e., the number of utterances in a play / the number of verse lines in a play. Since some of the comedies in our corpus were written in iambic hexameter while others were written in free iambs, it is important to clarify how we made sure the number of verse lines was comparable. Because Aleksandr Griboedov's *Woe From Wit* is the only four-act comedy in verse that had an extensive markup, we used it as the basis for our calculation.
- First, we improved Dracor's markup of the verse lines in *Woe From Wit*.
- Next, we calculated the number of verse lines in *Woe From Wit*, which was 2220.
- Then, we calculated the total number of syllables in *Woe From Wit*, which was 22076.
- We calculated the average number of syllables per verse line: 22076 / 2220 = 9.944144144144143.
- Finally, we divided the average number of syllables in *Woe From Wit* by the average number of syllables in a comedy written in hexameter, i.e., 12.5: 9.944144144144143 / 12.5 = 0.796.
- To convert the number of verse lines in a play written in free iambs and make it comparable with the comedies written in hexameter, we used the following formula: rescaled_number of verse lines = the number of verse lines in free iambs * 0.796.
- For example, in *Woe From Wit*, the number of verse lines = 2220, so the rescaled number of verse lines = 2220 * 0.796 = 1767.12. With 702 utterances, the coefficient of dialogue vivacity = 702 / 1767.12 = 0.397. (A short code sketch of this calculation follows the feature list below.)
2. **The Percentage of Scenes with Split Verse Lines**, i.e., the percentage of scenes where the end of a scene does not coincide with the end of a verse line, so that the verse line extends into the next scene, e.g., "Не бойся. Онъ блажитъ. ЯВЛЕНІЕ 3. Какъ радъ что вижу васъ."
3. **The Percentage of Scenes with Split Rhymes**, i.e., the percentage of scenes that rhyme with other scenes, e.g., "Надѣюсъ на тебя, Вѣтрана, какъ на стѣну. ЯВЛЕНІЕ 4. И въ ней , какъ ни крѣпка, мы видимЪ перемѣну."
4. **The Percentage of Open Scenes**, i.e., the percentage of scenes with either split verse lines or rhymes.
5. **The Percentage of Scenes With Split Verse Lines and Rhymes**, i.e., the percentage of scenes that are connected through both means: by sharing a verse line and a rhyme._____no_output_____
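To make the rescaling step concrete, the short sketch below (an addition, not part of the original notebook) reproduces the *Woe From Wit* arithmetic described above; the utterance count of 702 is taken from the worked example.
<code>
# A minimal sketch of the verse-line rescaling described above, using the Woe From Wit numbers.
n_verse_lines_free_iambs = 2220      # verse lines in Woe From Wit (free iambs)
n_syllables = 22076                  # total syllables in Woe From Wit
syllables_per_hexameter_line = 12.5  # average syllables per line in iambic hexameter

rescaling_factor = round((n_syllables / n_verse_lines_free_iambs) / syllables_per_hexameter_line, 3)
rescaled_verse_lines = n_verse_lines_free_iambs * rescaling_factor

n_utterances = 702                   # utterances in Woe From Wit (from the example above)
dialogue_vivacity = n_utterances / rescaled_verse_lines

print(rescaling_factor)                  # 0.796
print(round(rescaled_verse_lines, 2))    # 1767.12
print(round(dialogue_vivacity, 3))       # 0.397
</code>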
<code>
comedies = pd.read_csv('../Russian_Comedies/Data/Comedies_Raw_Data.csv')_____no_output_____# sort by creation date
comedies_sorted = comedies.sort_values(by='creation_date').copy()_____no_output_____# select only original comedies and five act
original_comedies = comedies_sorted[(comedies_sorted['translation/adaptation'] == 0) &
(comedies_sorted['num_acts'] == 5)].copy()_____no_output_____original_comedies.head()_____no_output_____original_comedies.shape_____no_output_____# rename column names for clarity
original_comedies = original_comedies.rename(columns={'num_scenes_iarkho': 'mobility_coefficient'})_____no_output_____comedies_verse_features = original_comedies[['index',
'title',
'first_name',
'last_name',
'creation_date',
'dialogue_vivacity',
'percentage_scene_split_verse',
'percentage_scene_split_rhymes',
'percentage_open_scenes',
'percentage_scenes_rhymes_split_verse']].copy()_____no_output_____comedies_verse_features.head()_____no_output_____
</code>
## Part 2. Feature Correlations_____no_output_____
<code>
comedies_verse_features[['dialogue_vivacity',
'percentage_scene_split_verse',
'percentage_scene_split_rhymes',
'percentage_open_scenes',
'percentage_scenes_rhymes_split_verse']].corr().round(2)_____no_output_____original_comedies[['dialogue_vivacity',
'mobility_coefficient']].corr()_____no_output_____
</code>
Dialogue vivacity is moderately positively correlated with the percentage of scenes with split verse lines (0.53), with the percentage of scenes with split rhymes (0.51), and slightly less correlated with the percentage of open scenes (0.45). However, it is strongly positively correlated with the percentage of scenes with both split rhymes and verse lines (0.73). Comedies with very fast-paced dialogue are more likely to have scenes interconnected through both rhyme and shared verse lines. One unexpected discovery is that dialogue vivacity is only weakly correlated with the mobility coefficient (0.06): more active movement of dramatic characters on stage does not necessarily entail shorter utterances.
The percentage of scenes with split verse lines is moderately positively correlated with the percentage of scenes with split rhymes (0.66): the scenes that are connected by verse are likely but not necessarily always going to be connected through rhyme.
Such features as the percentage of open scenes and the percentage of scenes with split rhymes and verse lines are strongly positively correlated with their constituent features (the correlation of the percentage of open scenes with the percentage of scenes with split verse lines is 0.86, with the percentage of split rhymes is 0.92). From this, we can infer that the bulk of the open scenes are connected through rhymes. The percentage of scenes with split rhymes and verse lines is strongly positively correlated with the percentage of scenes with split verse lines (0.87) and the percentage of scenes with split rhymes._____no_output_____## Part 3. Feature Distributions and Normality_____no_output_____
<code>
make_plot(comedies_verse_features['dialogue_vivacity'],
'Distribution of the Dialogue Vivacity Coefficient')/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
mean, std, median = summary(comedies_verse_features['dialogue_vivacity'])
print('Mean dialogue vivacity coefficient', round(mean, 2))
print('Standard deviation of the dialogue vivacity coefficient:', round(std, 2))
print('Median dialogue vivacity coefficient:', median)Mean dialogue vivacity coefficient 0.46
Standard deviation of the dialogue vivacity coefficient: 0.1
Median dialogue vivacity coefficient: 0.4575
</code>
### Shapiro-Wilk Normality Test_____no_output_____
<code>
print('The p-value of the Shapiro-Wilk normality test:',
shapiro(comedies_verse_features['dialogue_vivacity'])[1])The p-value of the Shapiro-Wilk normality test: 0.2067030817270279
</code>
The Shapiro-Wilk test for the coefficient of dialogue vivacity produced a p-value of 0.2067, which was above the 0.05 significance level, so we failed to reject the null hypothesis of normal distribution._____no_output_____
<code>
make_plot(comedies_verse_features['percentage_scene_split_verse'],
'Distribution of The Percentage of Scenes with Split Verse Lines')_____no_output_____mean, std, median = summary(comedies_verse_features['percentage_scene_split_verse'])
print('Mean percentage of scenes with split verse lines:', round(mean, 2))
print('Standard deviation of the percentage of scenes with split verse lines:', round(std, 2))
print('Median percentage of scenes with split verse lines:', median)Mean percentage of scenes with split verse lines: 30.39
Standard deviation of the percentage of scenes with split verse lines: 14.39
Median percentage of scenes with split verse lines: 28.854
print('The p-value of the Shapiro-Wilk normality test:',
shapiro(comedies_verse_features['percentage_scene_split_verse'])[1])The p-value of the Shapiro-Wilk normality test: 0.8681985139846802
</code>
The Shapiro-Wilk test for the percentage of scenes with split verse lines produced a very high p-value (0.8682), so we failed to reject the null hypothesis of normal distribution._____no_output_____
<code>
make_plot(comedies_verse_features['percentage_scene_split_rhymes'],
'Distribution of The Percentage of Scenes with Split Rhymes')_____no_output_____mean, std, median = summary(comedies_verse_features['percentage_scene_split_rhymes'])
print('Mean percentage of scenes with split rhymes:', round(mean, 2))
print('Standard deviation of the percentage of scenes with split rhymes:', round(std, 2))
print('Median percentage of scenes with split rhymes:', median)Mean percentage of scenes with split rhymes: 39.77
Standard deviation of the percentage of scenes with split rhymes: 16.24
Median percentage of scenes with split rhymes: 36.6365
print('The p-value of the Shapiro-Wilk normality test:',
shapiro(comedies_verse_features['percentage_scene_split_rhymes'])[1])The p-value of the Shapiro-Wilk normality test: 0.5752763152122498
</code>
The Shapiro-Wilk test for the percentage of scenes with split rhymes produced a p-value of 0.5753, much higher than the 0.05 significance level. Therefore, we failed to reject the null hypothesis of normal distribution._____no_output_____
<code>
make_plot(comedies_verse_features['percentage_open_scenes'],
'Distribution of The Percentage of Open Scenes')_____no_output_____mean, std, median = summary(comedies_verse_features['percentage_open_scenes'])
print('Mean percentage of open scenes:', round(mean, 2))
print('Standard deviation of the percentage of open scenes:', round(std, 2))
print('Median percentage of open scenes:', median)Mean percentage of open scenes: 55.62
Standard deviation of the percentage of open scenes: 19.25
Median percentage of open scenes: 56.6605
print('The p-value of the Shapiro-Wilk normality test:',
shapiro(comedies_verse_features['percentage_open_scenes'])[1])The p-value of the Shapiro-Wilk normality test: 0.3018988370895386
</code>
The Shapiro-Wilk test for the percentage of open scenes produced a p-value of 0.3019, well above the 0.05 significance level. Therefore, we failed to reject the null hypothesis that the percentage of open scenes is normally distributed._____no_output_____
<code>
make_plot(comedies_verse_features['percentage_scenes_rhymes_split_verse'],
'Distribution of The Percentage of Scenes with Split Verse Lines and Rhymes')_____no_output_____mean, std, median = summary(comedies_verse_features['percentage_scenes_rhymes_split_verse'])
print('Mean percentage of scenes with split rhymes and verse lines:', round(mean, 2))
print('Standard deviation of the percentage of scenes with split rhymes and verse lines:', round(std, 2))
print('Median percentage of scenes with split rhymes and verse lines:', median)Mean percentage of scenes with split rhymes and verse lines: 14.53
Standard deviation of the percentage of scenes with split rhymes and verse lines: 9.83
Median percentage of scenes with split rhymes and verse lines: 13.0155
print('The p-value of the Shapiro-Wilk normality test:',
shapiro(comedies_verse_features['percentage_scenes_rhymes_split_verse'])[1])The p-value of the Shapiro-Wilk normality test: 0.015218793414533138
</code>
The Shapiro-Wilk test for the percentage of scenes with split verse lines and rhymes produced a low p-value (0.0152), below the 0.05 significance level. Therefore, we rejected the null hypothesis of normal distribution._____no_output_____### Summary:
1. The majority of the verse features were normally distributed. For them, we could use a parametric statistical test.
2. The only feature that was not normally distributed was the percentage of scenes with split rhymes and verse lines. For this feature, we used a non-parametric test, the Mann-Whitney U test._____no_output_____## Part 4. Hypothesis Testing_____no_output_____We will run statistical tests to determine whether the two periods identified for the Russian five-act verse tragedy also differ significantly for the Russian five-act comedy. The two periods are:
- Period One (from 1747 to 1794)
- Period Two (from 1795 to 1822)
For all features that were normally distributed, we will use the *scipy.stats* Python library to run a **t-test** to check whether there is a difference between Period One and Period Two. The null hypothesis is that there is no difference between the two periods. The alternative hypothesis is that the two periods are different. Our significance level will be set at 0.05. If the p-value produced by the t-test is below 0.05, we will reject the null hypothesis of no difference.
For the percentage of scenes with split rhymes and verse lines, we will run **the Mann-Whitney u-test** to check whether there is a difference between Period One and Period Two. The null hypothesis will be no difference between these periods, whereas the alternative hypothesis will be that the periods will be different.
Since both periods have fewer than 20 comedies, we cannot use scipy's Mann-Whitney U test, which requires each sample size to be at least 20 because it relies on a normal approximation. Instead, we will have to run the Mann-Whitney U test without a normal approximation, for which we wrote a custom function. The details about the test can be found in the following resource: https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_nonparametric/bs704_nonparametric4.html.
One limitation that we need to mention is the sample size. The first period has only six comedies and the second period has only ten. However, it is impossible to increase the sample size - we cannot ask the Russian playwrights of the eighteenth and nineteenth centuries to produce more five-act verse comedies. If there are other Russian five-act comedies of these periods, they are either unknown or not available to us._____no_output_____
<code>
comedies_verse_features['period'] = comedies_verse_features.creation_date.apply(determine_period)_____no_output_____period_one = comedies_verse_features[comedies_verse_features['period'] == 1].copy()
period_two = comedies_verse_features[comedies_verse_features['period'] == 2].copy()_____no_output_____period_one.shape_____no_output_____period_two.shape_____no_output_____
</code>
## The T-Test_____no_output_____### The Coefficient of Dialogue Vivacity_____no_output_____
<code>
from scipy.stats import ttest_ind_____no_output_____ttest_ind(period_one['dialogue_vivacity'],
period_two['dialogue_vivacity'], equal_var=False)_____no_output_____
</code>
### The Percentage of Scenes With Split Verse Lines_____no_output_____
<code>
ttest_ind(period_one['percentage_scene_split_verse'],
period_two['percentage_scene_split_verse'], equal_var=False)_____no_output_____
</code>
### The Percentage of Scenes With Split Rhymes_____no_output_____
<code>
ttest_ind(period_one['percentage_scene_split_rhymes'],
period_two['percentage_scene_split_rhymes'], equal_var=False)_____no_output_____
</code>
### The Percentage of Open Scenes_____no_output_____
<code>
ttest_ind(period_one['percentage_open_scenes'],
period_two['percentage_open_scenes'], equal_var=False)_____no_output_____
</code>
### Summary
|Feature |p-value |Result
|---------------------------| ----------------|--------------------------------
| The coefficient of dialogue vivacity |0.92 | Not Significant
|The percentage of scenes with split verse lines|0.009 | Significant
|The percentage of scenes with split rhymes| 0.44| Not significant
|The percentage of open scenes| 0.10| Not significant_____no_output_____## The Mann-Whitney Test_____no_output_____The Process:
- Our null hypothesis is that there is no difference between the two periods. Our alternative hypothesis is that the periods are different.
- We will set the significance level (alpha) at 0.05.
- We will run the test and calculate the test statistic.
- We will compare the test statistic with the critical value of U for a two-tailed test at alpha=0.05. Critical values can be found at https://www.real-statistics.com/statistics-tables/mann-whitney-table/.
- If our test statistic is equal or lower than the critical value of U, we will reject the null hypothesis. Otherwise, we will fail to reject it._____no_output_____### The Percentage of Scenes With Split Verse Lines and Rhymes_____no_output_____
<code>
small_sample_mann_whitney_u_test(period_one['percentage_scenes_rhymes_split_verse'],
period_two['percentage_scenes_rhymes_split_verse'])_____no_output_____
</code>
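For completeness, the decision rule described above can be written out explicitly. The small helper below is an addition (not part of the original notebook); the critical value of 11 for samples of size 6 and 10 at alpha = 0.05 is the one quoted in the critical-value table that follows.
<code>
# Apply the stated decision rule: reject the null hypothesis only if the U statistic
# is less than or equal to the tabulated critical value.
def mann_whitney_decision(u_statistic, critical_value):
    if u_statistic <= critical_value:
        return 'Reject the null hypothesis: the two periods differ.'
    return 'Fail to reject the null hypothesis: no evidence of a difference.'

u_stat = small_sample_mann_whitney_u_test(period_one['percentage_scenes_rhymes_split_verse'],
                                           period_two['percentage_scenes_rhymes_split_verse'])
# critical value of U for n_one = 6, n_two = 10 at alpha = 0.05 (two-tailed)
print(mann_whitney_decision(u_stat, critical_value=11))
</code>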
### Critical Value of U_____no_output_____|Periods |Critical Value of U
|---------------------------| ----------------
| Period One (n=6) and Period Two (n=10) |11
_____no_output_____### Summary
|Feature |u-statistic |Result
|---------------------------| ----------------|--------------------------------
| The percentage of scenes with split verse lines and rhymes|21 | Not Significant_____no_output_____We discovered that the distribution of only one feature, the percentage of scenes with split verse lines, was significantly different between Periods One and Two. Distributions of the other features did not prove to be significantly different. _____no_output_____## Part 5. Visualizations_____no_output_____
<code>
def scatter(df, feature, title, xlabel, text_y):
sns.jointplot('creation_date',
feature,
data=df,
color='b',
height=7).plot_joint(
sns.kdeplot,
zorder=0,
n_levels=20)
plt.axvline(1795, color='grey',linestyle='dashed', linewidth=2)
plt.text(1795.5, text_y, '1795')
plt.title(title, fontsize=20, pad=100)
plt.xlabel('Date', fontsize=14)
plt.ylabel(xlabel, fontsize=14)
plt.show()_____no_output_____
</code>
### The Coefficient of Dialogue Vivacity_____no_output_____
<code>
scatter(comedies_verse_features,
'dialogue_vivacity',
'The Coefficient of Dialogue Vivacity by Year',
'The Coefficient of Dialogue Vivacity',
0.85)/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
</code>
### The Percentage of Scenes With Split Verse Lines_____no_output_____
<code>
scatter(comedies_verse_features,
'percentage_scene_split_verse',
'The Percentage of Scenes With Split Verse Lines by Year',
'Percentage of Scenes With Split Verse Lines',
80)/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
</code>
### The Percentage of Scenes With Split Rhymes_____no_output_____
<code>
scatter(comedies_verse_features,
'percentage_scene_split_rhymes',
'The Percentage of Scenes With Split Rhymes by Year',
'The Percentage of Scenes With Split Rhymes',
80)/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
</code>
### The Percentage of Open Scenes_____no_output_____
<code>
scatter(comedies_verse_features,
'percentage_open_scenes',
'The Percentage of Open Scenes by Year',
'The Percentage of Open Scenes',
100)/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
</code>
### The Percentage of Scenes With Split Verse Lines and Rhymes_____no_output_____
<code>
scatter(comedies_verse_features,
'percentage_scenes_rhymes_split_verse',
' The Percentage of Scenes With Split Verse Lines and Rhymes by Year',
' The Percentage of Scenes With Split Verse Lines and Rhymes',
45)/opt/anaconda3/envs/text_extraction/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
</code>
## Part 6. Descriptive Statistics For Two Periods and Overall_____no_output_____### The Coefficient of Dialogue Vivacity_____no_output_____#### In Entire Corpus_____no_output_____
<code>
comedies_verse_features.describe().loc[:, 'dialogue_vivacity'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
#### By Tentative Periods_____no_output_____
<code>
comedies_verse_features.groupby('period').describe().loc[:, 'dialogue_vivacity'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
### The Percentage of Scenes With Split Verse Lines_____no_output_____#### In Entire Corpus_____no_output_____
<code>
comedies_verse_features.describe().loc[:, 'percentage_scene_split_verse'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
#### By Periods_____no_output_____
<code>
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scene_split_verse'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
### The Percentage of Scenes With Split Rhymes_____no_output_____
<code>
comedies_verse_features.describe().loc[:, 'percentage_scene_split_rhymes'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
#### By Tentative Periods_____no_output_____
<code>
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scene_split_rhymes'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
### The Percentage of Open Scenes_____no_output_____#### In Entire Corpus_____no_output_____
<code>
comedies_verse_features.describe().loc[:, 'percentage_open_scenes'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
#### By Tentative Periods_____no_output_____
<code>
comedies_verse_features.groupby('period').describe().loc[:, 'percentage_open_scenes'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
### The Percentage of Scenes With Split Verse Lines and Rhymes_____no_output_____
<code>
comedies_verse_features.describe().loc[:, 'percentage_scenes_rhymes_split_verse'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____comedies_verse_features.groupby('period').describe().loc[:, 'percentage_scenes_rhymes_split_verse'][['mean',
'std',
'50%',
'min',
'max']].round(2)_____no_output_____
</code>
### Summary:
1. The mean dialogue vivacity in the corpus of the Russian five-act comedy in verse was 0.46, with a 0.10 standard deviation. In the tentative Period One, the mean dialogue vivacity was 0.46, the same as in the tentative Period Two. The standard deviation increased from 0.05 in the tentative Period One to 0.13 in the tentative Period Two.
2. The mean percentage of scenes with split verse lines in the corpus was 30.39%, with a standard deviation of 14.39. In Period One, the mean percentage of scenes with split verse lines was 19.37%, with a standard deviation of 10.16. In Period Two, the mean percentage of scenes with split verse lines almost doubled to 37%, with a standard deviation of 12.57%.
3. The average percentage of scenes with split rhymes was higher in the entire corpus of the Russian five-act comedies in verse than the average percentage of scenes with split verse lines (39.77% vs. 30.39%), as was the standard deviation (16.24% vs. 14.39%). The percentage of scenes with split rhymes grew from the tentative Period One to the tentative Period Two from 35.55% to 42.30%; the standard deviation slightly increased from 15.73% to 16.82%.
4. In the corpus, the average percentage of open scenes was 55.62%, i.e., more than half of all scenes were connected either through rhyme or verse lines. The standard deviation was 19.25%. In the tentative Period One, the percentage of open scenes was 44.65%, with a standard deviation of 19.76%. In the tentative Period Two, the percentage of open scenes increased to 62.21%, with a standard deviation of 16.50%, i.e., the standard deviation was lower in Period Two.
5. For the corpus, only 14.53% of all scenes were connected through both rhymes and verse lines. The standard deviation of the percentage of scenes with split verse lines and rhymes was 9.83%. In the tentative Period One, the mean percentage of scenes with split verse lines and rhymes was 10.27%, with a standard deviation of 5.22%. In the tentative Period Two, the mean percentage of scenes with split verse lines and rhymes was 17.09%, with a much higher standard deviation of 11.25%._____no_output_____## Conclusions:
1. The majority of the examined features were normally distributed, except for the percentage of scenes with split verse lines and rhymes.
2. The distribution of the percentage of scenes with split verse lines differed significantly between Period One (from 1775 to 1794) and Period Two (from 1795 to 1849).
3. For the other verse features, there was no evidence to suggest that the two periods of the Russian five-act comedy in verse are significantly different.
4. The mean values of all examined features (except for the vivacity coefficient) increased from the tentative Period One to Period Two. The mean vivacity coefficient remained the same across both periods. The standard deviation of all examined features (except for the percentage of open scenes) increased from Period One to Period Two.
5. Judging by the natural clustering in the data evident from the visualizations, 1805 may be a more appropriate boundary between the two time periods for comedy._____no_output_____
| {
"repository": "innawendell/European_Comedy",
"path": "Analyses/The Evolution of The Russian Comedy_Verse_Features.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 948447,
"hexsha": "d002bc0e0081d73349f836a6e32db713d13f5fa2",
"max_line_length": 157244,
"avg_line_length": 383.0561389338,
"alphanum_fraction": 0.9222391973
} |
# Notebook from quantopian/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Path: Chapter4_TheGreatestTheoremNeverTold/Ch4_LawOfLargeNumbers_PyMC3.ipynb
# Chapter 4
`Original content created by Cam Davidson-Pilon`
`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`
______
## The greatest theorem never told
This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far. _____no_output_____### The Law of Large Numbers
Let $Z_i$ be $N$ independent samples from some probability distribution. According to *the Law of Large numbers*, so long as the expected value $E[Z]$ is finite, the following holds,
$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$
In words:
> The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.
This may seem like a boring result, but it will be the most useful tool you use._____no_output_____### Intuition
If the above Law is somewhat surprising, it can be made more clear by examining a simple example.
Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:
$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$
By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values:
\begin{align}
\frac{1}{N} \sum_{i=1}^N \;Z_i
& =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\\\[5pt]
& = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\\\[5pt]
& = c_1 \times \text{ (approximate frequency of $c_1$) } \\\\
& \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\\\[5pt]
& \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\\\[5pt]
& = E[Z]
\end{align}
Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost *any distribution*, minus some important cases we will encounter later.
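As a quick numerical check of the partition argument above (this snippet is an addition, not part of the original text), we can draw many samples of a two-valued random variable and compare the sample average with $c_1 P(Z = c_1) + c_2 P(Z = c_2)$; the particular values and probabilities below are arbitrary choices.
<code>
import numpy as np

c_1, c_2 = 1.0, 10.0   # the two possible values of Z (arbitrary)
p_c1 = 0.3             # P(Z = c_1), so P(Z = c_2) = 0.7
N = 100000

samples = np.random.choice([c_1, c_2], size=N, p=[p_c1, 1 - p_c1])
print("sample average: ", samples.mean())
print("expected value: ", c_1 * p_c1 + c_2 * (1 - p_c1))  # E[Z] = 7.3
</code>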
##### Example
____
Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables.
We sample `sample_size = 100000` Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`. _____no_output_____
<code>
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
figsize( 12.5, 5 )
sample_size = 100000
expected_value = lambda_ = 4.5
poi = np.random.poisson
N_samples = range(1,sample_size,100)
for k in range(3):
samples = poi( lambda_, sample_size )
partial_average = [ samples[:i].mean() for i in N_samples ]
plt.plot( N_samples, partial_average, lw=1.5,label="average \
of $n$ samples; seq. %d"%k)
plt.plot( N_samples, expected_value*np.ones_like( partial_average),
ls = "--", label = "true expected value", c = "k" )
plt.ylim( 4.35, 4.65)
plt.title( "Convergence of the average of \n random variables to its \
expected value" )
plt.ylabel( "average of $n$ samples" )
plt.xlabel( "# of samples, $n$")
plt.legend();_____no_output_____
</code>
Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have another name for *flirting*: convergence.
Another very relevant question we can ask is *how quickly am I converging to the expected value?* Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — *compute on average*? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:
$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$
The above formula is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same.) As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:
$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$
By computing the above many, $N_Y$, times (remember, it is random), and averaging them:
$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$
Finally, taking the square root:
$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$ _____no_output_____
<code>
figsize( 12.5, 4)
N_Y = 250 #use this many to approximate D(N)
N_array = np.arange( 1000, 50000, 2500 ) #use this many samples in the approx. to the variance.
D_N_results = np.zeros( len( N_array ) )
lambda_ = 4.5
expected_value = lambda_ #for X ~ Poi(lambda) , E[ X ] = lambda
def D_N( n ):
"""
This function approx. D_n, the average variance of using n samples.
"""
Z = poi( lambda_, (n, N_Y) )
average_Z = Z.mean(axis=0)
return np.sqrt( ( (average_Z - expected_value)**2 ).mean() )
for i,n in enumerate(N_array):
D_N_results[i] = D_N(n)
plt.xlabel( "$N$" )
plt.ylabel( "expected squared-distance from true value" )
plt.plot(N_array, D_N_results, lw = 3,
label="expected distance between\n\
expected value and \naverage of $N$ random variables.")
plt.plot( N_array, np.sqrt(expected_value)/np.sqrt(N_array), lw = 2, ls = "--",
label = r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$" )
plt.legend()
plt.title( "How 'fast' is the sample average converging? " );_____no_output_____
</code>
As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variable distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty so what's the *statistical* point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a *larger* $N$ is fine too.
### How do we compute $Var(Z)$ though?
The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:
$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$
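In code, the variance estimate is just another sample average, this time over squared deviations from the estimated mean. The short sketch below is an addition (not part of the original text) and reuses `poi` and `lambda_` defined earlier.
<code>
# Estimate Var(Z) with the Law of Large Numbers: first estimate mu = E[Z],
# then average the squared deviations from that estimate.
Z = poi(lambda_, 100000)              # Poisson(4.5) samples; poi and lambda_ are defined above
mu_hat = Z.mean()                     # Law of Large Numbers estimate of E[Z]
var_hat = ((Z - mu_hat) ** 2).mean()  # Law of Large Numbers estimate of Var(Z)
print(var_hat)                        # for a Poisson this should be close to lambda_ = 4.5
</code>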
### Expected values and probabilities
There is an even less explicit relationship between expected value and estimating probabilities. Define the *indicator function*
$$\mathbb{1}_A(x) =
\begin{cases} 1 & x \in A \\\\
0 & else
\end{cases}
$$
Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:
$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$
Again, this is fairly obvious after a moment's thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 5, and we have many samples from an $Exp(.5)$ distribution.
$$ P( Z > 5 ) \approx \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{z > 5 }(Z_i) $$
_____no_output_____
<code>
N = 10000
print( np.mean( [ np.random.exponential( 0.5 ) > 5 for i in range(N) ] ) )0.0001
</code>
### What does this all have to do with Bayesian statistics?
In Bayesian inference, *point estimates*, to be introduced in the next chapter, are computed using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and it also depends on the variance of the samples (recall from above that a high variance means the average will converge more slowly).
We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us *confidence in how unconfident we should be*. The next section deals with this issue._____no_output_____## The Disorder of Small Numbers
The Law of Large Numbers is only valid as $N$ gets *infinitely* large: never truly attainable. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.
##### Example: Aggregated geographic data
Often data comes in aggregated form. For instance, data may be grouped at the state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can *fail* for areas with small populations.
We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, the population in each county is uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does **not** vary across counties, and each individual, regardless of the county he or she is currently living in, has the same distribution of what their height may be:
$$ \text{height} \sim \text{Normal}(150, 15 ) $$
We aggregate the individuals at the county level, so we only have data for the *average in the county*. What might our dataset look like?_____no_output_____
<code>
figsize( 12.5, 4)
std_height = 15
mean_height = 150
n_counties = 5000
pop_generator = np.random.randint
norm = np.random.normal
#generate some artificial population numbers
population = pop_generator(100, 1500, n_counties )
average_across_county = np.zeros( n_counties )
for i in range( n_counties ):
#generate some individuals and take the mean
average_across_county[i] = norm(mean_height, 1./std_height,
population[i] ).mean()
#located the counties with the apparently most extreme average heights.
i_min = np.argmin( average_across_county )
i_max = np.argmax( average_across_county )
#plot population size vs. recorded average
plt.scatter( population, average_across_county, alpha = 0.5, c="#7A68A6")
plt.scatter( [ population[i_min], population[i_max] ],
[average_across_county[i_min], average_across_county[i_max] ],
s = 60, marker = "o", facecolors = "none",
edgecolors = "#A60628", linewidths = 1.5,
label="extreme heights")
plt.xlim( 100, 1500 )
plt.title( "Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot( [100, 1500], [150, 150], color = "k", label = "true expected \
height", ls="--" )
plt.legend(scatterpoints = 1);_____no_output_____
</code>
What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.
We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the population sizes of the counties with the most extreme average heights should also be uniformly spread over 100 to 1500, and certainly independent of the counties' average heights. Not so. Below are the population sizes of the counties with the most extreme heights._____no_output_____
<code>
print("Population sizes of 10 'shortest' counties: ")
print(population[ np.argsort( average_across_county )[:10] ], '\n')
print("Population sizes of 10 'tallest' counties: ")
print(population[ np.argsort( -average_across_county )[:10] ])Population sizes of 10 'shortest' counties:
[109 135 135 133 109 157 175 120 105 131]
Population sizes of 10 'tallest' counties:
[122 133 313 109 124 280 106 198 326 216]
</code>
Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
##### Example: Kaggle's *U.S. Census Return Rate Challenge*
Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a group block, measured between 0 and 100, using census variables (median income, number of females in the block-group, number of trailer parks, average number of children etc.). Below we plot the census mail-back rate versus block group population:_____no_output_____
<code>
figsize( 12.5, 6.5 )
data = np.genfromtxt( "./data/census_data.csv", skip_header=1,
delimiter= ",")
plt.scatter( data[:,1], data[:,0], alpha = 0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3 )
plt.ylim( -5, 105)
i_min = np.argmin( data[:,0] )
i_max = np.argmax( data[:,0] )
plt.scatter( [ data[i_min,1], data[i_max, 1] ],
[ data[i_min,0], data[i_max,0] ],
s = 60, marker = "o", facecolors = "none",
edgecolors = "#A60628", linewidths = 1.5,
label="most extreme points")
plt.legend(scatterpoints = 1);_____no_output_____
</code>
The above is a classic phenomenon in statistics. I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book *"You don't have big data problems!"*, but here again is an example of the trouble with *small datasets*, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (ex. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are *stable*, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript [The Most Dangerous Equation](http://nsm.uh.edu/~dgraur/niv/TheMostDangerousEquation.pdf). _____no_output_____##### Example: How to order Reddit submissions
You may have disagreed with the original statement that the Law of Large Numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with so few reviewers, the average rating is **not** a good reflection of the true value of the product.
This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, returns poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, while truly higher-quality videos or comments are hidden on later pages with *falsely-substandard* ratings of around 4.8. How can we correct this?
Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, called submissions, for people to comment on. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.
<img src="http://i.imgur.com/3v6bz9f.png" />
How would you determine which submissions are the best? There are a number of ways to achieve this:
1. *Popularity*: A submission is considered good if it has many upvotes. A problem with this model is a submission with hundreds of upvotes but thousands of downvotes: while being very *popular*, the submission is likely more controversial than good.
2. *Difference*: Use the *difference* of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the *Top* submissions to be those made during high-traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
3. *Time adjusted*: Consider using Difference divided by the age of the submission. This creates a *rate*, something like *difference per second*, or *per minute*. An immediate counter-example is, if we use per second, a 1-second-old submission with 1 upvote would be better than a 100-second-old submission with 99 upvotes. One can avoid this by only considering submissions that are at least t seconds old. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
4. *Ratio*: Rank submissions by the ratio of upvotes to the total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions that score well can be considered Top just as likely as older submissions, provided they have many upvotes relative to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is *more likely* to be better.
I used the phrase *more likely* for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though this is not likely.
What we really want is an estimate of the *true upvote ratio*. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission a upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.
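To make these shortcomings concrete, the sketch below (an addition, not part of the original text) scores the two submissions from the *Ratio* example with each of the naive methods; the submission ages are made-up numbers used only to illustrate the time-adjusted rate.
<code>
# Naive scores for the two submissions discussed above: one with a single upvote,
# and one with 999 upvotes and 1 downvote. Ages (in seconds) are invented for illustration.
submissions = {"1 up / 0 down": (1, 0, 60),
               "999 up / 1 down": (999, 1, 86400)}

for name, (ups, downs, age) in submissions.items():
    popularity = ups                 # method 1
    difference = ups - downs         # method 2
    rate = difference / age          # method 3 (time adjusted)
    ratio = ups / (ups + downs)      # method 4
    print(name, "| popularity:", popularity, "| difference:", difference,
          "| rate: %.4f" % rate, "| ratio: %.3f" % ratio)
</code>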
_____no_output_____One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:
1. Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold, but again problems are encountered: there is a tradeoff between the number of submissions available to use and a higher threshold with its associated ratio precision.
2. Biased data: Reddit is composed of different subpages, called subreddits. Two examples are *r/aww*, which posts pics of cute animals, and *r/politics*. It is very likely that user behaviour towards submissions of these two subreddits is very different: visitors are likely friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.
In light of these, I think it is better to use a `Uniform` prior.
With our prior in place, we can find the posterior of the true upvote ratio. The Python script `top_showerthoughts_submissions.py` will scrape the best posts from the `showerthoughts` community on Reddit. This is a text-only community so the title of each post *is* the post. Below is the top post as well as some other sample posts:_____no_output_____
<code>
# Adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2
print("Post contents: \n")
print(top_post)Post contents:
Toilet paper should be free and have advertising printed on it.
"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions = len(votes)
submissions = np.random.randint( n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------"%n_submissions)
for i in submissions:
print('"' + contents[i] + '"')
print("upvotes/downvotes: ",votes[i,:], "\n")Some Submissions (out of 98 total)
-----------
"Rappers from the 90's used guns when they had beef rappers today use Twitter."
upvotes/downvotes: [32 3]
"All polls are biased towards people who are willing to take polls"
upvotes/downvotes: [1918 101]
"Taco Bell should give customers an extra tortilla so they can make a burrito out of all the stuff that spilled out of the other burritos they ate."
upvotes/downvotes: [79 17]
"There should be an /r/alanismorissette where it's just examples of people using "ironic" incorrectly"
upvotes/downvotes: [33 6]
</code>
For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular submission's upvote/downvote pair._____no_output_____
<code>
import pymc3 as pm
def posterior_upvote_ratio( upvotes, downvotes, samples = 20000):
"""
This function accepts the number of upvotes and downvotes a particular submission received,
and the number of posterior samples to return to the user. Assumes a uniform prior.
"""
N = upvotes + downvotes
with pm.Model() as model:
upvote_ratio = pm.Uniform("upvote_ratio", 0, 1)
observations = pm.Binomial( "obs", N, upvote_ratio, observed=upvotes)
trace = pm.sample(samples, step=pm.Metropolis())
burned_trace = trace[int(samples/4):]
return burned_trace["upvote_ratio"]
_____no_output_____
</code>
Below are the resulting posterior distributions._____no_output_____
<code>
figsize( 11., 8)
posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
j = submissions[i]
posteriors.append( posterior_upvote_ratio( votes[j, 0], votes[j,1] ) )
plt.hist( posteriors[i], bins = 10, normed = True, alpha = .9,
histtype="step",color = colours[i%5], lw = 3,
label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
plt.hist( posteriors[i], bins = 10, normed = True, alpha = .2,
histtype="stepfilled",color = colours[i], lw = 3, )
plt.legend(loc="upper left")
plt.xlim( 0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
[-------100%-------] 20000 of 20000 in 1.4 sec. | SPS: 14595.5 | ETA: 0.0Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
[-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15189.5 | ETA: 0.0Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
[-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15429.0 | ETA: 0.0Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model.
[-------100%-------] 20000 of 20000 in 1.3 sec. | SPS: 15146.5 | ETA: 0.0
</code>
Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty about what the true upvote ratio might be.
### Sorting!
We have been ignoring the goal of this exercise: how do we sort the submissions from *best to worst*? Of course, we cannot sort distributions; we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Choosing the mean is a bad choice, though, because it does not take into account the uncertainty of the distribution.
I suggest using the *95% least plausible value*, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:_____no_output_____
<code>
N = posteriors[0].shape[0]
lower_limits = []
for i in range(len(submissions)):
j = submissions[i]
plt.hist( posteriors[i], bins = 20, normed = True, alpha = .9,
histtype="step",color = colours[i], lw = 3,
label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
plt.hist( posteriors[i], bins = 20, normed = True, alpha = .2,
histtype="stepfilled",color = colours[i], lw = 3, )
v = np.sort( posteriors[i] )[ int(0.05*N) ]
#plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 )
plt.vlines( v, 0, 10 , color = colours[i], linestyles = "--", linewidths=3 )
lower_limits.append(v)
plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");
order = np.argsort( -np.array( lower_limits ) )
print(order, lower_limits)[1 0 2 3] [0.80034320917496615, 0.94092009444598201, 0.74660503350561902, 0.72190353389632911]
</code>
The best submissions, according to our procedure, are the submissions that are *most-likely* to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.
Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. That is, even in the worst case scenario, when we have severely overestimated the upvote ratio, we can be sure the best submissions are still on top. Under this ordering, we impose the following very natural properties:
1. given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
2. given two submissions with the same number of votes, we still assign the submission with more upvotes as *better*.
### But this is too slow for real-time!
I agree: computing the posterior of every submission takes a long time, and by the time you have computed it, the data has likely changed. I defer the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very quickly.
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + u \\\\
& b = 1 + d \\\\
\end{align}
$u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
_____no_output_____
<code>
def intervals(u,d):
a = 1. + u
b = 1. + d
mu = a/(a+b)
std_err = 1.65*np.sqrt( (a*b)/( (a+b)**2*(a+b+1.) ) )
return ( mu, std_err )
print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:,0],votes[:,1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")
order = np.argsort( -lb )
ordered_contents = []
for i in order[:40]:
ordered_contents.append( contents[i] )
print(votes[i,0], votes[i,1], contents[i])
print("-------------")Approximate lower bounds:
[ 0.93349005 0.9532194 0.94149718 0.90859764 0.88705356 0.8558795
0.85644927 0.93752679 0.95767101 0.91131012 0.910073 0.915999
0.9140058 0.83276025 0.87593961 0.87436674 0.92830849 0.90642832
0.89187973 0.89950891 0.91295322 0.78607629 0.90250203 0.79950031
0.85219422 0.83703439 0.7619808 0.81301134 0.7313114 0.79137561
0.82701445 0.85542404 0.82309334 0.75211374 0.82934814 0.82674958
0.80933194 0.87448152 0.85350205 0.75460106 0.82934814 0.74417233
0.79924258 0.8189683 0.75460106 0.90744016 0.83838023 0.78802791
0.78400654 0.64638659 0.62047936 0.76137738 0.81365241 0.83838023
0.78457533 0.84980627 0.79249393 0.69020315 0.69593922 0.70758151
0.70268831 0.91620627 0.73346864 0.86382644 0.80877728 0.72708753
0.79822085 0.68333632 0.81699014 0.65100453 0.79809005 0.74702492
0.77318569 0.83221179 0.66500492 0.68134548 0.7249286 0.59412132
0.58191312 0.73142963 0.73142963 0.66251028 0.87152685 0.74107856
0.60935684 0.87152685 0.77484517 0.88783675 0.81814153 0.54569789
0.6122496 0.75613569 0.53511973 0.74556767 0.81814153 0.85773646
0.6122496 0.64814153]
Top 40 Sorted according to approximate lower bounds:
596 18 Someone should develop an AI specifically for reading Terms & Conditions and flagging dubious parts.
-------------
2360 98 Porn is the only industry where it is not only acceptable but standard to separate people based on race, sex and sexual preference.
-------------
1918 101 All polls are biased towards people who are willing to take polls
-------------
948 50 They should charge less for drinks in the drive-thru because you can't refill them.
-------------
3740 239 When I was in elementary school and going through the DARE program, I was positive a gang of older kids was going to corner me and force me to smoke pot. Then I became an adult and realized nobody is giving free drugs to somebody that doesn't want them.
-------------
166 7 "Noted" is the professional way of saying "K".
-------------
29 0 Rewatching Mr. Bean, I've realised that the character is an eccentric genius and not a blithering idiot.
-------------
289 18 You've been doing weird cameos in your friends' dreams since kindergarten.
-------------
269 17 At some point every parent has stopped wiping their child's butt and hoped for the best.
-------------
121 6 Is it really fair to say a person over 85 has heart failure? Technically, that heart has done exceptionally well.
-------------
535 40 It's surreal to think that the sun and moon and stars we gaze up at are the same objects that have been observed for millenia, by everyone in the history of humanity from cavemen to Aristotle to Jesus to George Washington.
-------------
527 40 I wonder if America's internet is censored in a similar way that North Korea's is, but we have no idea of it happening.
-------------
1510 131 Kenny's family is poor because they're always paying for his funeral.
-------------
43 1 If I was as careful with my whole paycheck as I am with my last $20 I'd be a whole lot better off
-------------
162 10 Black hair ties are probably the most popular bracelets in the world.
-------------
107 6 The best answer to the interview question "What is your greatest weakness?" is "interviews".
-------------
127 8 Surfing the internet without ads feels like a summer evening without mosquitoes
-------------
159 12 I wonder if Superman ever put a pair of glasses on Lois Lane's dog, and she was like "what's this Clark? Did you get me a new dog?"
-------------
21 0 Sitting on a cold toilet seat or a warm toilet seat both suck for different reasons.
-------------
1414 157 My life is really like Rihanna's song, "just work work work work work" and the rest of it I can't really understand.
-------------
222 22 I'm honestly slightly concerned how often Reddit commenters make me laugh compared to my real life friends.
-------------
52 3 The world must have been a spookier place altogether when candles and gas lamps were the only sources of light at night besides the moon and the stars.
-------------
194 19 I have not been thankful enough in the last few years that the Black Eyed Peas are no longer ever on the radio
-------------
18 0 Living on the coast is having the window seat of the land you live on.
-------------
18 0 Binoculars are like walkie talkies for the deaf.
-------------
28 1 Now that I am a parent of multiple children I have realized that my parents were lying through their teeth when they said they didn't have a favorite.
-------------
16 0 I sneer at people who read tabloids, but every time I look someone up on Wikipedia the first thing I look for is what controversies they've been involved in.
-------------
1559 233 Kid's menus at restaurants should be smaller portions of the same adult dishes at lower prices and not the junk food that they usually offer.
-------------
1426 213 Eventually once all phones are waterproof we'll be able to push people into pools again
-------------
61 5 Myspace is so outdated that jokes about it being outdated has become outdated
-------------
52 4 As a kid, seeing someone step on a banana peel and not slip was a disappointment.
-------------
90 9 Yahoo!® is the RadioShack® of the Internet.
-------------
34 2 People who "tell it like it is" rarely do so to say something nice
-------------
39 3 Closing your eyes after turning off your alarm is a very dangerous game.
-------------
39 3 Your known 'first word' is the first word your parents heard you speak. In reality, it may have been a completely different word you said when you were alone.
-------------
87 10 "Smells Like Teen Spirit" is as old to listeners of today as "Yellow Submarine" was to listeners of 1991.
-------------
239 36 if an ocean didnt stop immigrants from coming to America what makes us think a wall will?
-------------
22 1 The phonebook was the biggest invasion of privacy that everyone was oddly ok with.
-------------
57 6 I'm actually the most productive when I procrastinate because I'm doing everything I possibly can to avoid the main task at hand.
-------------
57 6 You will never feel how long time is until you have allergies and snot slowly dripping out of your nostrils, while sitting in a classroom with no tissues.
-------------
</code>
We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern. _____no_output_____
<code>
r_order = order[::-1][-40:]
plt.errorbar( posterior_mean[r_order], np.arange( len(r_order) ),
xerr=std_err[r_order], capsize=0, fmt="o",
color = "#7A68A6")
plt.xlim( 0.3, 1)
plt.yticks( np.arange( len(r_order)-1,-1,-1 ), map( lambda x: x[:30].replace("\n",""), ordered_contents) );_____no_output_____
</code>
In the graphic above, you can see why sorting by mean would be sub-optimal._____no_output_____### Extension to Starred rating systems
The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply if we simply take the average: an item with two perfect ratings would beat an item with thousands of perfect ratings and a single sub-perfect rating.
We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. An $N$-star rating system can be seen as a more continuous version of this, and we can treat a rating of $n$ stars as equivalent to a reward of $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + S \\\\
& b = 1 + N - S \\\\
\end{align}
where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above. A short sketch of this variant follows.
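As a quick illustration (not from the original notebook), the `intervals` function above can be adapted to star ratings by plugging in these definitions of $a$ and $b$; the ratings below are invented.
<code>
import numpy as np

def star_lower_bound(ratings, n_stars=5):
    # ratings: a list of integer star ratings, e.g. [5, 4, 5, 2] (invented data)
    N = len(ratings)                       # number of users who rated
    S = sum(r / n_stars for r in ratings)  # sum of ratings mapped to [0, 1]
    a = 1. + S
    b = 1. + N - S
    mu = a / (a + b)
    std_err = 1.65 * np.sqrt((a * b) / ((a + b)**2 * (a + b + 1.)))
    return mu - std_err

# Two perfect ratings vs. a thousand ratings with a single imperfect one:
print(star_lower_bound([5, 5]))
print(star_lower_bound([5] * 999 + [4]))
</code>
The heavily-rated item wins, even though its average rating is slightly below perfect.
_____no_output_____##### Example: Counting Github stars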
What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large Numbers. Let's start pulling some data. TODO_____no_output_____### Conclusion
While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes only. We have seen how our inference can be affected by not considering *how the data is shaped*.
1. By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
2. Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread out rather than tightly concentrated. Thus, our inference should be correctable.
3. There are major implications of not considering the sample size, and trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.
_____no_output_____### Appendix
##### Derivation of sorting submissions formula
Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value $x$ such that the posterior assigns probability 0.05 to being less than $x$. This is usually done by inverting the CDF ([Cumulative Distribution Function](http://en.wikipedia.org/wiki/Cumulative_Distribution_Function)), but the CDF of the Beta distribution, for integer parameters, is known but is a large sum [3].
We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is
$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$
Hence we solve the following equation for $x$ and have an approximate lower bound.
$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$
$\Phi$ being the [cumulative distribution for the normal distribution](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution)
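Solving for $x$ gives $x = \mu + \sigma\,\Phi^{-1}(0.05) \approx \mu - 1.645\,\sigma$, which is where the (rounded) 1.65 in the shortcut formula comes from. A quick check, added here for illustration and not part of the original notebook:
<code>
from scipy.stats import norm
print(norm.ppf(0.05))  # approximately -1.645
</code>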
_____no_output_____##### Exercises
1\. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value *given* we know $X$ is less than 1? Would you need more samples than the original samples size to be equally accurate?_____no_output_____
<code>
## Enter code here
import scipy.stats as stats
exp = stats.expon( scale=4 )
N = 1e5
X = exp.rvs( int(N) )
## ..._____no_output_____
</code>
2\. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made?
-----
#### Kicker Careers Ranked by Make Percentage
<table><tbody><tr><th>Rank </th><th>Kicker </th><th>Make % </th><th>Number of Kicks</th></tr><tr><td>1 </td><td>Garrett Hartley </td><td>87.7 </td><td>57</td></tr><tr><td>2</td><td> Matt Stover </td><td>86.8 </td><td>335</td></tr><tr><td>3 </td><td>Robbie Gould </td><td>86.2 </td><td>224</td></tr><tr><td>4 </td><td>Rob Bironas </td><td>86.1 </td><td>223</td></tr><tr><td>5</td><td> Shayne Graham </td><td>85.4 </td><td>254</td></tr><tr><td>… </td><td>… </td><td>…</td><td> </td></tr><tr><td>51</td><td> Dave Rayner </td><td>72.2 </td><td>90</td></tr><tr><td>52</td><td> Nick Novak </td><td>71.9 </td><td>64</td></tr><tr><td>53 </td><td>Tim Seder </td><td>71.0 </td><td>62</td></tr><tr><td>54 </td><td>Jose Cortez </td><td>70.7</td><td> 75</td></tr><tr><td>55 </td><td>Wade Richey </td><td>66.1</td><td> 56</td></tr></tbody></table>_____no_output_____In August 2013, [a popular post](http://bpodgursky.wordpress.com/2013/08/21/average-income-per-programming-language/) on the average income per programmer of different languages was trending. Here's the summary chart: (reproduced without permission, cause when you lie with stats, you gunna get the hammer). What do you notice about the extremes?
------
#### Average household income by programming language
<table >
<tr><td>Language</td><td>Average Household Income ($)</td><td>Data Points</td></tr>
<tr><td>Puppet</td><td>87,589.29</td><td>112</td></tr>
<tr><td>Haskell</td><td>89,973.82</td><td>191</td></tr>
<tr><td>PHP</td><td>94,031.19</td><td>978</td></tr>
<tr><td>CoffeeScript</td><td>94,890.80</td><td>435</td></tr>
<tr><td>VimL</td><td>94,967.11</td><td>532</td></tr>
<tr><td>Shell</td><td>96,930.54</td><td>979</td></tr>
<tr><td>Lua</td><td>96,930.69</td><td>101</td></tr>
<tr><td>Erlang</td><td>97,306.55</td><td>168</td></tr>
<tr><td>Clojure</td><td>97,500.00</td><td>269</td></tr>
<tr><td>Python</td><td>97,578.87</td><td>2314</td></tr>
<tr><td>JavaScript</td><td>97,598.75</td><td>3443</td></tr>
<tr><td>Emacs Lisp</td><td>97,774.65</td><td>355</td></tr>
<tr><td>C#</td><td>97,823.31</td><td>665</td></tr>
<tr><td>Ruby</td><td>98,238.74</td><td>3242</td></tr>
<tr><td>C++</td><td>99,147.93</td><td>845</td></tr>
<tr><td>CSS</td><td>99,881.40</td><td>527</td></tr>
<tr><td>Perl</td><td>100,295.45</td><td>990</td></tr>
<tr><td>C</td><td>100,766.51</td><td>2120</td></tr>
<tr><td>Go</td><td>101,158.01</td><td>231</td></tr>
<tr><td>Scala</td><td>101,460.91</td><td>243</td></tr>
<tr><td>ColdFusion</td><td>101,536.70</td><td>109</td></tr>
<tr><td>Objective-C</td><td>101,801.60</td><td>562</td></tr>
<tr><td>Groovy</td><td>102,650.86</td><td>116</td></tr>
<tr><td>Java</td><td>103,179.39</td><td>1402</td></tr>
<tr><td>XSLT</td><td>106,199.19</td><td>123</td></tr>
<tr><td>ActionScript</td><td>108,119.47</td><td>113</td></tr>
</table>_____no_output_____### References
1. Wainer, Howard. *The Most Dangerous Equation*. American Scientist, Volume 95.
2. Clarck, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. page. [Web](http://www.sloansportsconference.com/wp-content/uploads/2013/Going%20for%20Three%20Predicting%20the%20Likelihood%20of%20Field%20Goal%20Success%20with%20Logistic%20Regression.pdf). 20 Feb. 2013.
3. http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function_____no_output_____
<code>
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()_____no_output_____
</code>
<style>
img{
max-width:800px}
</style>_____no_output_____
| {
"repository": "quantopian/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers",
"path": "Chapter4_TheGreatestTheoremNeverTold/Ch4_LawOfLargeNumbers_PyMC3.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 74,
"size": 633929,
"hexsha": "d002f51b520dfb6f7f7c8f13e0401f22dc925760",
"max_line_length": 117692,
"avg_line_length": 528.2741666667,
"alphanum_fraction": 0.9198332936
} |
# Notebook from ethiry99/HW16_Amazon_Vine_Analysis
Path: Vine_Review_Analysis.ipynb
<code>
# Dependencies and Setup
import pandas as pd_____no_output_____vine_review_df=pd.read_csv("Resources/vine_table.csv")
_____no_output_____vine_review_df.head()
_____no_output_____vine_review_df=vine_review_df.loc[(vine_review_df["total_votes"] >= 20) & (vine_review_df["helpful_votes"]/vine_review_df["total_votes"] >= .5)]_____no_output_____vine_review_df.head()_____no_output_____vine_rv_paid_df=vine_review_df.loc[vine_review_df["vine"]=="Y"]
vine_rv_paid_count=len(vine_rv_paid_df)
#print(f"5 Star paid percent {vine_five_star_paid_percent:.1%}\n"
print(f"Paid vine reviews = {vine_rv_paid_count}")Paid vine reviews = 386
vine_rv_unpaid_df=vine_review_df.loc[vine_review_df["vine"]=="N"]
vine_rv_unpaid_count=len(vine_rv_unpaid_df)
print(f"Paid (vine) reviews = {vine_rv_paid_count}")
print(f"Unpaid (vine) reviews = {vine_rv_unpaid_count}")Paid (vine) reviews = 386
Unpaid (vine) reviews = 48717
vine_rv_paid_five_star_df=vine_rv_paid_df.loc[(vine_rv_paid_df["star_rating"]==5)]
five_star_paid_count=len(vine_rv_paid_five_star_df)
print(f"Five star paid reviews = {five_star_paid_count}")Five star paid reviews = 176
vine_rv_unpaid_five_star_df=vine_rv_unpaid_df.loc[(vine_rv_unpaid_df["star_rating"]==5)]
five_star_unpaid_count=len(vine_rv_unpaid_five_star_df)
print(f"Five star paid reviews = {five_star_paid_count}")
print(f"Five star unpaid reviews = {five_star_unpaid_count}")Five star paid reviews = 176
Five star unpaid reviews = 24026
vine_five_star_paid_percent=five_star_paid_count/vine_rv_paid_count
vine_five_star_paid_percent_____no_output_____vine_five_star_unpaid_percent=five_star_unpaid_count/vine_rv_unpaid_count
vine_five_star_unpaid_percent_____no_output_____print(f"5 Star paid percent {vine_five_star_paid_percent:.1%}\n"
f"5 Star unpaid percent {vine_five_star_unpaid_percent:.1%}")5 Star paid percent 45.6%
5 Star unpaid percent 49.3%
</code>
| {
"repository": "ethiry99/HW16_Amazon_Vine_Analysis",
"path": "Vine_Review_Analysis.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 11488,
"hexsha": "d00352c042e11023e04cd767f979253bf98e6a8d",
"max_line_length": 156,
"avg_line_length": 25.9322799097,
"alphanum_fraction": 0.4093837047
} |
# Notebook from bbglab/adventofcode
Path: 2016/loris/day_1.ipynb
# Advent of Code 2016_____no_output_____
<code>
data = open('data/day_1-1.txt', 'r').readline().strip().split(', ')_____no_output_____class TaxiCab:
def __init__(self, data):
self.data = data
self.double_visit = []
self.position = {'x': 0, 'y': 0}
self.direction = {'x': 0, 'y': 1}
self.grid = {i: {j: 0 for j in range(-500, 501)} for i in range(-500, 501)}
def run(self):
for instruction in self.data:
toward = instruction[0]
length = int(instruction[1:])
self.move(toward, length)
def move(self, toward, length):
if toward == 'R':
if self.direction['x'] == 0:
# from UP
if self.direction['y'] == 1:
self.position['x'] += length
self.direction['x'] = 1
for i in range(self.position['x'] - length, self.position['x']):
self.grid[self.position['y']][i] += 1
if self.grid[self.position['y']][i] > 1:
self.double_visit.append((i, self.position['y']))
# from DOWN
else:
self.position['x'] -= length
self.direction['x'] = -1
for i in range(self.position['x'] + length, self.position['x'], -1):
self.grid[self.position['y']][i] += 1
if self.grid[self.position['y']][i] > 1:
self.double_visit.append((i, self.position['y']))
self.direction['y'] = 0
else:
# FROM RIGHT
if self.direction['x'] == 1:
self.position['y'] -= length
self.direction['y'] = -1
for i in range(self.position['y'] + length, self.position['y'], -1):
self.grid[i][self.position['x']] += 1
if self.grid[i][self.position['x']] > 1:
self.double_visit.append((self.position['x'], i))
# FROM LEFT
else:
self.position['y'] += length
self.direction['y'] = 1
for i in range(self.position['y'] - length, self.position['y']):
self.grid[i][self.position['x']] += 1
if self.grid[i][self.position['x']] > 1:
self.double_visit.append((self.position['x'], i))
self.direction['x'] = 0
else:
if self.direction['x'] == 0:
# from UP
if self.direction['y'] == 1:
self.position['x'] -= length
self.direction['x'] = -1
for i in range(self.position['x'] + length, self.position['x'], -1):
self.grid[self.position['y']][i] += 1
if self.grid[self.position['y']][i] > 1:
self.double_visit.append((i, self.position['y']))
# from DOWN
else:
self.position['x'] += length
self.direction['x'] = 1
for i in range(self.position['x'] - length, self.position['x']):
self.grid[self.position['y']][i] += 1
if self.grid[self.position['y']][i] > 1:
self.double_visit.append((i, self.position['y']))
self.direction['y'] = 0
else:
# FROM RIGHT
if self.direction['x'] == 1:
self.position['y'] += length
self.direction['y'] = 1
for i in range(self.position['y'] - length, self.position['y']):
self.grid[i][self.position['x']] += 1
if self.grid[i][self.position['x']] > 1:
self.double_visit.append((self.position['x'], i))
# FROM LEFT
else:
self.position['y'] -= length
self.direction['y'] = -1
for i in range(self.position['y'] + length, self.position['y'], -1):
self.grid[i][self.position['x']] += 1
if self.grid[i][self.position['x']] > 1:
self.double_visit.append((self.position['x'], i))
self.direction['x'] = 0
def get_distance(self):
return sum([abs(i) for i in self.position.values()])
def get_distance_first_double_visit(self):
return sum(self.double_visit[0]) if len(self.double_visit) > 0 else 0_____no_output_____# Test
def test(data, result):
tc = TaxiCab(data)
tc.run()
assert tc.get_distance() == result_____no_output_____test(data=['R2', 'L3'], result=5)
test(data=['R2', 'R2', 'R2'], result=2)
test(data=['R5', 'L5', 'R5', 'R3'], result=12)_____no_output_____tc = TaxiCab(data)
tc.run()
tc.get_distance()_____no_output_____
</code>
<code>
# Test
def test(data, result):
tc = TaxiCab(data)
tc.run()
assert tc.get_distance_first_double_visit() == result_____no_output_____test(data=['R8', 'R4', 'R4', 'R8'], result=4)_____no_output_____tc.get_distance_first_double_visit()_____no_output_____
</code>
| {
"repository": "bbglab/adventofcode",
"path": "2016/loris/day_1.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 10515,
"hexsha": "d0042eab5854b447de51a429a272d3a09f8991fe",
"max_line_length": 277,
"avg_line_length": 36.5104166667,
"alphanum_fraction": 0.4633380884
} |
# Notebook from rabest265/GunViolence
Path: Code/demographics_Lat_Long.ipynb
<code>
#API calls to Google Maps for Lat & Long_____no_output_____# Dependencies
import requests
import json
from config import gkey
import os
import csv
import pandas as pd
import numpy as np
_____no_output_____# Load CSV file
csv_path = os.path.join('..',"output", "demographics.csv")
# Read Purchasing File and store into Pandas data frame
cities_df = pd.read_csv(csv_path, encoding = "ISO-8859-1")
cities_df.head()_____no_output_____params = {"key": gkey}
# Loop through the cities_df that do not have lat & long and run a lat/long search for each city
for index, row in cities_df.iterrows():
if(pd.isnull(row['Lat'])):
base_url = "https://maps.googleapis.com/maps/api/geocode/json"
city = row['city_name']
state = row['State']
citystate=city + ", "+ state
print (citystate)
# update address key value
params['address'] = f"{city},{state}"
# make request
cities_lat_lng = requests.get(base_url, params=params)
# print the cities_lat_lng url, avoid doing for public github repos in order to avoid exposing key
# print(cities_lat_lng.url)
# convert to json
cities_lat_lng = cities_lat_lng.json()
cities_df.loc[index, "Lat"] = cities_lat_lng["results"][0]["geometry"]["location"]["lat"]
cities_df.loc[index, "Lng"] = cities_lat_lng["results"][0]["geometry"]["location"]["lng"]
# Visualize to confirm lat lng appear
cities_df.head()Quincy, WA
Raft Island, WA
Rainier, WA
Ravensdale, WA
Raymond, WA
Reardan, WA
Redmond, WA
Renton, WA
Republic, WA
Richland, WA
Ridgefield, WA
Ritzville, WA
Riverbend, WA
River Road, WA
Riverside, WA
Rochester, WA
Rockford, WA
Rock Island, WA
Rockport, WA
Rocky Point, WA
Ronald, WA
Roosevelt, WA
Rosalia, WA
Rosburg, WA
Rosedale, WA
Roslyn, WA
Roy, WA
Royal City, WA
Ruston, WA
Ryderwood, WA
St. John, WA
Salmon Creek, WA
Sammamish, WA
Santiago, WA
Satsop, WA
Seabeck, WA
SeaTac, WA
Seattle, WA
Sedro-Woolley, WA
Sekiu, WA
Selah, WA
Sequim, WA
Shadow Lake, WA
Shelton, WA
Shoreline, WA
Silvana, WA
Silverdale, WA
Silver Firs, WA
Sisco Heights, WA
Skamokawa Valley, WA
Skokomish, WA
Skykomish, WA
Snohomish, WA
Snoqualmie, WA
Snoqualmie Pass, WA
Soap Lake, WA
South Bend, WA
South Cle Elum, WA
South Creek, WA
South Hill, WA
South Prairie, WA
South Wenatchee, WA
Southworth, WA
Spanaway, WA
Spangle, WA
Spokane, WA
Spokane Valley, WA
Sprague, WA
Springdale, WA
Stansberry Lake, WA
Stanwood, WA
Starbuck, WA
Startup, WA
Steilacoom, WA
Steptoe, WA
Stevenson, WA
Sudden Valley, WA
Sultan, WA
Sumas, WA
Summit, WA
Summit View, WA
Summitview, WA
Sumner, WA
Sunday Lake, WA
Sunnyside, WA
Sunnyslope, WA
Suquamish, WA
Swede Heaven, WA
Tacoma, WA
Taholah, WA
Tampico, WA
Tanglewilde, WA
Tanner, WA
Tekoa, WA
Tenino, WA
Terrace Heights, WA
Thorp, WA
Three Lakes, WA
Tieton, WA
Tokeland, WA
Toledo, WA
Tonasket, WA
Toppenish, WA
Torboy, WA
Touchet, WA
Town and Country, WA
Tracyton, WA
Trout Lake, WA
Tukwila, WA
Tumwater, WA
Twin Lakes, WA
Twisp, WA
Union, WA
Union Gap, WA
Union Hill-Novelty Hill, WA
Uniontown, WA
University Place, WA
Upper Elochoman, WA
Vader, WA
Valley, WA
Vancouver, WA
Vantage, WA
Vashon, WA
Vaughn, WA
Venersborg, WA
Verlot, WA
Waitsburg, WA
Walla Walla, WA
Walla Walla East, WA
Waller, WA
Wallula, WA
Walnut Grove, WA
Wapato, WA
Warden, WA
Warm Beach, WA
Washougal, WA
Washtucna, WA
Waterville, WA
Wauna, WA
Waverly, WA
Wenatchee, WA
West Clarkston-Highland, WA
West Pasco, WA
Westport, WA
West Richland, WA
West Side Highway, WA
Whidbey Island Station, WA
White Center, WA
White Salmon, WA
White Swan, WA
Wilbur, WA
Wilderness Rim, WA
Wilkeson, WA
Willapa, WA
Wilson Creek, WA
Winlock, WA
Winthrop, WA
Wishram, WA
Wollochet, WA
Woodinville, WA
Woodland, WA
Woods Creek, WA
Woodway, WA
Yacolt, WA
Yakima, WA
Yarrow Point, WA
Yelm, WA
Zillah, WA
Accoville, WV
Addison, WV
Albright, WV
Alderson, WV
Alum Creek, WV
Amherstdale, WV
Anawalt, WV
Anmoore, WV
Ansted, WV
Apple Grove, WV
Arbovale, WV
Athens, WV
Auburn, WV
Aurora, WV
Bancroft, WV
Barboursville, WV
Barrackville, WV
Bartley, WV
Bartow, WV
Bath, WV
Bayard, WV
Beards Fork, WV
Beaver, WV
Beckley, WV
Beech Bottom, WV
Belington, WV
Belle, WV
Belmont, WV
Belva, WV
Benwood, WV
Bergoo, WV
Berwind, WV
Bethany, WV
Bethlehem, WV
Beverly, WV
Big Chimney, WV
Big Creek, WV
Big Sandy, WV
Birch River, WV
Blacksville, WV
Blennerhassett, WV
Bluefield, WV
Bluewell, WV
Boaz, WV
Bolivar, WV
Bolt, WV
Boomer, WV
Bowden, WV
Bradley, WV
Bradshaw, WV
Bramwell, WV
Brandonville, WV
Brandywine, WV
Brenton, WV
Bridgeport, WV
Brookhaven, WV
Bruceton Mills, WV
Bruno, WV
Brush Fork, WV
Buckhannon, WV
Bud, WV
Buffalo, WV
Burlington, WV
Burnsville, WV
Cairo, WV
Camden-on-Gauley, WV
Cameron, WV
Capon Bridge, WV
Carolina, WV
Carpendale, WV
Cass, WV
Cassville, WV
Cedar Grove, WV
Century, WV
Ceredo, WV
Chapmanville, WV
Charleston, WV
Charles Town, WV
Charlton Heights, WV
Chattaroy, WV
Chauncey, WV
Cheat Lake, WV
Chelyan, WV
Chesapeake, WV
Chester, WV
Clarksburg, WV
Clay, WV
Clearview, WV
Clendenin, WV
Coal City, WV
Coal Fork, WV
Comfort, WV
Corinne, WV
Covel, WV
Cowen, WV
Crab Orchard, WV
Craigsville, WV
Cross Lanes, WV
Crum, WV
Crumpler, WV
Cucumber, WV
Culloden, WV
Dailey, WV
Daniels, WV
Danville, WV
Davis, WV
Davy, WV
Deep Water, WV
Delbarton, WV
Despard, WV
Dixie, WV
Dunbar, WV
Durbin, WV
East Bank, WV
East Dailey, WV
Eccles, WV
Eleanor, WV
Elizabeth, WV
Elk Garden, WV
Elkins, WV
Elkview, WV
Ellenboro, WV
Enterprise, WV
Fairlea, WV
Fairmont, WV
Fairview, WV
Falling Spring, WV
Falling Waters, WV
Falls View, WV
Farmington, WV
Fayetteville, WV
Fenwick, WV
Flatwoods, WV
Flemington, WV
Follansbee, WV
Fort Ashby, WV
Fort Gay, WV
Frank, WV
Franklin, WV
Friendly, WV
Gallipolis Ferry, WV
Galloway, WV
Gary, WV
Gassaway, WV
Gauley Bridge, WV
Ghent, WV
Gilbert, WV
Gilbert Creek, WV
Glasgow, WV
Glen Dale, WV
Glen Ferris, WV
Glen Fork, WV
Glen Jean, WV
Glenville, WV
Glen White, WV
Grafton, WV
Grantsville, WV
Grant Town, WV
Granville, WV
Great Cacapon, WV
Green Bank, WV
Green Spring, WV
Greenview, WV
Gypsy, WV
Hambleton, WV
Hamlin, WV
Handley, WV
Harman, WV
Harpers Ferry, WV
Harrisville, WV
Hartford City, WV
Harts, WV
Hedgesville, WV
Helen, WV
Helvetia, WV
Henderson, WV
Hendricks, WV
Henlawson, WV
Hepzibah, WV
Hico, WV
Hillsboro, WV
Hilltop, WV
Hinton, WV
Holden, WV
Hometown, WV
Hooverson Heights, WV
Hundred, WV
Huntersville, WV
Huntington, WV
Hurricane, WV
Huttonsville, WV
Iaeger, WV
Idamay, WV
Inwood, WV
Itmann, WV
Jacksonburg, WV
Jane Lew, WV
Jefferson, WV
Junior, WV
Justice, WV
Kenova, WV
Kermit, WV
Keyser, WV
Keystone, WV
Kimball, WV
Kimberly, WV
Kincaid, WV
Kingwood, WV
Kistler, WV
Kopperston, WV
Lashmeet, WV
Lavalette, WV
Leon, WV
Lesage, WV
Lester, WV
Lewisburg, WV
Littleton, WV
Logan, WV
Lost Creek, WV
Lubeck, WV
Lumberport, WV
Mabscott, WV
MacArthur, WV
McConnell, WV
McMechen, WV
Madison, WV
Mallory, WV
Man, WV
Mannington, WV
Marlinton, WV
Marmet, WV
Martinsburg, WV
Mason, WV
Masontown, WV
Matewan, WV
Matheny, WV
Matoaka, WV
Maybeury, WV
Meadow Bridge, WV
Middlebourne, WV
Middleway, WV
Mill Creek, WV
Milton, WV
Minden, WV
Mineralwells, WV
Mitchell Heights, WV
Monaville, WV
Monongah, WV
Montcalm, WV
Montgomery, WV
Montrose, WV
Moorefield, WV
Morgantown, WV
Moundsville, WV
Mount Carbon, WV
Mount Gay-Shamrock, WV
Mount Hope, WV
Mullens, WV
Neibert, WV
Nettie, WV
Newburg, WV
New Cumberland, WV
Newell, WV
New Haven, WV
New Martinsville, WV
New Richmond, WV
Nitro, WV
Northfork, WV
North Hills, WV
Nutter Fort, WV
Oak Hill, WV
Oakvale, WV
Oceana, WV
Omar, WV
Paden City, WV
Page, WV
Pageton, WV
Parcoal, WV
Parkersburg, WV
Parsons, WV
Paw Paw, WV
Pax, WV
Pea Ridge, WV
Pennsboro, WV
Pentress, WV
Petersburg, WV
Peterstown, WV
Philippi, WV
Pickens, WV
Piedmont, WV
Pinch, WV
Pine Grove, WV
Pineville, WV
Piney View, WV
Pleasant Valley, WV
Poca, WV
Point Pleasant, WV
Powellton, WV
Pratt, WV
Prichard, WV
Prince, WV
Princeton, WV
Prosperity, WV
Pullman, WV
Quinwood, WV
Rachel, WV
Racine, WV
Rainelle, WV
Rand, WV
Ranson, WV
Ravenswood, WV
Raysal, WV
Reader, WV
Red Jacket, WV
Reedsville, WV
Reedy, WV
Rhodell, WV
Richwood, WV
Ridgeley, WV
Ripley, WV
Rivesville, WV
Robinette, WV
Roderfield, WV
Romney, WV
Ronceverte, WV
Rossmore, WV
Rowlesburg, WV
Rupert, WV
St. Albans, WV
St. George, WV
St. Marys, WV
Salem, WV
Salt Rock, WV
Sand Fork, WV
Sarah Ann, WV
Scarbro, WV
Shady Spring, WV
Shannondale, WV
Shenandoah Junction, WV
Shepherdstown, WV
Shinnston, WV
Shrewsbury, WV
Sissonville, WV
Sistersville, WV
Smithers, WV
Smithfield, WV
Sophia, WV
South Charleston, WV
Spelter, WV
Spencer, WV
Springfield, WV
Stanaford, WV
Star City, WV
Stollings, WV
Stonewood, WV
Summersville, WV
Sutton, WV
Switzer, WV
Sylvester, WV
Teays Valley, WV
Terra Alta, WV
Thomas, WV
Thurmond, WV
Tioga, WV
Tornado, WV
Triadelphia, WV
Tunnelton, WV
Twilight, WV
Union, WV
Valley Bend, WV
Valley Grove, WV
Valley Head, WV
Van, WV
Verdunville, WV
Vienna, WV
Vivian, WV
Wallace, WV
War, WV
Wardensville, WV
Washington, WV
Waverly, WV
Wayne, WV
Weirton, WV
Welch, WV
Wellsburg, WV
West Hamlin, WV
West Liberty, WV
West Logan, WV
West Milford, WV
Weston, WV
Westover, WV
West Union, WV
Wheeling, WV
White Hall, WV
White Sulphur Springs, WV
Whitesville, WV
Whitmer, WV
Wiley Ford, WV
Williamson, WV
Williamstown, WV
Windsor Heights, WV
Winfield, WV
Wolf Summit, WV
Womelsdorf, WV
Worthington, WV
Abbotsford, WI
Abrams, WI
Adams, WI
Adell, WI
Albany, WI
Algoma, WI
Allenton, WI
Allouez, WI
Alma, WI
Alma Center, WI
Almena, WI
Almond, WI
Altoona, WI
Amberg, WI
Amery, WI
Amherst, WI
Amherst Junction, WI
Angelica, WI
Aniwa, WI
Antigo, WI
Appleton, WI
Arcadia, WI
Arena, WI
Argonne, WI
Argyle, WI
Arkansaw, WI
Arkdale, WI
Arlington, WI
Arpin, WI
cities_df.head()
cities_df.to_csv("../Output/cities.csv", index=False, header=True)_____no_output_____
</code>
| {
"repository": "rabest265/GunViolence",
"path": "Code/demographics_Lat_Long.ipynb",
"matched_keywords": [
"STAR",
"Salmon"
],
"stars": null,
"size": 51222,
"hexsha": "d0069e2a36204df8606dc23f8e75ef7c3b8b2179",
"max_line_length": 116,
"avg_line_length": 26.0406710727,
"alphanum_fraction": 0.4398305416
} |
# Notebook from debugevent90901/courseArchive
Path: ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
# Lab 4: EM Algorithm and Single-Cell RNA-seq Data_____no_output_____### Name: Your Name Here (Your netid here)_____no_output_____### Due April 2, 2021 11:59 PM_____no_output_____#### Preamble (Don't change this)_____no_output_____## Important Instructions -
1. Please implement all the *graded functions* in main.py file. Do not change function names in main.py.
2. Please read the description of every graded function very carefully. The description clearly states what is the expectation of each graded function.
3. After some graded functions, there is a cell which you can run and see if the expected output matches the output you are getting.
4. The expected output provided is just a way for you to assess the correctness of your code. The code will be tested on several other cases as well._____no_output_____
<code>
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns_____no_output_____%run main.py_____no_output_____module = Lab4()_____no_output_____
</code>
## Part 1 : Expectation-Maximization (EM) algorithm for transcript quantification_____no_output_____## Introduction
The EM algorithm is a very helpful tool to compute maximum likelihood estimates of parameters in models that have some latent (hidden) variables.
In the case of the transcript quantification problem, the model parameters we want to estimate are the transcript relative abundances $\rho_1,...,\rho_K$.
The latent variables are the read-to-transcript indicator variables $Z_{ik}$, which indicate whether the $i$th read comes from the $k$th transcript (in which case $Z_{ik}=1$).
In this part of the lab, you will be given the read alignment data.
For each read and transcript pair, it tells you whether the read can be mapped (i.e., aligned) to that transcript.
Using the EM algorithm, you will estimate the relative abundances of the transcripts.
_____no_output_____### Reading read transcript data - We have 30000 reads and 30 transcripts_____no_output_____
<code>
n_reads=30000
n_transcripts=30
read_mapping=[]
with open("read_mapping_data.txt",'r') as file :
lines_reads=file.readlines()
for line in lines_reads :
read_mapping.append([int(x) for x in line.split(",")])_____no_output_____read_mapping[:10]_____no_output_____
</code>
Rather than giving you a giant binary matrix, we encoded the read mapping data in a more concise way. read_mapping is a list of lists. The $i$th list contains the indices of the transcripts that the $i$th read maps to._____no_output_____### Reading true abundances and transcript lengths_____no_output_____
<code>
with open("transcript_true_abundances.txt",'r') as file :
lines_gt=file.readlines()
ground_truth=[float(x) for x in lines_gt[0].split(",")]
with open("transcript_lengths.txt",'r') as file :
lines_gt=file.readlines()
tr_lengths=[float(x) for x in lines_gt[0].split(",")]_____no_output_____ground_truth[:5]_____no_output_____tr_lengths[:5]_____no_output_____
</code>
## Graded Function 1 : expectation_maximization (10 marks)
Purpose : To implement the EM algorithm to obtain abundance estimates for each transcript.
E-step : In this step, we calculate the fraction of each read that is assigned to each transcript (i.e., the estimate of $Z_{ik}$). For read $i$ and transcript $k$, this is calculated by dividing the current abundance estimate of transcript $k$ by the sum of abundance estimates of all transcripts that read $i$ maps to.
M-step : In this step, we update the abundance estimate of each transcript based on the fraction of all reads that is currently assigned to the transcript. First we compute the average fraction of all reads assigned to the transcript. Then, (if transcripts are of different lengths) we divide the result by the transcript length.
Finally, we normalize all abundance estimates so that they add up to 1.
Inputs - read_mapping (a list of lists where each sublist contains the transcripts to which a particular read maps; the length of this list is equal to the number of reads, i.e. 30000); tr_lengths (a list containing the length of the 30 transcripts, in order); n_iterations (the number of EM iterations to be performed)
Output - a list of lists where each sublist contains the abundance estimates for a transcript across all iterations. The length of each sublist should be equal to the number of iterations plus one (for the initialization) and the total number of sublists should be equal to the number of transcripts._____no_output_____
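For concreteness, below is a minimal illustrative sketch of one way to implement the steps described above. It is not the graded `main.py` implementation, the function name is made up, and the uniform initialization is an assumption.
<code>
import numpy as np

def expectation_maximization_sketch(read_mapping, tr_lengths, n_iterations):
    # Illustrative only; assumes a uniform initialization of the abundances.
    K = len(tr_lengths)
    rho = np.ones(K) / K                        # current abundance estimates
    history = [[rho[k]] for k in range(K)]      # one trajectory per transcript
    for _ in range(n_iterations):
        counts = np.zeros(K)
        for transcripts in read_mapping:        # E-step: split each read across
            weights = rho[transcripts]          # the transcripts it maps to
            counts[transcripts] += weights / weights.sum()
        rho = counts / len(read_mapping)        # M-step: average read fraction,
        rho = rho / np.array(tr_lengths)        # adjusted for transcript length,
        rho = rho / rho.sum()                   # and normalized to sum to 1
        for k in range(K):
            history[k].append(rho[k])
    return history
</code>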
<code>
history=module.expectation_maximization(read_mapping,tr_lengths,20)
print(len(history))
print(len(history[0]))
print(history[0][-5:])
print(history[1][-5:])
print(history[2][-5:])30
21
[0.033769639494636614, 0.03381298624783303, 0.03384568373972949, 0.0338703482393148, 0.03388895326082054]
[0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502]
[0.0660581789629968, 0.06606927656035864, 0.0660765012689558, 0.06608120466668756, 0.0660842666518177]
</code>
## Expected Output -
30
21
[0.033769639494636614, 0.03381298624783303, 0.03384568373972948, 0.0338703482393148, 0.03388895326082054]
[0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502]
[0.0660581789629968, 0.06606927656035864, 0.06607650126895578, 0.06608120466668756, 0.0660842666518177]
_____no_output_____You can use the following function to visualize how the estimated relative abundances are converging with the number of iterations of the algorithm._____no_output_____
<code>
def visualize_em(history,n_iterations) :
#start code here
fig, ax = plt.subplots(figsize=(8,6))
for j in range(n_transcripts):
ax.plot([i for i in range(n_iterations+1)],[history[j][i] - ground_truth[j] for i in range(n_iterations+1)],marker='o')
#end code here_____no_output_____visualize_em(history,20)_____no_output_____
</code>
## Part 2 : Exploring Single-Cell RNA-seq data_____no_output_____In a study published in 2015, Zeisel et al. used single-cell RNA-seq data to explore the cell diversity in the mouse brain.
We will explore the data used for their study.
You can read more about it [here](https://science.sciencemag.org/content/347/6226/1138)._____no_output_____
<code>
#reading single-cell RNA-seq data
lines_genes=[]
with open("Zeisel_expr.txt",'r') as file :
lines_genes=file.readlines()_____no_output_____lines_genes[0][:300]_____no_output_____
</code>
Each line in the file Zeisel_expr.txt corresponds to one gene.
The columns correspond to different cells (notice that this is the opposite of how we looked at this matrix in class).
The entries of this matrix correspond to the number of reads mapping to a given gene in the corresponding cell._____no_output_____
<code>
# reading true labels for each cell
with open("Zeisel_labels.txt",'r') as file :
true_labels = file.read().splitlines()_____no_output_____
</code>
The study also provides us with true labels for each of the cells.
For each of the cells, the vector true_labels contains the name of the cell type.
There are nine different cell types in this dataset._____no_output_____
<code>
set(true_labels)_____no_output_____
</code>
## Graded Function 2 : prepare_data (10 marks) :
Purpose - To create a dataframe where each row corresponds to a specific cell and each column corresponds to the expression levels of a particular gene across all cells.
You should name the columns "Gene_0", "Gene_1", and so on (matching the expected output below).
We will iterate through all the lines in lines_genes list created above, add 1 to each value and take log.
Each line will correspond to 1 column in the dataframe
Output - gene expression dataframe
### Note - All the values in the output dataframe should be rounded off to 5 digits after the decimal_____no_output_____
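A minimal sketch of this transformation is shown below; it is illustrative only (the graded version belongs in main.py), the function name is made up, and it assumes the counts on each line are whitespace-separated.
<code>
import numpy as np
import pandas as pd

def prepare_data_sketch(lines_genes):
    # Illustrative only; assumes whitespace-separated counts on each line.
    columns = {}
    for i, line in enumerate(lines_genes):
        counts = np.array([float(x) for x in line.split()])
        columns["Gene_" + str(i)] = np.round(np.log(counts + 1), 5)
    return pd.DataFrame(columns)
</code>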
<code>
data_df=module.prepare_data(lines_genes)
print(data_df.shape)
print(data_df.iloc[0:3,:5])(3005, 19972)
Gene_0 Gene_1 Gene_2 Gene_3 Gene_4
0 0.0 1.38629 1.38629 0.0 0.69315
1 0.0 0.69315 0.69315 0.0 0.69315
2 0.0 0.00000 1.94591 0.0 0.69315
print(data_df.columns)Index(['Gene_0', 'Gene_1', 'Gene_2', 'Gene_3', 'Gene_4', 'Gene_5', 'Gene_6',
'Gene_7', 'Gene_8', 'Gene_9',
...
'Gene_19962', 'Gene_19963', 'Gene_19964', 'Gene_19965', 'Gene_19966',
'Gene_19967', 'Gene_19968', 'Gene_19969', 'Gene_19970', 'Gene_19971'],
dtype='object', length=19972)
</code>
## Expected Output :
``(3005, 19972)``
`` Gene_0 Gene_1 Gene_2 Gene_3 Gene_4``
``0 0.0 1.38629 1.38629 0.0 0.69315``
``1 0.0 0.69315 0.69315 0.0 0.69315``
``2 0.0 0.00000 1.94591 0.0 0.69315``_____no_output_____## Graded Function 3 : identify_less_expressive_genes (10 marks)
Purpose : To identify genes (columns) that are expressed in less than 25 cells. We will create a list of all gene columns that have values greater than 0 for less than 25 cells.
Input - gene expression dataframe
Output - list of column names which are expressed in less than 25 cells_____no_output_____
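One possible sketch, for illustration only (not the graded main.py code; the function name is made up):
<code>
def identify_less_expressive_genes_sketch(data_df):
    # Columns (genes) with a nonzero value in fewer than 25 cells.
    cells_expressed = (data_df > 0).sum(axis=0)
    return data_df.columns[cells_expressed < 25]
</code>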
<code>
drop_columns = module.identify_less_expressive_genes(data_df)
print(len(drop_columns))
print(drop_columns[:10])5120
Index(['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152',
'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173'],
dtype='object')
</code>
## Expected Output :
``5120``
``['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152', 'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173']``_____no_output_____### Filtering less expressive genes
We will now create a new dataframe in which genes which are expressed in less than 25 cells will not be present_____no_output_____
<code>
df_new = data_df.drop(drop_columns, axis=1)_____no_output_____df_new.head()_____no_output_____
</code>
## Graded Function 4 : perform_pca (10 marks)
Purpose - Perform Principal Component Analysis on the new dataframe and take the top 50 principal components
Input - df_new
Output - numpy array containing the top 50 principal components of the data.
### Note - All the values in the output should be rounded off to 5 digits after the decimal
### Note - Please use random_state=365 for the PCA object you will create_____no_output_____
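A minimal sketch of this step, for illustration only (the graded version belongs in main.py; the function name is made up):
<code>
import numpy as np
from sklearn.decomposition import PCA

def perform_pca_sketch(df_new):
    pca = PCA(n_components=50, random_state=365)
    return np.round(pca.fit_transform(df_new), 5)
</code>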
<code>
pca_data=module.perform_pca(df_new)
print(pca_data.shape)
print(type(pca_data))
print(pca_data[0:3,:5])(3005, 50)
<class 'numpy.ndarray'>
[[26.97148 -2.7244 0.62163 25.90148 -6.24736]
[26.49135 -1.58774 -4.79315 24.01094 -7.25618]
[47.82664 5.06799 2.15177 30.24367 -3.38878]]
</code>
## Expected Output :
``(3005, 50)``
``<class 'numpy.ndarray'>``
``[[26.97148 -2.7244 0.62163 25.90148 -6.24736]``
`` [26.49135 -1.58774 -4.79315 24.01094 -7.25618]``
`` [47.82664 5.06799 2.15177 30.24367 -3.38878]]``_____no_output_____## (Non-graded) Function 5 : perform_tsne
Purpose - Perform t-SNE on the pca_data and obtain 2 t-SNE components
We will use the TSNE class of the sklearn.manifold package. Use random_state=1000 and perplexity=50
Documentation can be found here - https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
Input - pca_data
Output - numpy array containing the top 2 tsne components of the data.
**Note: This function will not be graded because of the random nature of t-SNE.**_____no_output_____
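A minimal sketch following the description above (illustrative only; the function name is made up and exact values vary across sklearn versions):
<code>
from sklearn.manifold import TSNE

def perform_tsne_sketch(pca_data):
    tsne = TSNE(n_components=2, perplexity=50, random_state=1000)
    return tsne.fit_transform(pca_data)
</code>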
<code>
tsne_data50 = module.perform_tsne(pca_data)
print(tsne_data50.shape)
print(tsne_data50[:3,:])(3005, 2)
[[ 19.031317 -45.3434 ]
[ 19.188553 -44.945473]
[ 17.369982 -47.997364]]
</code>
## Expected Output :
(These numbers can deviate a bit depending on your sklearn)
``(3005, 2)``
``[[ 15.069608 -47.535984]``
`` [ 15.251476 -47.172073]``
`` [ 13.3932 -49.909657]]``_____no_output_____
<code>
fig, ax = plt.subplots(figsize=(12,8))
sns.scatterplot(tsne_data50[:,0], tsne_data50[:,1], hue=true_labels)
plt.show()/usr/local/lib/python3.9/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
</code>
Notice that the different cell types form clusters (which can be easily visualized on the t-SNE space).
Zeisel et al. performed clustering on this data in order to identify and label the different cell types.
You can try using clustering methods (such as k-means and GMM) to cluster the single-cell RNA-seq data of Zeisel et al. and see if your results agree with theirs! A small starting point is sketched below._____no_output_____
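For example, a minimal k-means sketch (illustrative only; the cluster count of 9 and the random_state are assumptions) that compares its labels against the published cell types:
<code>
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Cluster the 50-dimensional PCA representation into 9 groups (one per cell type).
kmeans = KMeans(n_clusters=9, random_state=0).fit(pca_data)
print(adjusted_rand_score(true_labels, kmeans.labels_))  # agreement with true labels
</code>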
| {
"repository": "debugevent90901/courseArchive",
"path": "ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb",
"matched_keywords": [
"RNA-seq",
"single-cell"
],
"stars": null,
"size": 1023042,
"hexsha": "d00741055dc800ea60b86da8dd05cb6e0b604bae",
"max_line_length": 602447,
"avg_line_length": 1344.3390275953,
"alphanum_fraction": 0.7113657113
} |
# Notebook from justinshaffer/Extraction_kit_benchmarking
Path: code/Taxon profile analysis.ipynb
# Set-up notebook environment
## NOTE: Use a QIIME2 kernel_____no_output_____
<code>
import numpy as np
import pandas as pd
import seaborn as sns
import scipy
from scipy import stats
import matplotlib.pyplot as plt
import re
from pandas import *
import matplotlib.pyplot as plt
%matplotlib inline
from qiime2.plugins import feature_table
from qiime2 import Artifact
from qiime2 import Metadata
import biom
from biom.table import Table
from qiime2.plugins import diversity
from scipy.stats import ttest_ind
from scipy.stats.stats import pearsonr
%config InlineBackend.figure_formats = ['svg']
from qiime2.plugins.feature_table.methods import relative_frequency
import biom
import qiime2 as q2
import os
import math
_____no_output_____
</code>
# Import sample metadata_____no_output_____
<code>
meta = q2.Metadata.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/sample_metadata/12201_metadata.txt').to_dataframe()
_____no_output_____
</code>
Separate round 1 and round 2 and exclude round 1 Zymo, Homebrew, and MagMAX Beta_____no_output_____
<code>
meta_r1 = meta[meta['round'] == 1]
meta_clean_r1_1 = meta_r1[meta_r1['extraction_kit'] != 'Zymo MagBead']
meta_clean_r1_2 = meta_clean_r1_1[meta_clean_r1_1['extraction_kit'] != 'Homebrew']
meta_clean_r1 = meta_clean_r1_2[meta_clean_r1_2['extraction_kit'] != 'MagMax Beta']
meta_clean_r2 = meta[meta['round'] == 2]
_____no_output_____
</code>
Remove PowerSoil samples from each round - these samples will be used as the baseline _____no_output_____
<code>
meta_clean_r1_noPS = meta_clean_r1[meta_clean_r1['extraction_kit'] != 'PowerSoil']
meta_clean_r2_noPS = meta_clean_r2[meta_clean_r2['extraction_kit'] != 'PowerSoil']
_____no_output_____
</code>
Create tables including only round 1 or round 2 PowerSoil samples_____no_output_____
<code>
meta_clean_r1_onlyPS = meta_clean_r1[meta_clean_r1['extraction_kit'] == 'PowerSoil']
meta_clean_r2_onlyPS = meta_clean_r2[meta_clean_r2['extraction_kit'] == 'PowerSoil']
_____no_output_____
</code>
Merge PowerSoil samples from round 2 with other samples from round 1, and vice versa - this will allow us to get the correlations between the two rounds of PowerSoil_____no_output_____
<code>
meta_clean_r1_with_r2_PS = pd.concat([meta_clean_r1_noPS, meta_clean_r2_onlyPS])
meta_clean_r2_with_r1_PS = pd.concat([meta_clean_r2_noPS, meta_clean_r1_onlyPS])
_____no_output_____
</code>
## Collapse feature-table to the desired level (e.g., genus)_____no_output_____16S_____no_output_____
<code>
qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/06_taxonomy/dna_all_16S_deblur_seqs_taxonomy_silva138.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 846 samples and 1660 features
_____no_output_____
</code>
ITS_____no_output_____
<code>
qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/06_taxonomy/dna_all_ITS_deblur_seqs_taxonomy_unite8.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 978 samples and 791 features
_____no_output_____
</code>
Shotgun_____no_output_____
<code>
qiime taxa collapse \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock.qza \
--i-taxonomy /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/wol_taxonomy.qza \
--p-level 6 \
--o-collapsed-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza
qiime feature-table summarize \
--i-table /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza \
--o-visualization /Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qzv
# There are 1044 samples and 2060 features
_____no_output_____
</code>
# Import feature-tables_____no_output_____
<code>
dna_bothPS_16S_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/16S/10_filtered_data/dna_bothPS_16S_deblur_biom_lod_noChl_noMit_sepp_gg_noNTCs_noMock_taxa_collapse_genus.qza')
dna_bothPS_ITS_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/ITS/08_filtered_data/dna_bothPS_ITS_deblur_biom_lod_noNTCs_noMock_taxa_collapse_genus.qza')
dna_bothPS_shotgun_genus_qza = q2.Artifact.load('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/data/shotgun/03_filtered_data/dna_bothPS_shotgun_woltka_wol_biom_noNTCs_noMock_taxa_collapse_genus.qza')
_____no_output_____
</code>
# Convert QZA to a Pandas DataFrame_____no_output_____
<code>
dna_bothPS_16S_genus_df = dna_bothPS_16S_genus_qza.view(pd.DataFrame)
dna_bothPS_ITS_genus_df = dna_bothPS_ITS_genus_qza.view(pd.DataFrame)
dna_bothPS_shotgun_genus_df = dna_bothPS_shotgun_genus_qza.view(pd.DataFrame)
_____no_output_____
</code>
# Melt dataframes_____no_output_____
<code>
dna_bothPS_16S_genus_df_melt = dna_bothPS_16S_genus_df.unstack()
dna_bothPS_ITS_genus_df_melt = dna_bothPS_ITS_genus_df.unstack()
dna_bothPS_shotgun_genus_df_melt = dna_bothPS_shotgun_genus_df.unstack()
dna_bothPS_16S_genus = pd.DataFrame(dna_bothPS_16S_genus_df_melt)
dna_bothPS_ITS_genus = pd.DataFrame(dna_bothPS_ITS_genus_df_melt)
dna_bothPS_shotgun_genus = pd.DataFrame(dna_bothPS_shotgun_genus_df_melt)
_____no_output_____dna_bothPS_16S_genus.reset_index(inplace=True)
dna_bothPS_16S_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
dna_bothPS_ITS_genus.reset_index(inplace=True)
dna_bothPS_ITS_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
dna_bothPS_shotgun_genus.reset_index(inplace=True)
dna_bothPS_shotgun_genus.rename(columns={'level_0':'taxa','level_1':'sample',0:'counts'}, inplace=True)
_____no_output_____
</code>
# Wrangle data into long form for each kit_____no_output_____Wrangle metadata_____no_output_____
<code>
# Create empty list of extraction kit IDs
ext_kit_levels = []
# Create empty list of metadata subsets based on levels of variable of interest
ext_kit = []
# Create empty list of baseline samples for each subset
bl = []
# Populate lists with round 1 data
for ext_kit_level, ext_kit_level_df in meta_clean_r1_with_r2_PS.groupby('extraction_kit_round'):
ext_kit.append(ext_kit_level_df)
powersoil_r1_bl = meta_clean_r1_onlyPS[meta_clean_r1_onlyPS.extraction_kit_round == 'PowerSoil r1']
bl.append(powersoil_r1_bl)
ext_kit_levels.append(ext_kit_level)
print('Gathered data for',ext_kit_level)
# Populate lists with round 2 data
for ext_kit_level, ext_kit_level_df in meta_clean_r2_with_r1_PS.groupby('extraction_kit_round'):
ext_kit.append(ext_kit_level_df)
powersoil_r2_bl = meta_clean_r2_onlyPS[meta_clean_r2_onlyPS['extraction_kit_round'] == 'PowerSoil r2']
bl.append(powersoil_r2_bl)
ext_kit_levels.append(ext_kit_level)
print('Gathered data for',ext_kit_level)
# Create empty list for concatenated subset-baseline datasets
subsets_w_bl = {}
# Populate list with subset-baseline data
for ext_kit_level, ext_kit_df, ext_kit_bl in zip(ext_kit_levels, ext_kit, bl):
new_df = pd.concat([ext_kit_bl,ext_kit_df])
subsets_w_bl[ext_kit_level] = new_df
print('Merged data for',ext_kit_level)
Gathered data for Norgen
Gathered data for PowerSoil Pro
Gathered data for PowerSoil r2
Gathered data for MagMAX Microbiome
Gathered data for NucleoMag Food
Gathered data for PowerSoil r1
Gathered data for Zymo MagBead
Merged data for Norgen
Merged data for PowerSoil Pro
Merged data for PowerSoil r2
Merged data for MagMAX Microbiome
Merged data for NucleoMag Food
Merged data for PowerSoil r1
Merged data for Zymo MagBead
</code>
16S_____no_output_____
<code>
list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
# merge the metadata subset (plus its PowerSoil baseline) with the long-form genus counts
meta_16S_genera = pd.merge(value, dna_bothPS_16S_genus, left_index=True, right_on='sample')
#create new column
meta_16S_genera['taxa_subject'] = meta_16S_genera['taxa'] + meta_16S_genera['host_subject_id']
# drop duplicate taxon/subject entries, then pivot to one column per extraction kit round
meta_16S_genera_clean = meta_16S_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_16S_genera_pivot = meta_16S_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_16S_genera_pivot_clean = meta_16S_genera_pivot.dropna()
# Export dataframe to file
meta_16S_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_16S_genera_%s.txt'%string,
sep = '\t',
index = False)
_____no_output_____
</code>
ITS_____no_output_____
<code>
list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
# merge the metadata subset (plus its PowerSoil baseline) with the long-form genus counts
meta_ITS_genera = pd.merge(value, dna_bothPS_ITS_genus, left_index=True, right_on='sample')
#create new column
meta_ITS_genera['taxa_subject'] = meta_ITS_genera['taxa'] + meta_ITS_genera['host_subject_id']
# drop duplicate taxon/subject entries, then pivot to one column per extraction kit round
meta_ITS_genera_clean = meta_ITS_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_ITS_genera_pivot = meta_ITS_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_ITS_genera_pivot_clean = meta_ITS_genera_pivot.dropna()
# Export dataframe to file
meta_ITS_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_ITS_genera_%s.txt'%string,
sep = '\t',
index = False)
_____no_output_____
</code>
Shotgun_____no_output_____
<code>
list_of_lists = []
for key, value in subsets_w_bl.items():
string = ''.join(key)
# merge the metadata subset (plus its PowerSoil baseline) with the long-form genus counts
meta_shotgun_genera = pd.merge(value, dna_bothPS_shotgun_genus, left_index=True, right_on='sample')
#create new column
meta_shotgun_genera['taxa_subject'] = meta_shotgun_genera['taxa'] + meta_shotgun_genera['host_subject_id']
# drop duplicate taxon/subject entries, then pivot to one column per extraction kit round
meta_shotgun_genera_clean = meta_shotgun_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_shotgun_genera_pivot = meta_shotgun_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_shotgun_genera_pivot_clean = meta_shotgun_genera_pivot.dropna()
# Export dataframe to file
meta_shotgun_genera_pivot_clean.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlation_shotgun_genera_%s.txt'%string,
sep = '\t',
index = False)
_____no_output_____
</code>
# Code below is not used
## NOTE: The first cell was originally appended to the cell above_____no_output_____
<code>
# check pearson correlation
x = meta_16S_genera_pivot_clean.iloc[:,1]
y = meta_16S_genera_pivot_clean[key]
corr = stats.pearsonr(x, y)
int1, int2 = corr
corr_rounded = round(int1, 2)
corr_str = str(corr_rounded)
x_key = key[0]
y_key = key[1]
list1 = []
list1.append(corr_rounded)
list1.append(key)
list_of_lists.append(list1)
_____no_output_____list_of_lists_____no_output_____df = pd.DataFrame(list_of_lists, columns = ['Correlation', 'Extraction kit'])
_____no_output_____df.to_csv('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/table_correlations_16S_genera.txt',
sep = '\t',
index = False)
_____no_output_____splot = sns.catplot(y="Correlation",
x="Extraction kit",
hue= "Extraction kit",
kind='bar',
data=df,
dodge = False)
splot.set(ylim=(0, 1))
plt.xticks(rotation=45,
horizontalalignment='right')
#new_labels = ['−20C','−20C after 1 week', '4C','Ambient','Freeze-thaw','Heat']
#for t, l in zip(splot._legend.texts, new_labels):
# t.set_text(l)
splot.savefig('correlation_16S_genera.png')
splot.savefig('correlation_16S_genera.svg', format='svg', dpi=1200)
_____no_output_____
</code>
### Individual correlation plots _____no_output_____
<code>
for key, value in subsets_w_bl.items():
string = ''.join(key)
# merge the metadata subset (plus its PowerSoil baseline) with the long-form genus counts
meta_16S_genera = pd.merge(value, dna_bothPS_16S_genus, left_index=True, right_on='sample')
#create new column
meta_16S_genera['taxa_subject'] = meta_16S_genera['taxa'] + meta_16S_genera['host_subject_id']
# drop duplicate taxon/subject entries, then pivot to one column per extraction kit round
meta_16S_genera_clean = meta_16S_genera.drop_duplicates(subset = ['taxa_subject', 'extraction_kit_round'], keep = 'first')
meta_16S_genera_pivot = meta_16S_genera_clean.pivot(index='taxa_subject', columns='extraction_kit_round', values='counts')
meta_16S_genera_pivot_clean = meta_16S_genera_pivot.dropna()
# check pearson correlation
x = meta_16S_genera_pivot_clean.iloc[:,1]
y = meta_16S_genera_pivot_clean[key]
corr = stats.pearsonr(x, y)
int1, int2 = corr
corr_rounded = round(int1, 2)
corr_str = str(corr_rounded)
#make correlation plots
meta_16S_genera_pivot_clean['x1'] = meta_16S_genera_pivot_clean.iloc[:,1]
meta_16S_genera_pivot_clean['y1'] = meta_16S_genera_pivot_clean.iloc[:,0]
ax=sns.lmplot(x='x1',
y='y1',
data=meta_16S_genera_pivot_clean,
height=3.8)
ax.set(yscale='log')
ax.set(xscale='log')
ax.set(xlabel='PowerSoil', ylabel=key)
#plt.xlim(0.00001, 10000000)
#plt.ylim(0.00001, 10000000)
plt.title(string + ' (%s)' %corr_str)
ax.savefig('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/figure_scatter_correlation_16S_genera_%s.png'%string)
ax.savefig('/Users/Justin/Mycelium/UCSD/00_Knight_Lab/03_Extraction_test_12201/round_02/results/feature_abundance_correlation_images/figure_scatter_correlation_16S_genera_%s.svg'%string, format='svg',dpi=1200)
_____no_output_____
</code>
| {
"repository": "justinshaffer/Extraction_kit_benchmarking",
"path": "code/Taxon profile analysis.ipynb",
"matched_keywords": [
"microbiome",
"QIIME2"
],
"stars": null,
"size": 23131,
"hexsha": "d0096cb02dc68507e2b0cfb172642550ef65c2c8",
"max_line_length": 246,
"avg_line_length": 34.7834586466,
"alphanum_fraction": 0.6314469759
} |
# Notebook from rpatil524/Community-Notebooks
Path: MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
<a href="https://colab.research.google.com/github/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# How to build an RNA-seq logistic regression classifier with BigQuery ML
Check out other notebooks at our [Community Notebooks Repository](https://github.com/isb-cgc/Community-Notebooks)!
- **Title:** How to build an RNA-seq logistic regression classifier with BigQuery ML
- **Author:** John Phan
- **Created:** 2021-07-19
- **Purpose:** Demonstrate use of BigQuery ML to predict a cancer endpoint using gene expression data.
- **URL:** https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb
- **Note:** This example is based on the work published by [Bosquet et al.](https://molecular-cancer.biomedcentral.com/articles/10.1186/s12943-016-0548-9)
This notebook builds upon the [scikit-learn notebook](https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier.ipynb) and demonstrates how to build a machine learning model using BigQuery ML to predict ovarian cancer treatment outcome. BigQuery is used to create a temporary data table that contains both training and testing data. These datasets are then used to fit and evaluate a Logistic Regression classifier. _____no_output_____# Import Dependencies_____no_output_____
<code>
# GCP libraries
from google.cloud import bigquery
from google.colab import auth_____no_output_____
</code>
## Authenticate
Before using BigQuery, we need to get authorization for access to BigQuery and the Google Cloud. For more information see ['Quick Start Guide to ISB-CGC'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html). Alternative authentication methods can be found [here](https://googleapis.dev/python/google-api-core/latest/auth.html)_____no_output_____
<code>
# if you're using Google Colab, authenticate to gcloud with the following
auth.authenticate_user()
# alternatively, use the gcloud SDK
#!gcloud auth application-default login_____no_output_____
</code>
## Parameters
Customize the following parameters based on your notebook, execution environment, or project. BigQuery ML must create and store classification models, so be sure that you have write access to the locations stored in the "bq_dataset" and "bq_project" variables. _____no_output_____
<code>
# set the google project that will be billed for this notebook's computations
google_project = 'google-project' ## CHANGE ME
# bq project for storing ML model
bq_project = 'bq-project' ## CHANGE ME
# bq dataset for storing ML model
bq_dataset = 'scratch' ## CHANGE ME
# name of temporary table for data
bq_tmp_table = 'tmp_data'
# name of ML model
bq_ml_model = 'tcga_ov_therapy_ml_lr_model'
# in this example, we'll be using the Ovarian cancer TCGA dataset
cancer_type = 'TCGA-OV'
# genes used for prediction model, taken from Bosquet et al.
genes = "'RHOT1','MYO7A','ZBTB10','MATK','ST18','RPS23','GCNT1','DROSHA','NUAK1','CCPG1',\
'PDGFD','KLRAP1','MTAP','RNF13','THBS1','MLX','FAP','TIMP3','PRSS1','SLC7A11',\
'OLFML3','RPS20','MCM5','POLE','STEAP4','LRRC8D','WBP1L','ENTPD5','SYNE1','DPT',\
'COPZ2','TRIO','PDPR'"
# clinical data table
clinical_table = 'isb-cgc-bq.TCGA_versioned.clinical_gdc_2019_06'
# RNA seq data table
rnaseq_table = 'isb-cgc-bq.TCGA.RNAseq_hg38_gdc_current'
_____no_output_____
</code>
## BigQuery Client
Create the BigQuery client._____no_output_____
<code>
# Create a client to access the data within BigQuery
client = bigquery.Client(google_project)_____no_output_____
</code>
## Create a Table with a Subset of the Gene Expression Data
Pull RNA-seq gene expression data from the TCGA RNA-seq BigQuery table, join it with clinical labels, and pivot the table so that it can be used with BigQuery ML. In this example, we will label the samples based on therapy outcome. "Complete Remission/Response" will be labeled as "1" while all other therapy outcomes will be labeled as "0". This prepares the data for binary classification.
Prediction modeling with RNA-seq data typically requires a feature selection step to reduce the dimensionality of the data before training a classifier. However, to simplify this example, we will use a pre-identified set of 33 genes (Bosquet et al. identified 34 genes, but PRSS2 and its aliases are not available in the hg38 RNA-seq data).
Creation of a BQ table with only the data of interest reduces the size of the data passed to BQ ML and can significantly reduce the cost of running BQ ML queries. This query also randomly splits the dataset into "training" and "testing" sets using the "FARM_FINGERPRINT" hash function in BigQuery. "FARM_FINGERPRINT" generates an integer from the input string. More information can be found [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/hash_functions)._____no_output_____
<code>
tmp_table_query = client.query(("""
BEGIN
CREATE OR REPLACE TABLE `{bq_project}.{bq_dataset}.{bq_tmp_table}` AS
SELECT * FROM (
SELECT
labels.case_barcode as sample,
labels.data_partition as data_partition,
labels.response_label AS label,
ge.gene_name AS gene_name,
-- Multiple samples may exist per case, take the max value
MAX(LOG(ge.HTSeq__FPKM_UQ+1)) AS gene_expression
FROM `{rnaseq_table}` AS ge
INNER JOIN (
SELECT
*
FROM (
SELECT
case_barcode,
primary_therapy_outcome_success,
CASE
-- Complete Response --> label as 1
-- All other responses --> label as 0
WHEN primary_therapy_outcome_success = 'Complete Remission/Response' THEN 1
WHEN (primary_therapy_outcome_success IN (
'Partial Remission/Response','Progressive Disease','Stable Disease'
)) THEN 0
END AS response_label,
CASE
WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) < 5 THEN 'training'
WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) >= 5 THEN 'testing'
END AS data_partition
FROM `{clinical_table}`
WHERE
project_short_name = '{cancer_type}'
AND primary_therapy_outcome_success IS NOT NULL
)
) labels
ON labels.case_barcode = ge.case_barcode
WHERE gene_name IN ({genes})
GROUP BY sample, label, data_partition, gene_name
)
PIVOT (
MAX(gene_expression) FOR gene_name IN ({genes})
);
END;
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_tmp_table=bq_tmp_table,
rnaseq_table=rnaseq_table,
clinical_table=clinical_table,
cancer_type=cancer_type,
genes=genes
)).result()
print(tmp_table_query)<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3894001250>
</code>
Let's take a look at this subset table. The data has been pivoted such that each of the 33 genes is available as a column that can be "SELECTED" in a query. In addition, the "label" and "data_partition" columns simplify data handling for classifier training and evaluation. _____no_output_____
<code>
tmp_table_data = client.query(("""
SELECT
* --usually not recommended to use *, but in this case, we want to see all of the 33 genes
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_tmp_table=bq_tmp_table
)).result().to_dataframe()
print(tmp_table_data.info())
tmp_table_data<class 'pandas.core.frame.DataFrame'>
RangeIndex: 264 entries, 0 to 263
Data columns (total 36 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 sample 264 non-null object
1 data_partition 264 non-null object
2 label 264 non-null int64
3 RHOT1 264 non-null float64
4 MYO7A 264 non-null float64
5 ZBTB10 264 non-null float64
6 MATK 264 non-null float64
7 ST18 264 non-null float64
8 RPS23 264 non-null float64
9 GCNT1 264 non-null float64
10 DROSHA 264 non-null float64
11 NUAK1 264 non-null float64
12 CCPG1 264 non-null float64
13 PDGFD 264 non-null float64
14 KLRAP1 264 non-null float64
15 MTAP 264 non-null float64
16 RNF13 264 non-null float64
17 THBS1 264 non-null float64
18 MLX 264 non-null float64
19 FAP 264 non-null float64
20 TIMP3 264 non-null float64
21 PRSS1 264 non-null float64
22 SLC7A11 264 non-null float64
23 OLFML3 264 non-null float64
24 RPS20 264 non-null float64
25 MCM5 264 non-null float64
26 POLE 264 non-null float64
27 STEAP4 264 non-null float64
28 LRRC8D 264 non-null float64
29 WBP1L 264 non-null float64
30 ENTPD5 264 non-null float64
31 SYNE1 264 non-null float64
32 DPT 264 non-null float64
33 COPZ2 264 non-null float64
34 TRIO 264 non-null float64
35 PDPR 264 non-null float64
dtypes: float64(33), int64(1), object(2)
memory usage: 74.4+ KB
None
</code>
# Train the Machine Learning Model
Now we can train a classifier using BigQuery ML with the data stored in the subset table. This model will be stored in the location specified by the "bq_ml_model" variable, and can be reused to predict samples in the future.
We pass three options to the BQ ML model: model_type, auto_class_weights, and input_label_cols. Model_type specifies the classifier model type. In this case, we use "LOGISTIC_REG" to train a logistic regression classifier. Other classifier options are documented [here](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create). Auto_class_weights indicates whether samples should be weighted to balance the classes. For example, if the dataset happens to have more samples labeled as "Complete Response", those samples would be less weighted to ensure that the model is not biased towards predicting those samples. Input_label_cols tells BigQuery that the "label" column should be used to determine each sample's label.
**Warning**: BigQuery ML models can be very time-consuming and expensive to train. Please check your data size before running BigQuery ML commands. Information about BigQuery ML costs can be found [here](https://cloud.google.com/bigquery-ml/pricing)._____no_output_____
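One way to sanity-check the input size before committing to a `CREATE MODEL` run is a dry run of the training `SELECT`. The sketch below is illustrative and not part of the original workflow; it reuses the client and table variables defined above, and note that BigQuery ML training is priced differently from standard queries, so this only estimates how much data would be read._____no_output_____
<code>
# Hypothetical sanity check: dry-run the training SELECT to see how many bytes it would scan.
# This does not reflect the exact BigQuery ML training cost, only the size of the input data.
dry_run_job = client.query(("""
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'training'
""").format(
    bq_project=bq_project,
    bq_dataset=bq_dataset,
    bq_tmp_table=bq_tmp_table
), job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False))

# total_bytes_processed is available immediately for dry-run jobs
print('Training SELECT would process {:.2f} MB'.format(dry_run_job.total_bytes_processed / 1e6))_____no_output_____
</code>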
<code>
# create ML model using BigQuery
ml_model_query = client.query(("""
CREATE OR REPLACE MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`
OPTIONS
(
model_type='LOGISTIC_REG',
auto_class_weights=TRUE,
input_label_cols=['label']
) AS
SELECT * EXCEPT(sample, data_partition) -- when training, we only need the label and feature columns
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'training' -- using training data only
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)).result()
print(ml_model_query)
# now get the model metadata
ml_model = client.get_model('{}.{}.{}'.format(bq_project, bq_dataset, bq_ml_model))
print(ml_model)<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3893663810>
Model(reference=ModelReference(project='isb-project-zero', dataset_id='jhp_scratch', project_id='tcga_ov_therapy_ml_lr_model'))
</code>
# Evaluate the Machine Learning Model
Once the model has been trained and stored, we can evaluate the model's performance using the "testing" dataset from our subset table. Evaluating a BQ ML model is generally less expensive than training.
Use the following query to evaluate the BQ ML model. Note that we're using the "data_partition = 'testing'" clause to ensure that we're only evaluating the model with test samples from the subset table.
BigQuery's ML.EVALUATE function returns several performance metrics: precision, recall, accuracy, f1_score, log_loss, and roc_auc. More details about these performance metrics are available from [Google's ML Crash Course](https://developers.google.com/machine-learning/crash-course/classification/video-lecture). Specific topics can be found at the following URLs: [precision and recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall), [accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy), [ROC and AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc). _____no_output_____
<code>
ml_eval = client.query(("""
SELECT * FROM ML.EVALUATE (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing'
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)).result().to_dataframe()_____no_output_____# Display the table of evaluation results
ml_eval_____no_output_____
</code>
# Predict Outcome for One or More Samples
ML.EVALUATE evaluates a model's performance, but does not produce actual predictions for each sample. In order to do that, we need to use the ML.PREDICT function. The syntax is similar to that of the ML.EVALUATE function and returns "label", "predicted_label", "predicted_label_probs", and all feature columns. Since the feature columns are unchanged from the input dataset, we select only the original label, predicted label, and probabilities for each sample.
Note that the input dataset can include one or more samples, and must include the same set of features as the training dataset. _____no_output_____
<code>
ml_predict = client.query(("""
SELECT
label,
predicted_label,
predicted_label_probs
FROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`,
(
SELECT * EXCEPT(sample, data_partition)
FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`
WHERE data_partition = 'testing' -- Use the testing dataset
)
)
""").format(
bq_project=bq_project,
bq_dataset=bq_dataset,
bq_ml_model=bq_ml_model,
bq_tmp_table=bq_tmp_table
)).result().to_dataframe()_____no_output_____# Display the table of prediction results
ml_predict_____no_output_____# Calculate the accuracy of prediction, which should match the result of ML.EVALUATE
accuracy = 1-sum(abs(ml_predict['label']-ml_predict['predicted_label']))/len(ml_predict)
print('Accuracy: ', accuracy)Accuracy: 0.6230769230769231
</code>
# Next Steps
The BigQuery ML logistic regression model trained in this notebook is comparable to the scikit-learn model developed in our [companion notebook](https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier.ipynb). BigQuery ML simplifies the model building and evaluation process by enabling bioinformaticians to use machine learning within the BigQuery ecosystem. However, it is often necessary to optimize performance by evaluating several types of models (i.e., other than logistic regression), and tuning model parameters. Due to the cost of BigQuery ML for training, such iterative model fine-tuning may be cost prohibitive. In such cases, a combination of scikit-learn (or other libraries such as Keras and TensorFlow) and BigQuery ML may be appropriate. E.g., models can be fine-tuned using scikit-learn and published as a BigQuery ML model for production applications. In future notebooks, we will explore methods for model selection, optimization, and publication with BigQuery ML. _____no_output_____
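As a rough sketch of that hybrid workflow (illustrative only, not part of the published notebook), the temporary table already pulled into `tmp_table_data` above could be used to tune an equivalent scikit-learn logistic regression locally, where repeated fitting is inexpensive, before re-training the chosen configuration with BigQuery ML._____no_output_____
<code>
# Minimal local tuning sketch using the same training/testing partitions.
# Assumes tmp_table_data (the pandas DataFrame loaded earlier) is still in memory.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

feature_cols = [c for c in tmp_table_data.columns
                if c not in ('sample', 'data_partition', 'label')]
train_df = tmp_table_data[tmp_table_data['data_partition'] == 'training']
test_df = tmp_table_data[tmp_table_data['data_partition'] == 'testing']

# class_weight='balanced' mirrors auto_class_weights=TRUE in BigQuery ML
grid = GridSearchCV(LogisticRegression(class_weight='balanced', max_iter=1000),
                    param_grid={'C': [0.01, 0.1, 1.0, 10.0]},
                    cv=5)
grid.fit(train_df[feature_cols], train_df['label'])

print('Best C:', grid.best_params_['C'])
print('Local test accuracy:', accuracy_score(test_df['label'],
                                             grid.predict(test_df[feature_cols])))_____no_output_____
</code>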
| {
"repository": "rpatil524/Community-Notebooks",
"path": "MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb",
"matched_keywords": [
"RNA-seq"
],
"stars": 16,
"size": 53907,
"hexsha": "d00c6cf71bdffc5e1414b4ece1a89cb27eb58159",
"max_line_length": 1068,
"avg_line_length": 42.9195859873,
"alphanum_fraction": 0.4078691079
} |
# Notebook from jouterleys/BiomchBERT
Path: classify_papers.ipynb
Uses a fine-tuned BERT network to classify biomechanics papers from PubMed_____no_output_____
<code>
# Check date
!rm /etc/localtime
!ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
!date
# might need to restart runtime if timezone didn't changeThu Mar 24 06:59:32 PDT 2022
## Install & load libraries
!pip install tensorflow==2.7.0
try:
from official.nlp import optimization
except:
!pip install -q -U tf-models-official==2.4.0
from official.nlp import optimization
try:
from Bio import Entrez
except:
!pip install -q -U biopython
from Bio import Entrez
try:
import tensorflow_text as text
except:
!pip install -q -U tensorflow_text==2.7.3
import tensorflow_text as text
import pandas as pd
import numpy as np
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
import tensorflow as tf # probably have to lock version
import string
import datetime
from bs4 import BeautifulSoup
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import load_model
import tensorflow_hub as hub
from google.colab import drive
import datetime as dt
#Define date range
today = dt.date.today()
yesterday = today - dt.timedelta(days=1)
week_ago = yesterday - dt.timedelta(days=7) # ensure overlap in pubmed search
days_ago_6 = yesterday - dt.timedelta(days=6) # for text output
# Mount Google Drive for model and csv up/download
drive.mount('/content/gdrive')
print(today)Collecting tensorflow==2.7.0
Downloading tensorflow-2.7.0-cp37-cp37m-manylinux2010_x86_64.whl (489.6 MB)
[K |████████████████████████████████| 489.6 MB 24 kB/s
[?25hRequirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.44.0)
Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.1.2)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.1.0)
Requirement already satisfied: flatbuffers<3.0,>=1.12 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (2.0)
Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.15.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.24.0)
Requirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.21.5)
Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.6.3)
Collecting gast<0.5.0,>=0.2.1
Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)
Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.14.0)
Collecting keras<2.8,>=2.7.0rc0
Downloading keras-2.7.0-py2.py3-none-any.whl (1.3 MB)
[K |████████████████████████████████| 1.3 MB 43.1 MB/s
[?25hRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.17.3)
Requirement already satisfied: wheel<1.0,>=0.32.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.37.1)
Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.2.0)
Requirement already satisfied: tensorboard~=2.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (2.8.0)
Requirement already satisfied: libclang>=9.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (13.0.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.10.0.2)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.1.0)
Requirement already satisfied: absl-py>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.0.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.3.0)
Collecting tensorflow-estimator<2.8,~=2.7.0rc0
Downloading tensorflow_estimator-2.7.0-py2.py3-none-any.whl (463 kB)
[K |████████████████████████████████| 463 kB 48.7 MB/s
[?25hRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow==2.7.0) (1.5.2)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (57.4.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.0.1)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.35.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (3.3.6)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (0.6.1)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (2.23.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.8.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (0.4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (4.2.4)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (4.8)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow==2.7.0) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.7.0) (4.11.3)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.7.0) (3.7.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (0.4.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (2021.10.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (1.24.3)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow==2.7.0) (3.2.0)
Installing collected packages: tensorflow-estimator, keras, gast, tensorflow
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.8.0
Uninstalling tensorflow-estimator-2.8.0:
Successfully uninstalled tensorflow-estimator-2.8.0
Attempting uninstall: keras
Found existing installation: keras 2.8.0
Uninstalling keras-2.8.0:
Successfully uninstalled keras-2.8.0
Attempting uninstall: gast
Found existing installation: gast 0.5.3
Uninstalling gast-0.5.3:
Successfully uninstalled gast-0.5.3
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.8.0
Uninstalling tensorflow-2.8.0:
Successfully uninstalled tensorflow-2.8.0
Successfully installed gast-0.4.0 keras-2.7.0 tensorflow-2.7.0 tensorflow-estimator-2.7.0
[K |████████████████████████████████| 1.1 MB 5.3 MB/s
[K |████████████████████████████████| 99 kB 7.8 MB/s
[K |████████████████████████████████| 596 kB 41.1 MB/s
[K |████████████████████████████████| 352 kB 49.1 MB/s
[K |████████████████████████████████| 1.1 MB 37.0 MB/s
[K |████████████████████████████████| 47.8 MB 57 kB/s
[K |████████████████████████████████| 1.2 MB 43.6 MB/s
[K |████████████████████████████████| 43 kB 1.8 MB/s
[K |████████████████████████████████| 237 kB 45.6 MB/s
[?25h Building wheel for py-cpuinfo (setup.py) ... [?25l[?25hdone
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 2.3 MB 4.9 MB/s
[K |████████████████████████████████| 4.9 MB 5.5 MB/s
[?25h[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
Mounted at /content/gdrive
2022-03-24
# Define Search Criteria ----
def search(query):
Entrez.email = '[email protected]'
handle = Entrez.esearch(db='pubmed',
sort='most recent',
retmax='5000',
retmode='xml',
datetype='pdat', # pdat is published date, edat is entrez date.
# reldate=7, # only within n days from now
mindate= min_date,
maxdate= max_date, # for searching date range
term=query)
results = Entrez.read(handle)
return results
# Perform Search and Pull Paper Titles ----
def fetch_details(ids):
Entrez.email = '[email protected]'
handle = Entrez.efetch(db='pubmed',
retmode='xml',
id=ids)
results = Entrez.read(handle)
return results
# Make the stop words for string cleaning ----
def html_strip(text):
text = BeautifulSoup(text, 'lxml').text
text = text.replace('[','').replace(']','')
return text
def clean_str(text, stops):
text = BeautifulSoup(text, 'lxml').text
text = text.split()
return ' '.join([word for word in text if word not in stops])
stop = list(stopwords.words('english'))
stop_c = [string.capwords(word) for word in stop]
for word in stop_c:
stop.append(word)
new_stop = ['The', 'An', 'A', 'Do', 'Is', 'In', 'StringElement',
'NlmCategory', 'Label', 'attributes', 'INTRODUCTION',
'METHODS', 'BACKGROUND', 'RESULTS', 'CONCLUSIONS']
for s in new_stop:
stop.append(s)
# Search terms (can test string with Pubmed Advanced Search) ----
# search_results = search('(Biomech*[Title/Abstract] OR locomot*[Title/Abstract])')
min_date = week_ago.strftime('%m/%d/%Y')
max_date = yesterday.strftime('%m/%d/%Y')
search_results = search('(biomech*[Title/Abstract] OR locomot*[Title/Abstract] NOT opiod*[Title/Abstract] NOT pharm*[Journal] NOT mouse[Title/Abstract] NOT drosophil*[Title/Abstract] NOT mice[Title/Abstract] NOT rats*[Title/Abstract] NOT elegans[Title/Abstract])')
id_list = search_results['IdList']
papers = fetch_details(id_list)
print(len(papers['PubmedArticle']), 'Papers found')
titles, full_titles, keywords, authors, links, journals, abstracts = ([] for i in range(7))
for paper in papers['PubmedArticle']:
# clean and store titles, abstracts, and links
t = clean_str(paper['MedlineCitation']['Article']['ArticleTitle'],
stop).replace('[','').replace(']','').capitalize() # rm brackets that survived beautifulsoup, sentence case
titles.append(t)
full_titles.append(paper['MedlineCitation']['Article']['ArticleTitle'])
pmid = paper['MedlineCitation']['PMID']
links.append('[URL="https://www.ncbi.nlm.nih.gov/pubmed/{0}"]{1}[/URL]'.format(pmid, html_strip(paper['MedlineCitation']['Article']['ArticleTitle'])))
try:
abstracts.append(clean_str(paper['MedlineCitation']['Article']['Abstract']['AbstractText'][0],
stop).replace('[','').replace(']','').capitalize()) # rm brackets that survived beautifulsoup, sentence case
except:
abstracts.append('')
# clean and store authors
auths = []
try:
for auth in paper['MedlineCitation']['Article']['AuthorList']:
try: # see if there is a last name and initials
auth_name = [auth['LastName'], auth['Initials'] + ',']
auth_name = ' '.join(auth_name)
auths.append(auth_name)
except:
if 'LastName' in auth.keys(): # maybe they don't have initials
auths.append(auth['LastName'] + ',')
else: # no last name
auths.append('')
print(paper['MedlineCitation']['Article']['ArticleTitle'],
'has an issue with an author name:')
except:
auths.append('AUTHOR NAMES ERROR')
print(paper['MedlineCitation']['Article']['ArticleTitle'], 'has no author list?')
# compile authors
authors.append(' '.join(auths).replace('[','').replace(']','')) # rm brackets in names
# journal names
journals.append(paper['MedlineCitation']['Article']['Journal']['Title'].replace('[','').replace(']','')) # rm brackets
# store keywords
if paper['MedlineCitation']['KeywordList'] != []:
kwds = []
for kw in paper['MedlineCitation']['KeywordList'][0]:
kwds.append(kw[:])
keywords.append(', '.join(kwds).lower())
else:
keywords.append('')
# Put Titles, Abstracts, Authors, Journal, and Keywords into dataframe
papers_df = pd.DataFrame({'title': titles,
'keywords': keywords,
'abstract': abstracts,
'authors': authors,
'journal': journals,
'links': links,
'raw_title': full_titles,
'mindate': min_date,
'maxdate': max_date})
# remove papers with no title or no authors
for index, row in papers_df.iterrows():
if row['title'] == '' or row['authors'] == 'AUTHOR NAMES ERROR':
papers_df.drop(index, inplace=True)
papers_df.reset_index(drop=True, inplace=True)
# join titles and abstract
papers_df['BERT_input'] = pd.DataFrame(papers_df['title'] + ' ' + papers_df['abstract'])
# Load Fine-Tuned BERT Network ----
model = tf.saved_model.load('/content/gdrive/My Drive/BiomchBERT/Data/BiomchBERT/')
print('Loaded model from disk')
# Load Label Encoder ----
le = LabelEncoder()
le.classes_ = np.load('/content/gdrive/My Drive/BiomchBERT/Data/BERT_label_encoder.npy')
print('Loaded Label Encoder')
84 Papers found
Loaded model from disk
Loaded Label Encoder
# Predict Paper Topic ----
predicted_topic = model(papers_df['BERT_input'], training=False) # will run out of GPU memory (14GB) if predicting more than ~2000 title+abstracts at once_____no_output_____# Determine Publications that BiomchBERT is unsure about ----
topics, pred_val_str = ([] for i in range(2))
for pred_prob in predicted_topic:
pred_val = np.max(pred_prob)
if pred_val > 1.5 * np.sort(pred_prob)[-2]: # Is top confidence score more than 1.5x the second best confidence score?
topics.append(le.inverse_transform([np.argmax(pred_prob)])[0])
top1 = le.inverse_transform([np.argmax(pred_prob)])[0]
top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0]
# pred_val_str.append(pred_val * 100) # just report top category
pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str(
np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2)) # report top 2 categories
else:
topics.append('UNKNOWN')
top1 = le.inverse_transform([np.argmax(pred_prob)])[0]
top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0]
pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str(
np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2))
papers_df['topic'] = topics
papers_df['pred_val'] = pred_val_str
print('BiomchBERT is unsure about {0} papers\n'.format(len(papers_df[papers_df['topic'] == 'UNKNOWN'])))
BiomchBERT is unsure about 6 papers
# Prompt User to decide for BiomchBERT ----
unknown_papers = papers_df[papers_df['topic'] == 'UNKNOWN']
for indx, paper in unknown_papers.iterrows():
print(paper['raw_title'])
print(paper['journal'])
print(paper['pred_val'])
print()
splt_str = paper['pred_val'].split(';')
options = [str for pred_cls in splt_str for str in le.classes_ if (str in pred_cls)]
choice = input('(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? ')
print()
if choice == '1':
papers_df.iloc[indx]['topic'] = str(options[0])
elif choice == '2':
papers_df.iloc[indx]['topic'] = str(options[1])
elif choice == 'o':
# print all categories so you can select
for i in zip(range(len(le.classes_)),le.classes_):
print(i)
new_cat = input('Enter number of new class or type "r" to remove paper: ')
print()
if new_cat == 'r':
papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output
else:
papers_df.iloc[indx]['topic'] = le.classes_[int(new_cat)]
elif choice == 'r':
papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output
print('Removing {0} papers\n'.format(len(papers_df[papers_df['topic'] == '_REMOVE_'])))Contribution of sensory feedback to Soleus muscle activity during voluntary contraction in humans.
Journal of neurophysiology
51.6% NEURAL; 38.0% MUSCLE
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1
Anterior Cable Reconstruction: Prioritizing Rotator Cable and Tendon Cord When Considering Superior Capsular Reconstruction.
Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
47.0% ORTHOPAEDICS/SURGERY; 45.3% TENDON/LIGAMENT
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1
Comparison the Effect of Pain Neuroscience and Pain Biomechanics Education on Neck Pain and Fear of Movement in Patients with Chronic Nonspecific Neck Pain During the COVID-19 Pandemic.
Pain and therapy
45.3% REHABILITATION; 34.9% ERGONOMICS
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1
The role of nanoplastics on the toxicity of the herbicide phenmedipham, using Danio rerio embryos as model organisms.
Environmental pollution (Barking, Essex : 1987)
28.0% COMPARATIVE; 23.5% CELLULAR/SUBCELLULAR
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? r
The Terrific Skink bite force suggests insularity as a likely driver to exceptional resource use.
Scientific reports
52.1% EVOLUTION/ANTHROPOLOGY; 42.1% COMPARATIVE
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? o
(0, 'BONE')
(1, 'BOTANY')
(2, 'CARDIOVASCULAR/CARDIOPULMONARY')
(3, 'CELLULAR/SUBCELLULAR')
(4, 'COMPARATIVE')
(5, 'DENTAL/ORAL/FACIAL')
(6, 'ERGONOMICS')
(7, 'EVOLUTION/ANTHROPOLOGY')
(8, 'GAIT/LOCOMOTION')
(9, 'HAND/FINGER/FOOT/TOE')
(10, 'JOINT/CARTILAGE')
(11, 'METHODS')
(12, 'MODELING')
(13, 'MUSCLE')
(14, 'NEURAL')
(15, 'ORTHOPAEDICS/SPINE')
(16, 'ORTHOPAEDICS/SURGERY')
(17, 'POSTURE/BALANCE')
(18, 'PROSTHETICS/ORTHOTICS')
(19, 'REHABILITATION')
(20, 'ROBOTICS')
(21, 'SPORT/EXERCISE')
(22, 'TENDON/LIGAMENT')
(23, 'TISSUE/BIOMATERIAL')
(24, 'TRAUMA/IMPACT')
(25, 'VETERINARY/AGRICULTURAL')
(26, 'VISUAL/VESTIBULAR')
Enter number of new class or type "r" to remove paper: 4
Overground gait kinematics and muscle activation patterns in the Yucatan mini pig.
Journal of neural engineering
36.0% COMPARATIVE; 28.8% GAIT/LOCOMOTION
(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1
Removing 1 papers
# Double check that none of these papers were included in past literature updates ----
# load prior papers
# papers_df.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False) # run ONLY if there are no prior papers
prior_papers = pd.read_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv')
prior_papers.dropna(subset=['title'], inplace=True)
prior_papers.reset_index(drop=True, inplace=True)
# NEED TO DO: find matching papers between current week and prior papers using Pubmed ID since titles can change from ahead of print to final version.
# match = papers_df['links'].split(']')[0].isin(prior_papers['links'].split(']')[0])
match = papers_df['title'].isin(prior_papers['title']) # boolean
print('Removing {0} papers found in prior literature updates\n'.format(sum(match)))
# filter and check if everything accidentally was removed
filtered_papers_df = papers_df.drop(papers_df[match].index)
if filtered_papers_df.shape[0] < 1:
raise ValueError('might have removed all the papers for some reason. ')
else:
papers_df = filtered_papers_df
papers_df.reset_index(drop=True, inplace=True)
updated_prior_papers = pd.concat([prior_papers, papers_df], axis=0)
updated_prior_papers.reset_index(drop=True, inplace=True)
updated_prior_papers.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False)Removing 18 papers found in prior literature updates
# Create Text File for Biomch-L ----
# Compile papers grouped by topic
txtname = '/content/gdrive/My Drive/BiomchBERT/Updates/' + today.strftime("%Y-%m-%d") + '-litupdate.txt'
txt = open(txtname, 'w', encoding='utf-8')
txt.write('[SIZE=16px][B]LITERATURE UPDATE[/B][/SIZE]\n')
txt.write(days_ago_6.strftime("%b %d, %Y") + ' - '+ yesterday.strftime("%b %d, %Y")+'\n') # a week ago from yesterday.
txt.write(
"""
Literature search terms: biomech* & locomot*
Publications are classified by [URL="https://www.ryan-alcantara.com/projects/p88_BiomchBERT/"]BiomchBERT[/URL], a neural network trained on past Biomch-L Literature Updates. BiomchBERT is managed by [URL="https://jouterleys.github.io"]Jereme Outerleys[/URL], a Doctoral Student at Queen's University. Each publication has a score (out of 100%) reflecting how confident BiomchBERT is that the publication belongs in a particular category (top 2 shown). If something doesn't look right, email jereme.outerleys[at]queensu.ca.
Twitter: [URL="https://www.twitter.com/jouterleys"]@jouterleys[/URL].
"""
)
# Write papers to text file grouped by topic ----
topic_list = np.unique(papers_df.sort_values('topic')['topic'])
for topic in topic_list:
papers_subset = pd.DataFrame(papers_df[papers_df.topic == topic].reset_index(drop=True))
txt.write('\n')
# TOPIC NAME (with some cleaning)
if topic == '_REMOVE_':
continue
elif topic == 'UNKNOWN':
txt.write('[SIZE=16px][B]*Papers BiomchBERT is unsure how to classify*[/B][/SIZE]\n')
elif topic == 'CARDIOVASCULAR/CARDIOPULMONARY':
topic = 'CARDIOVASCULAR/PULMONARY'
txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic)
elif topic == 'CELLULAR/SUBCELLULAR':
topic = 'CELLULAR'
txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic)
elif topic == 'ORTHOPAEDICS/SURGERY':
topic = 'ORTHOPAEDICS (SURGERY)'
txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic)
elif topic == 'ORTHOPAEDICS/SPINE':
topic = 'ORTHOPAEDICS (SPINE)'
txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic)
else:
txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic)
# HYPERLINKED PAPERS, AUTHORS, JOURNAL NAME
for i, paper in enumerate(papers_subset['links']):
txt.write('[B]%s[/B] ' % paper)
txt.write('%s ' % papers_subset['authors'][i])
txt.write('[I]%s[/I]. ' % papers_subset['journal'][i])
# CONFIDENCE SCORE (BERT softmax categorical crossentropy)
try:
txt.write('(%.1f%%) \n\n' % papers_subset['pred_val'][i])
except:
txt.write('(%s)\n\n' % papers_subset['pred_val'][i])
txt.write('[SIZE=16px][B]*PICK OF THE WEEK*[/B][/SIZE]\n')
txt.close()
print('Literature Update Exported for Biomch-L')
print('Location:', txtname)Literature Update Exported for Biomch-L
Location: /content/gdrive/My Drive/BiomchBERT/Updates/2022-03-24-litupdate.txt
_____no_output_____
</code>
| {
"repository": "jouterleys/BiomchBERT",
"path": "classify_papers.ipynb",
"matched_keywords": [
"BioPython",
"evolution",
"neuroscience"
],
"stars": null,
"size": 31222,
"hexsha": "d00d1fd31d99a85620063e299bc079d92cc907c1",
"max_line_length": 31222,
"avg_line_length": 31222,
"alphanum_fraction": 0.6535135481
} |
# Notebook from superkley/udacity-mlnd
Path: p2_sl_finding_donors/p2_sl_finding_donors.ipynb
# Supervised Learning: Finding Donors for *CharityML*
> Udacity Machine Learning Engineer Nanodegree: _Project 2_
>
> Author: _Ke Zhang_
>
> Submission Date: _2017-04-30_ (Revision 3)_____no_output_____## Content
- [Getting Started](#Getting-Started)
- [Exploring the Data](#Exploring-the-Data)
- [Preparing the Data](#Preparing-the-Data)
- [Evaluating Model Performance](#Evaluating-Model-Performance)
- [Improving Results](#Improving-Results)
- [Feature Importance](#Feature-Importance)
- [References](#References)
- [Reproduction Environment](#Reproduction-Environment)_____no_output_____## Getting Started
In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publically available features.
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries._____no_output_____
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database._____no_output_____
<code>
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
import matplotlib.pyplot as plt
import seaborn as sns
# Import supplementary visualization code visuals.py
import visuals as vs
#sklearn makes lots of deprecation warnings...
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Pretty display for notebooks
%matplotlib inline
sns.set(style='white', palette='muted', color_codes=True)
sns.set_context('notebook', font_scale=1.2, rc={'lines.linewidth': 1.2})
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))_____no_output_____
</code>
### Implementation: Data Exploration
A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:
- The total number of records, `'n_records'`
- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.
- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.
- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`.
**Hint:** You may need to look at the table above to understand how the `'income'` entries are formatted. _____no_output_____
<code>
# Total number of records
n_records = data.shape[0]
# Number of records where individual's income is more than $50,000
n_greater_50k = data[data['income'] == '>50K'].shape[0]
# Number of records where individual's income is at most $50,000
n_at_most_50k = data[data['income'] == '<=50K'].shape[0]
# Percentage of individuals whose income is more than $50,000
greater_percent = n_greater_50k / (n_records / 100.0)
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)Total number of records: 45222
Individuals making more than $50,000: 11208
Individuals making at most $50,000: 34014
Percentage of individuals making more than $50,000: 24.78%
</code>
----
## Preparing the Data
Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms._____no_output_____### Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: `'capital-gain'` and `'capital-loss'`.
Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed._____no_output_____
<code>
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)_____no_output_____
</code>
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation, however: the logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed. _____no_output_____
<code>
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)_____no_output_____
</code>
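As a quick, optional sanity check (added here for illustration and not part of the original project code), we can compare the skewness of the two features before and after the transform using pandas' `skew()`; values closer to zero indicate a more symmetric distribution.

<code>
# Illustrative check: skewness of the raw vs. log-transformed features.
# Values closer to 0 indicate a more symmetric distribution.
for feature in skewed:
    print("{}: skew before = {:.2f}, skew after = {:.2f}".format(
        feature, data[feature].skew(), features_raw[feature].skew()))
</code>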
### Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as shown in the example below.
Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this._____no_output_____
<code>
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))_____no_output_____
</code>
### Implementation: Data Preprocessing
From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.
| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
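As a small illustration (an aside added here; `someFeature` is a made-up column, not part of the census data), the table above can be reproduced directly with `pandas.get_dummies()`:

<code>
# Illustration only: one-hot encode the toy 'someFeature' column from the table above.
import pandas as pd
toy = pd.DataFrame({'someFeature': ['B', 'C', 'A']})
print(pd.get_dummies(toy))
# Expected columns: someFeature_A, someFeature_B, someFeature_C with 0/1 indicator values.
</code>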
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'`, to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In the code cell below, you will need to implement the following:
- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_raw'` data.
- Convert the target label `'income_raw'` to numerical entries.
- Set records with "<=50K" to `0` and records with ">50K" to `1`._____no_output_____
<code>
# One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
# Encode the 'income_raw' data to numerical values
income = income_raw.apply(lambda x: 1 if x == '>50K' else 0)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
print encoded103 total features after one-hot encoding.
['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week', 'workclass_ Federal-gov', 'workclass_ Local-gov', 'workclass_ Private', 'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc', 'workclass_ State-gov', 'workclass_ Without-pay', 'education_level_ 10th', 'education_level_ 11th', 'education_level_ 12th', 'education_level_ 1st-4th', 'education_level_ 5th-6th', 'education_level_ 7th-8th', 'education_level_ 9th', 'education_level_ Assoc-acdm', 'education_level_ Assoc-voc', 'education_level_ Bachelors', 'education_level_ Doctorate', 'education_level_ HS-grad', 'education_level_ Masters', 'education_level_ Preschool', 'education_level_ Prof-school', 'education_level_ Some-college', 'marital-status_ Divorced', 'marital-status_ Married-AF-spouse', 'marital-status_ Married-civ-spouse', 'marital-status_ Married-spouse-absent', 'marital-status_ Never-married', 'marital-status_ Separated', 'marital-status_ Widowed', 'occupation_ Adm-clerical', 'occupation_ Armed-Forces', 'occupation_ Craft-repair', 'occupation_ Exec-managerial', 'occupation_ Farming-fishing', 'occupation_ Handlers-cleaners', 'occupation_ Machine-op-inspct', 'occupation_ Other-service', 'occupation_ Priv-house-serv', 'occupation_ Prof-specialty', 'occupation_ Protective-serv', 'occupation_ Sales', 'occupation_ Tech-support', 'occupation_ Transport-moving', 'relationship_ Husband', 'relationship_ Not-in-family', 'relationship_ Other-relative', 'relationship_ Own-child', 'relationship_ Unmarried', 'relationship_ Wife', 'race_ Amer-Indian-Eskimo', 'race_ Asian-Pac-Islander', 'race_ Black', 'race_ Other', 'race_ White', 'sex_ Female', 'sex_ Male', 'native-country_ Cambodia', 'native-country_ Canada', 'native-country_ China', 'native-country_ Columbia', 'native-country_ Cuba', 'native-country_ Dominican-Republic', 'native-country_ Ecuador', 'native-country_ El-Salvador', 'native-country_ England', 'native-country_ France', 'native-country_ Germany', 'native-country_ Greece', 'native-country_ Guatemala', 'native-country_ Haiti', 'native-country_ Holand-Netherlands', 'native-country_ Honduras', 'native-country_ Hong', 'native-country_ Hungary', 'native-country_ India', 'native-country_ Iran', 'native-country_ Ireland', 'native-country_ Italy', 'native-country_ Jamaica', 'native-country_ Japan', 'native-country_ Laos', 'native-country_ Mexico', 'native-country_ Nicaragua', 'native-country_ Outlying-US(Guam-USVI-etc)', 'native-country_ Peru', 'native-country_ Philippines', 'native-country_ Poland', 'native-country_ Portugal', 'native-country_ Puerto-Rico', 'native-country_ Scotland', 'native-country_ South', 'native-country_ Taiwan', 'native-country_ Thailand', 'native-country_ Trinadad&Tobago', 'native-country_ United-States', 'native-country_ Vietnam', 'native-country_ Yugoslavia']
</code>
### Shuffle and Split Data
Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split._____no_output_____
<code>
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])Training set has 36177 samples.
Testing set has 9045 samples.
</code>
----
## Evaluating Model Performance
In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a *naive predictor*._____no_output_____### Metrics and the Naive Predictor
*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using **accuracy** as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use **F-beta score** as a metric that considers both precision and recall:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).
Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect **accuracy**, since we could simply say *"this person does not make more than \$50,000"* and generally be right, without ever looking at the data! Making such a statement would be called **naive**, since we have not considered any information to substantiate the claim. It is always important to consider the *naive prediction* for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: If we predicted all people made less than \$50,000, *CharityML* would identify no one as donors. _____no_output_____### Question 1 - Naive Predictor Performance
*If we chose a model that always predicted an individual made more than \$50,000, what would that model's accuracy and F-score be on this dataset?*
**Note:** You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later._____no_output_____
<code>
# Calculate accuracy
accuracy = 1.0 * n_greater_50k / n_records
# Calculate F-score using the formula above for beta = 0.5
recall = 1.0
fscore = (
(1 + 0.5**2) * accuracy * recall
) / (
0.5**2 * accuracy + recall
)
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)Naive Predictor: [Accuracy score: 0.2478, F-score: 0.2917]
</code>
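As an optional cross-check (an addition for illustration, not part of the original project code), the same numbers can be reproduced with scikit-learn's metric functions by scoring an all-positive prediction vector; the printed values should match the accuracy and F-score computed above.

<code>
# Optional cross-check of the hand-computed naive scores using sklearn's metrics.
from sklearn.metrics import accuracy_score, fbeta_score
naive_predictions = np.ones(income.shape[0], dtype=int)  # predict '>50K' (encoded as 1) for everyone
print("Accuracy: {:.4f}".format(accuracy_score(income, naive_predictions)))
print("F-score: {:.4f}".format(fbeta_score(income, naive_predictions, beta=0.5)))
</code>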
### Supervised Learning Models
**The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression_____no_output_____### Question 2 - Model Application
List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen
- *Describe one real-world application in industry where the model can be applied.* (You may need to do research for this — give references!)
- *What are the strengths of the model; when does it perform well?*
- *What are the weaknesses of the model; when does it perform poorly?*
- *What makes this model a good candidate for the problem, given what you know about the data?*_____no_output_____**Answer: **
The dataset contains 45,222 records in total.
We are therefore looking for supervised classification algorithms that predict a category from labeled data with fewer than 100K samples.
* **Support Vector Machine** (SVM):
* Real-world application: classifying proteins, protein-protein interaction
* References: [Bioinformatics - Semi-Supervised Multi-Task Learning for Predicting Interactions between HIV-1 and Human Proteins](https://static.googleusercontent.com/media/research.google.com/de//pubs/archive/35765.pdf)
* Strengths of the model:
* effective in high dimensional spaces and with nonlinear relationships
* robust to noise (margins are maximized and there are theoretical bounds on overfitting)
* Weaknesses of the model:
* requires selecting a good kernel function and a number of hyperparameters, such as the regularization parameter and the number of iterations
* sensitive to feature scaling
* model parameters are difficult to interpret
* requires significant memory and processing power
* tuning the regularization parameters is required to avoid overfitting
* Reasoning: *Linear SVC* is the estimator suggested by the [Scikit-Learn - Choosing the right estimator](http://scikit-learn.org/stable/tutorial/machine_learning_map/) map when using fewer than 100K samples to solve a classification problem.
* **Logistic Regression**:
* Real-world application: media and advertising campaigns optimization and decision making
* References: [Evaluating Online Ad Campaigns in a Pipeline: Causal Models At Scale](https://static.googleusercontent.com/media/research.google.com/de//pubs/archive/36552.pdf)
* Strengths of the model:
* simple, no user-defined parameters to experiment with unless you regularize, β is intuitive
* fast to train and to predict
* easy to interpret: output can be interpreted as a probability
* pretty robust to noise with low variance and less prone to over-fitting
* lots of ways to regularize the model
* Weaknesses of the model:
* unstable when one predictor could almost explain the response variable
* often less accurate than the newer methods
* Interpreting θ isn't straightforward
* Reasoning: It is similar to *linear SVC*, is widely used, and can be easily implemented.
* **K-Nearest Neighbors (KNeighbors)**:
* Real-world application: image and video content classification
* References: [Clustering billions of images with large scale nearest neighbor search](https://static.googleusercontent.com/media/research.google.com/de//pubs/archive/32616.pdf)
* Strengths of the model:
* simple and powerful
* easy to explain
* no training involved ("lazy")
* naturally handles multiclass classification and regression
* learns nonlinear boundaries
* Weaknesses of the model:
* expensive and slow to predict new instances ("lazy")
* must define a meaningful distance function (preference bias)
* need to decide on a good distance metric
* performs poorly on high-dimensionality datasets (curse of high dimensionality)
* Reasoning: The *KNeighbors* classifier is the next option suggested on the machine learning map when *linear SVC* performs poorly. Since our dataset is low-dimensional, it should generate reasonable results.
_____no_output_____### Implementation - Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.
In the code block below, you will need to implement the following:
- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
- Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
- Make sure that you set the `beta` parameter!_____no_output_____
<code>
# Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# Calculate the training time
results['train_time'] = end - start
# Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
results['pred_time'] = end - start
# Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# Compute accuracy on test set
results['acc_test'] = accuracy_score(y_test, predictions_test)
# Compute F-score on the first 300 training samples
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=.5)
# Compute F-score on the test set
results['f_test'] = fbeta_score(y_test, predictions_test, beta=.5)
# Success
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# Return the results
return results_____no_output_____
</code>
### Implementation: Initial Model Evaluation
In the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
- Use a `'random_state'` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
- Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.
**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!_____no_output_____
<code>
# Import the three supervised learning models from sklearn
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
# Initialize the three models
clf_A = LinearSVC(random_state=42)
clf_B = LogisticRegression(random_state=42)
clf_C = KNeighborsClassifier()
# Calculate the number of samples for 1%, 10%, and 100% of the training data
n = len(y_train)
samples_1 = int(round(n / 100.0))
samples_10 = int(round(n / 10.0))
samples_100 = n
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)LinearSVC trained on 362 samples.
LinearSVC trained on 3618 samples.
LinearSVC trained on 36177 samples.
LogisticRegression trained on 362 samples.
LogisticRegression trained on 3618 samples.
LogisticRegression trained on 36177 samples.
KNeighborsClassifier trained on 362 samples.
KNeighborsClassifier trained on 3618 samples.
KNeighborsClassifier trained on 36177 samples.
</code>
----
## Improving Results
In this final section, you will choose from the three supervised learning models the *best* model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score. _____no_output_____### Question 3 - Choosing the Best Model
*Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000.*
**Hint:** Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data._____no_output_____**Answer: **
| | training time | predicting time | training set scores | testing set scores |
|----------------------|---------------|-----------------|---------------------|--------------------|
| **LinearSVC** | ++ | +++ | +++ | **+++** |
| LogisticRegression | +++ | +++ | ++ | ++ |
| KNeighborsClassifier | + | --- | +++ | + |
Based on the evaluation, the *linear SVC* model is the most appropriate for the task. Compared to the other two models in testing, it is both fast and has the highest scores. While *linear SVC* and *logistic regression* have almost the same accuracy, the *f-score* of *linear SVC* is slightly higher, indicating that a larger share of the individuals it predicts as making more than \$50,000 actually do.
Although *logistic regression* has a shorter training time, it has a worse *f-score* when applied to the testing set. In a real-world setting the testing scores are the metrics that matter, and the moderately longer training time can be ignored. *K-neighbors* outperforms the other two only on the training scores; its much poorer testing score shows that the model overfits._____no_output_____### Question 4 - Describing the Model in Layman's Terms
*In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.*_____no_output_____**Answer: **
As the final model we used a classifier called *linear SVC*, which assumes that the underlying data are linearly separable. In a simplified 2-dimensional space this technique attempts to find the best line that separates the two classes of points with the largest margin. In higher dimensional space, the algorithm searches for the best hyperplane following the same principle by maximizing the margin (e.g. a plane in 3-dimensional space).
In our case, during the training phase the algorithm finds the separator that divides the classes in the training data with the maximum margin. With the trained model, the algorithm can then predict labels for previously unseen examples.
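As a toy illustration of this idea (a sketch added here with made-up data, not part of the original project), fitting a linear SVM on four hand-made 2-dimensional points recovers a separating line described by a coefficient vector `w` and an intercept `b`:

<code>
# Toy illustration of a linear maximum-margin separator on made-up 2-D data.
import numpy as np
from sklearn.svm import LinearSVC
X_toy = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
y_toy = np.array([0, 0, 1, 1])
toy_clf = LinearSVC(random_state=42).fit(X_toy, y_toy)
# The learned separator is the line w . x + b = 0; new points are classified by the sign of w . x + b.
print("w = {}, b = {}".format(toy_clf.coef_, toy_clf.intercept_))
print("predictions: {}".format(toy_clf.predict(X_toy)))
</code>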
_____no_output_____### Implementation: Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Initialize the classifier you've chosen and store it in `clf`.
- Set a `random_state` if one is available to the same state you set before.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available!
- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).
- Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.
**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!_____no_output_____
<code>
# Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# Initialize the classifier
clf = LinearSVC(random_state=42)
# Create the parameters list you wish to tune
parameters = {
'C': [.1, .5, 1.0, 5.0, 10.0],
'loss': ['hinge', 'squared_hinge'],
'tol': [1e-3, 1e-4, 1e-5],
'random_state': [0, 42, 10000]
}
# Make an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=.5)
# Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
# Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train, y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))Unoptimized model
------
Accuracy score on testing data: 0.8507
F-score on testing data: 0.7054
Optimized Model
------
Final accuracy score on the testing data: 0.8514
Final F-score on the testing data: 0.7063
# print optimized parameters
print("Optimized params for Linear SVM: {}".format(
grid_fit.best_params_
))Optimized params for Linear SVM: {'loss': 'squared_hinge', 'C': 10.0, 'random_state': 0, 'tol': 0.001}
</code>
### Question 5 - Final Model Evaluation
_What is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?_
**Note:** Fill in the table below with your results, and then provide discussion in the **Answer** box._____no_output_____#### Results:
| Metric | Benchmark Predictor | Unoptimized Model | Optimized Model |
| :------------: | :-----------------: | :---------------: | :-------------: |
| Accuracy Score | .2478 | .8507 | .8514 |
| F-score | .2917 | .7054 | .7063 |
_____no_output_____**Answer: **
The scores of the *optimized model* are a bit better than those of the *unoptimized model*, showing that the default parameters were already quite good. Compared to the *benchmark predictor*, the optimized model performs many times better, with far higher accuracy and f-scores._____no_output_____----
## Feature Importance
An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.
Choose a scikit-learn classifier (e.g., adaboost, random forests) that has a `feature_importances_` attribute, which ranks the importance of features according to the chosen classifier. In the next Python cell, fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset._____no_output_____### Question 6 - Feature Relevance Observation
When **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data.
_Of these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why?______no_output_____**Answer:**
Of the thirteen available features, we believe the following five are the most important for prediction, ordered with the most important at the top. This is based on looking at the data and on general patterns in society:
* *occupation*: different occupations usually have different salary ranges
* *capital-gain*: the rich get richer; capital gain is an indicator of personal wealth
* *education-level*: when employed, people with higher education tend to be better paid
* *hours-per-week*: part-time jobs often pay less
* *age*: older people tend to earn more_____no_output_____### Implementation - Extracting Feature Importance
Choose a `scikit-learn` supervised learning algorithm that has a `feature_importances_` attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using `'.feature_importances_'`._____no_output_____
<code>
# Import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import AdaBoostClassifier
# Train the supervised model on the training set
model = AdaBoostClassifier(random_state=42).fit(X_train, y_train)
# Extract the feature importances
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)_____no_output_____
</code>
### Question 7 - Extracting Feature Importance
Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
_How do these five features compare to the five features you discussed in **Question 6**? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant?______no_output_____
<code>
# print top 10 features importances
def rank_features(features, scores, descending=True, n=10):
"""
sorts and cuts features by scores.
:return: array of [feature name, score] tuples
"""
return sorted(
[[f, s] for f, s in zip(features, scores) if s],
key=lambda x: x[1],
reverse=descending
)[:n]
rank_features(
features.columns,
importances
)_____no_output_____# capital-loss and income have a positive correlation
features['capital-loss'].corr(income)_____no_output_____
</code>
**Answer:**
From the top 5 features selected by *AdaBoostClassifier* we got 4 hits (*age*, *capital-gain*, *hours-per-week* and *education-level*). That *capital-loss* has such a big influence is really surprising; looking at the cell above, *income* and *capital-loss* are even positively correlated. Our top guess, *occupation*, didn't even make the top 10 feature importances.
The visualization of the feature importances confirms our intuition that older, more highly educated people with capital (a large *capital-loss* or *capital-gain*) tend to have higher incomes. These observations are now quantified and explained by the classifier._____no_output_____### Feature Selection
How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*. _____no_output_____
<code>
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
start = time()
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
training_time_reduced = time() - start
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))Final Model trained on full data
------
Accuracy on testing data: 0.8514
F-score on testing data: 0.7063
Final Model trained on reduced data
------
Accuracy on testing data: 0.8096
F-score on testing data: 0.5983
# compare scores
def relative_diff_pct(x, y):
"""
returns the relative difference between x and y in percent.
"""
return 100 * ((y - x) / x)
print('Relative Diff. of accuracy-scores: {0:.2f}%'.format(
relative_diff_pct(.8514, .8096)
))
print('Relative Diff. of f-scores: {0:.2f}%'.format(
relative_diff_pct(.7063, .5983)
))Relative Diff. of accuracy-scores: -4.91%
Relative Diff. of f-scores: -15.29%
# Train with full data
start = time()
clf = (clone(best_clf)).fit(X_train, y_train)
training_time_full = time() - start
print('Relative Diff. of training times: {0:.2f}%'.format(
relative_diff_pct(training_time_reduced, training_time_full)
))Relative Diff. of training times: 94.68%
</code>
### Question 8 - Effects of Feature Selection
*How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?*
*If training time was a factor, would you consider using the reduced data as your training set?*_____no_output_____**Answer:**_____no_output_____Both the accuracy and f-scores dropped when using only the top 5 features. The f-scores differ by **over 15%**, while the accuracy scores differ by only about 5%.
Training on the reduced data was more than **2 times faster** than training on the full data set. In other scenarios where training time or computing power is a high priority, we would consider using the reduced data. But since the difference between the f-scores is considerably large, we would use the full training set for this problem._____no_output_____## References
* [Udacity - Machine Learning](https://classroom.udacity.com/courses/ud262)
* [Laurad Hamilton - ML Cheat Sheet](http://www.lauradhamilton.com/machine-learning-algorithm-cheat-sheet)
* [Scikit-Learn - ML Modules](http://scikit-learn.org/stable/modules/sgd.html)
* [Scikit-Learn - AdaBoostClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html)
* [rcompton - Supervised Learning Superstitions](https://github.com/rcompton/ml_cheat_sheet)
* [Wikipedia - Support Vector Machine](https://en.wikipedia.org/wiki/Support_vector_machine#Definition)
* [Quora - SVM in layman's terms](https://www.quora.com/What-does-support-vector-machine-SVM-mean-in-laymans-terms)_____no_output_____## Reproduction Environment_____no_output_____
<code>
import IPython
print IPython.sys_info(){'commit_hash': u'5c9c918',
'commit_source': 'installation',
'default_encoding': 'cp936',
'ipython_path': 'C:\\dev\\anaconda\\lib\\site-packages\\IPython',
'ipython_version': '5.1.0',
'os_name': 'nt',
'platform': 'Windows-7-6.1.7601-SP1',
'sys_executable': 'C:\\dev\\anaconda\\python.exe',
'sys_platform': 'win32',
'sys_version': '2.7.13 |Anaconda custom (32-bit)| (default, Dec 19 2016, 13:36:02) [MSC v.1500 32 bit (Intel)]'}
!pip freezealabaster==0.7.9
anaconda-client==1.6.0
anaconda-navigator==1.4.3
argcomplete==1.0.0
astroid==1.4.9
astropy==1.3
Babel==2.3.4
backports-abc==0.5
backports.shutil-get-terminal-size==1.0.0
backports.ssl-match-hostname==3.4.0.2
beautifulsoup4==4.5.3
bitarray==0.8.1
blaze==0.10.1
bokeh==0.12.4
boto==2.45.0
Bottleneck==1.2.0
cdecimal==2.3
cffi==1.9.1
chardet==2.3.0
chest==0.2.3
click==6.7
cloudpickle==0.2.2
clyent==1.2.2
colorama==0.3.7
comtypes==1.1.2
conda==4.3.16
configobj==5.0.6
configparser==3.5.0
contextlib2==0.5.4
cookies==2.2.1
cryptography==1.7.1
cycler==0.10.0
Cython==0.25.2
cytoolz==0.8.2
dask==0.13.0
datashape==0.5.4
decorator==4.0.11
dill==0.2.5
docutils==0.13.1
enum34==1.1.6
et-xmlfile==1.0.1
fastcache==1.0.2
Flask==0.12
Flask-Cors==3.0.2
funcsigs==1.0.2
functools32==3.2.3.post2
futures==3.0.5
gevent==1.2.1
glueviz==0.9.1
greenlet==0.4.11
grin==1.2.1
h5py==2.6.0
HeapDict==1.0.0
idna==2.2
imagesize==0.7.1
ipaddress==1.0.18
ipykernel==4.5.2
ipython==5.1.0
ipython-genutils==0.1.0
ipywidgets==5.2.2
isort==4.2.5
itsdangerous==0.24
jdcal==1.3
jedi==0.9.0
Jinja2==2.9.4
jsonschema==2.5.1
jupyter==1.0.0
jupyter-client==4.4.0
jupyter-console==5.0.0
jupyter-core==4.2.1
lazy-object-proxy==1.2.2
llvmlite==0.15.0
locket==0.2.0
lxml==3.7.2
MarkupSafe==0.23
matplotlib==2.0.0
menuinst==1.4.4
mistune==0.7.3
mpmath==0.19
multipledispatch==0.4.9
nbconvert==4.2.0
nbformat==4.2.0
networkx==1.11
nltk==3.2.2
nose==1.3.7
notebook==4.4.1
numba==0.30.1+0.g8c1033f.dirty
numexpr==2.6.1
numpy==1.11.3
numpydoc==0.6.0
odo==0.5.0
openpyxl==2.4.1
pandas==0.19.2
partd==0.3.7
path.py==0.0.0
pathlib2==2.2.0
patsy==0.4.1
pep8==1.7.0
pickleshare==0.7.4
Pillow==4.0.0
ply==3.9
prompt-toolkit==1.0.9
psutil==5.0.1
py==1.4.32
pyasn1==0.1.9
pycosat==0.6.1
pycparser==2.17
pycrypto==2.6.1
pycurl==7.43.0
pyflakes==1.5.0
pygame==1.9.3
Pygments==2.1.3
pylint==1.6.4
pymongo==3.3.0
pyOpenSSL==16.2.0
pyparsing==2.1.4
pytest==3.0.5
python-dateutil==2.6.0
pytz==2016.10
pywin32==220
PyYAML==3.12
pyzmq==16.0.2
QtAwesome==0.4.3
qtconsole==4.2.1
QtPy==1.2.1
requests==2.12.4
requests-file==1.4.1
responses==0.5.1
rope==0.9.4
scandir==1.4
scikit-image==0.12.3
scikit-learn==0.18.1
scipy==0.18.1
seaborn==0.7.1
simplegeneric==0.8.1
singledispatch==3.4.0.3
six==1.10.0
snowballstemmer==1.2.1
sockjs-tornado==1.0.3
sphinx==1.5.1
spyder==3.1.2
SQLAlchemy==1.1.5
statsmodels==0.6.1
subprocess32==3.2.7
sympy==1.0
tables==3.2.2
toolz==0.8.2
tornado==4.4.2
traitlets==4.3.1
unicodecsv==0.14.1
wcwidth==0.1.7
Werkzeug==0.11.15
widgetsnbextension==1.2.6
win-unicode-console==0.5
wrapt==1.10.8
xlrd==1.0.0
XlsxWriter==0.9.6
xlwings==0.10.2
xlwt==1.2.0
</code>
| {
"repository": "superkley/udacity-mlnd",
"path": "p2_sl_finding_donors/p2_sl_finding_donors.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": 4,
"size": 241408,
"hexsha": "d00ee5dbde43cd3d5af4ecf610aa3f3dec799d12",
"max_line_length": 80096,
"avg_line_length": 144.6423007789,
"alphanum_fraction": 0.8568108762
} |
# Notebook from jfdahl/Advent-of-Code-2019
Path: README.ipynb
# Advent-of-Code-2019
## About Advent of Code
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.
## Websites
https://adventofcode.com/
ServiceNow Leaderboard: https://adventofcode.com/2019/leaderboard/private/view/109388
PySlackers Leaderboard: https://adventofcode.com/2019/leaderboard/private/view/52704
## 2019 theme
Santa has become stranded at the edge of the Solar System while delivering presents to other planets!
To accurately calculate his position in space, safely align his warp drive, and
return to Earth in time to save Christmas, he needs you to bring him measurements from fifty stars.
Collect stars by solving puzzles. Two puzzles will be made available on each day in the Advent calendar;
the second puzzle is unlocked when you complete the first.
Each puzzle grants one star. Good luck!
_____no_output_____
| {
"repository": "jfdahl/Advent-of-Code-2019",
"path": "README.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 1935,
"hexsha": "d00f3c747872b4a0f4a5987a69c25479305ef0d3",
"max_line_length": 320,
"avg_line_length": 32.25,
"alphanum_fraction": 0.6217054264
} |
# Notebook from Akshat2127/Part-Of-Speech-Tagging
Path: HMM TaggerPart of Speech Tagging - HMM.ipynb
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, we'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

_____no_output_____### The Road Ahead
We will complete this project in 3 steps mentioned below. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger_____no_output_____<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>_____no_output_____
<code>
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution_____no_output_____
</code>
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). We should get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html).
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```_____no_output_____
<code>
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
</code>
### The Dataset Interface
We can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 7 # there are a total of seven observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>_____no_output_____#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`._____no_output_____
<code>
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
</code>
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
We can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`._____no_output_____
<code>
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
</code>
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset._____no_output_____
<code>
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
</code>
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus._____no_output_____
<code>
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
</code>
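Since `stream()` yields every (word, tag) pair, simple aggregate statistics can be computed directly from it. For example (an illustrative aside, not part of the original notebook), `collections.Counter` gives the overall tag frequencies in the training set:

<code>
# Illustrative aside: overall tag frequencies in the training set, computed from the stream.
from collections import Counter
tag_counts = Counter(tag for _, tag in data.training_set.stream())
print(tag_counts.most_common(3))
</code>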
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. The next several cells will complete functions to compute several sets of these counts. _____no_output_____## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus._____no_output_____### IMPLEMENTATION: Pair Counts
The function below computes the joint frequency counts for two input sequences._____no_output_____
<code>
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
pair_dict = {}
for tag in sequences_A:
pair_dict[tag] = {}
for word, tag in (sequences_B):
if word in pair_dict[tag]:
pair_dict[tag][word] = pair_dict[tag][word] + 1
else:
pair_dict[tag][word] = 1
return pair_dict
# Calculate C(t_i, w_i)
emission_counts = pair_counts(data.tagset, data.stream())
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')_____no_output_____
</code>
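As a quick optional peek at the structure returned by `pair_counts()` (an aside added for illustration), the inner dictionary for a tag can be inspected with `collections.Counter`, for example the words most frequently counted under the NOUN tag:

<code>
# Illustrative aside: the five words most frequently counted under the NOUN tag.
from collections import Counter
print(Counter(emission_counts["NOUN"]).most_common(5))
</code>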
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably._____no_output_____
<code>
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.tagset, data.training_set.stream())
def getMaxFreq(word, counts):
maxFreq = -1
#maxFreqTag
for key, value in counts.items():
if word in value:
if value[word] > maxFreq:
maxFreq = value[word]
maxFreqTag = key
return maxFreqTag
def GetVocabFrequencies(vocab, counts):
word_freq = {}
for word in vocab:
word_freq[word] = getMaxFreq(word, counts)
return word_freq
mfc_table = GetVocabFrequencies(data.training_set.vocab, word_counts)
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')_____no_output_____
</code>
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger._____no_output_____
<code>
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions_____no_output_____
</code>
### Example Decoding Sequences with MFC Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
</code>
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. _____no_output_____
<code>
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [(), (), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions_____no_output_____
</code>
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus._____no_output_____
<code>
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.00%
</code>
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag and is parameterized by two distributions: the emission probabilities, giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities, giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information._____no_output_____### IMPLEMENTATION: Unigram Counts
Complete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$_____no_output_____
<code>
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
unigram_counts ={}
for i in range(len(sequences)):
for j in range(len(sequences[i])):
if sequences[i][j] in unigram_counts:
unigram_counts[sequences[i][j]] = unigram_counts[sequences[i][j]] + 1
else:
unigram_counts[sequences[i][j]] = 1
return unigram_counts
# call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
print((tag_unigrams))
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>'){'ADV': 44877, 'NOUN': 220632, '.': 117757, 'VERB': 146161, 'ADP': 115808, 'ADJ': 66754, 'CONJ': 30537, 'DET': 109671, 'PRT': 23906, 'NUM': 11878, 'PRON': 39383, 'X': 1094}
</code>
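If you later want the unigram probabilities $P(tag) = \frac{C(tag)}{N}$ from the formula above, the counts can be normalized directly. A minimal sketch follows (the variable names are only illustrative and this step is not required by the project assertions):
<code>
# illustrative only: convert the tag counts into unigram probabilities P(tag) = C(tag) / N
total_tag_count = sum(tag_unigrams.values())          # N: total number of tag samples
tag_probabilities = {tag: count / total_tag_count for tag, count in tag_unigrams.items()}
print(tag_probabilities['NOUN'])                      # relative frequency of NOUN
</code>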
### IMPLEMENTATION: Bigram Counts
Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
_____no_output_____
<code>
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
bigram_counts ={}
for i in range(len(sequences)):
for j in range(len(sequences[i]) - 1):
pair = (sequences[i][j], sequences[i][j + 1])
if pair in bigram_counts:
bigram_counts[pair] = bigram_counts[pair] + 1
else:
bigram_counts[pair] = 1
return bigram_counts
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to estimate the bigram probabilities of a sequence starting with each tag._____no_output_____
<code>
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
starting_counts = {}
for i in range(len(sequences)):
if sequences[i][0] in starting_counts:
starting_counts[sequences[i][0]] = starting_counts[sequences[i][0]] + 1
else:
starting_counts[sequences[i][0]] = 1
return starting_counts
# Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to estimate the bigram probabilities of a sequence ending with each tag._____no_output_____
<code>
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_ending_counts[DET] == 18
"""
ending_counts = {}
for i in range(len(sequences)):
last_idx = len(sequences[i]) - 1
if sequences[i][last_idx] in ending_counts:
ending_counts[sequences[i][last_idx]] = ending_counts[sequences[i][last_idx]] + 1
else:
ending_counts[sequences[i][last_idx]] = 1
return ending_counts
# Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')_____no_output_____
</code>
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$_____no_output_____
<code>
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
states = {}
for tag in emission_counts:
tag_count = tag_unigrams[tag]
# emission distribution for this tag: P(w|t) = C(t, w) / C(t)
prob_distribution = {word : word_count/tag_count for word, word_count in emission_counts[tag].items() }
state = State(DiscreteDistribution(prob_distribution), name=tag)
states[tag] = state
basic_model.add_states(state)
for tag_pair in tag_bigrams.keys():
training_set_count = len(data.training_set.Y)
# edge from the start state to the first tag of the pair, estimated as the fraction of sequences starting with that tag
start_prob = tag_starts[tag_pair[0]]/training_set_count
basic_model.add_transition(basic_model.start, states[tag_pair[0]], start_prob)
# edge between the two tags of the pair: P(t2|t1) = C(t1, t2) / C(t1)
trans_prob = tag_bigrams[tag_pair]/tag_unigrams[tag_pair[0]]
basic_model.add_transition(states[tag_pair[0]], states[tag_pair[1]], trans_prob)
# edge from the first tag of the pair to the end state, estimated as the fraction of sequences ending with that tag
end_prob = tag_ends[tag_pair[0]]/training_set_count
basic_model.add_transition(states[tag_pair[0]], basic_model.end, end_prob)
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')_____no_output_____hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')training accuracy basic hmm model: 97.54%
testing accuracy basic hmm model: 96.18%
</code>
### Example Decoding Sequences with the HMM Tagger_____no_output_____
<code>
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
</code>
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger to improve performance on larger tagsets, where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means fewer samples per tag and more tag/word combinations with zero occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero pseudocount to every count so that unobserved events no longer receive zero probability (a sketch of the idea is shown after this list).
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
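As a rough illustration of the Laplace smoothing idea mentioned above, an emission distribution for one tag could be built with a pseudocount `alpha`. This is only a sketch, not part of the graded implementation; it assumes `emission_counts` and the training vocabulary are available as computed earlier, and the helper name is hypothetical:
<code>
def smoothed_emission_distribution(tag, alpha=0.01):
    """Sketch of a Laplace-smoothed emission distribution P(word | tag).

    Every word in the training vocabulary receives `alpha` pseudo-occurrences,
    so unseen (tag, word) pairs no longer get probability zero.
    """
    vocab = data.training_set.vocab
    total = sum(emission_counts[tag].values())
    denominator = total + alpha * len(vocab)
    return {word: (emission_counts[tag].get(word, 0) + alpha) / denominator
            for word in vocab}

# example: even a word never observed as a NOUN now gets a small non-zero probability
noun_distribution = smoothed_emission_distribution('NOUN')
</code>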
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data out in the format specified in Step 1, then you can reload it using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets._____no_output_____
<code>
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]_____no_output_____
</code>
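One possible way to map the detailed Brown tags down to the 12 universal tags used above is NLTK's built-in tagset mapping. This is only a sketch of that route, assuming the `universal_tagset` resource can be downloaded:
<code>
# map the detailed Brown tags onto NLTK's 12-tag universal tagset
nltk.download('universal_tagset')
universal_sents = training_corpus.tagged_sents(tagset='universal')
print(universal_sents[0][:5])   # first few (word, tag) pairs of the first sentence
</code>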
| {
"repository": "Akshat2127/Part-Of-Speech-Tagging",
"path": "HMM TaggerPart of Speech Tagging - HMM.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 48779,
"hexsha": "d013486eff30978dcc5e9a567d974bc2d87e2552",
"max_line_length": 600,
"avg_line_length": 42.196366782,
"alphanum_fraction": 0.5817257426
} |
# Notebook from feberhardt/stardist
Path: examples/2D/2_training.ipynb
<code>
from __future__ import print_function, unicode_literals, absolute_import, division
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from glob import glob
from tqdm import tqdm
from tifffile import imread
from csbdeep.utils import Path, normalize
from stardist import fill_label_holes, random_label_cmap, calculate_extents, gputools_available
from stardist.models import Config2D, StarDist2D, StarDistData2D
np.random.seed(42)
lbl_cmap = random_label_cmap()Using TensorFlow backend.
</code>
# Data
We assume that data has already been downloaded via notebook [1_data.ipynb](1_data.ipynb).
<div class="alert alert-block alert-info">
Training data (for input `X` with associated label masks `Y`) can be provided via lists of numpy arrays, where each image can have a different size. Alternatively, a single numpy array can also be used if all images have the same size.
Input images can either be two-dimensional (single-channel) or three-dimensional (multi-channel) arrays, where the channel axis comes last. Label images need to be integer-valued.
</div>_____no_output_____
<code>
X = sorted(glob('data/dsb2018/train/images/*.tif'))
Y = sorted(glob('data/dsb2018/train/masks/*.tif'))
assert all(Path(x).name==Path(y).name for x,y in zip(X,Y))_____no_output_____X = list(map(imread,X))
Y = list(map(imread,Y))
n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]_____no_output_____
</code>
Normalize images and fill small label holes._____no_output_____
<code>
axis_norm = (0,1) # normalize channels independently
# axis_norm = (0,1,2) # normalize channels jointly
if n_channel > 1:
print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))
sys.stdout.flush()
X = [normalize(x,1,99.8,axis=axis_norm) for x in tqdm(X)]
Y = [fill_label_holes(y) for y in tqdm(Y)]100%|██████████| 447/447 [00:01<00:00, 462.35it/s]
100%|██████████| 447/447 [00:04<00:00, 111.61it/s]
</code>
Split into train and validation datasets._____no_output_____
<code>
assert len(X) > 1, "not enough training data"
rng = np.random.RandomState(42)
ind = rng.permutation(len(X))
n_val = max(1, int(round(0.15 * len(ind))))
ind_train, ind_val = ind[:-n_val], ind[-n_val:]
X_val, Y_val = [X[i] for i in ind_val] , [Y[i] for i in ind_val]
X_trn, Y_trn = [X[i] for i in ind_train], [Y[i] for i in ind_train]
print('number of images: %3d' % len(X))
print('- training: %3d' % len(X_trn))
print('- validation: %3d' % len(X_val))number of images: 447
- training: 380
- validation: 67
</code>
Training data consists of pairs of input image and label instances._____no_output_____
<code>
i = min(9, len(X)-1)
img, lbl = X[i], Y[i]
assert img.ndim in (2,3)
img = img if img.ndim==2 else img[...,:3]
plt.figure(figsize=(16,10))
plt.subplot(121); plt.imshow(img,cmap='gray'); plt.axis('off'); plt.title('Raw image')
plt.subplot(122); plt.imshow(lbl,cmap=lbl_cmap); plt.axis('off'); plt.title('GT labels')
None;_____no_output_____
</code>
# Configuration
A `StarDist2D` model is specified via a `Config2D` object._____no_output_____
<code>
print(Config2D.__doc__)Configuration for a :class:`StarDist2D` model.
Parameters
----------
axes : str or None
Axes of the input images.
n_rays : int
Number of radial directions for the star-convex polygon.
Recommended to use a power of 2 (default: 32).
n_channel_in : int
Number of channels of given input image (default: 1).
grid : (int,int)
Subsampling factors (must be powers of 2) for each of the axes.
Model will predict on a subsampled grid for increased efficiency and larger field of view.
backbone : str
Name of the neural network architecture to be used as backbone.
kwargs : dict
Overwrite (or add) configuration attributes (see below).
Attributes
----------
unet_n_depth : int
Number of U-Net resolution levels (down/up-sampling layers).
unet_kernel_size : (int,int)
Convolution kernel size for all (U-Net) convolution layers.
unet_n_filter_base : int
Number of convolution kernels (feature channels) for first U-Net layer.
Doubled after each down-sampling layer.
unet_pool : (int,int)
Maxpooling size for all (U-Net) convolution layers.
net_conv_after_unet : int
Number of filters of the extra convolution layer after U-Net (0 to disable).
unet_* : *
Additional parameters for U-net backbone.
train_shape_completion : bool
Train model to predict complete shapes for partially visible objects at image boundary.
train_completion_crop : int
If 'train_shape_completion' is set to True, specify number of pixels to crop at boundary of training patches.
Should be chosen based on (largest) object sizes.
train_patch_size : (int,int)
Size of patches to be cropped from provided training images.
train_background_reg : float
Regularizer to encourage distance predictions on background regions to be 0.
train_dist_loss : str
Training loss for star-convex polygon distances ('mse' or 'mae').
train_loss_weights : tuple of float
Weights for losses relating to (probability, distance)
train_epochs : int
Number of training epochs.
train_steps_per_epoch : int
Number of parameter update steps per epoch.
train_learning_rate : float
Learning rate for training.
train_batch_size : int
Batch size for training.
train_n_val_patches : int
Number of patches to be extracted from validation images (``None`` = one patch per image).
train_tensorboard : bool
Enable TensorBoard for monitoring training progress.
train_reduce_lr : dict
Parameter :class:`dict` of ReduceLROnPlateau_ callback; set to ``None`` to disable.
use_gpu : bool
Indicate that the data generator should use OpenCL to do computations on the GPU.
.. _ReduceLROnPlateau: https://keras.io/callbacks/#reducelronplateau
# 32 is a good default choice (see 1_data.ipynb)
n_rays = 32
# Use OpenCL-based computations for data generator during training (requires 'gputools')
use_gpu = False and gputools_available()
# Predict on subsampled grid for increased efficiency and larger field of view
grid = (2,2)
conf = Config2D (
n_rays = n_rays,
grid = grid,
use_gpu = use_gpu,
n_channel_in = n_channel,
)
print(conf)
vars(conf)Config2D(axes='YXC', backbone='unet', grid=(2, 2), n_channel_in=1, n_channel_out=33, n_dim=2, n_rays=32, net_conv_after_unet=128, net_input_shape=(None, None, 1), net_mask_shape=(None, None, 1), train_background_reg=0.0001, train_batch_size=4, train_checkpoint='weights_best.h5', train_checkpoint_epoch='weights_now.h5', train_checkpoint_last='weights_last.h5', train_completion_crop=32, train_dist_loss='mae', train_epochs=400, train_learning_rate=0.0003, train_loss_weights=(1, 0.2), train_n_val_patches=None, train_patch_size=(256, 256), train_reduce_lr={'factor': 0.5, 'patience': 40, 'min_delta': 0}, train_shape_completion=False, train_steps_per_epoch=100, train_tensorboard=True, unet_activation='relu', unet_batch_norm=False, unet_dropout=0.0, unet_kernel_size=(3, 3), unet_last_activation='relu', unet_n_conv_per_depth=2, unet_n_depth=3, unet_n_filter_base=32, unet_pool=(2, 2), unet_prefix='', use_gpu=False)
if use_gpu:
from csbdeep.utils.tf import limit_gpu_memory
# adjust as necessary: limit GPU memory to be used by TensorFlow to leave some to OpenCL-based computations
limit_gpu_memory(0.8)_____no_output_____
</code>
**Note:** The trained `StarDist2D` model will *not* predict completed shapes for partially visible objects at the image boundary if `train_shape_completion=False` (which is the default option)._____no_output_____
<code>
model = StarDist2D(conf, name='stardist', basedir='models')Using default values: prob_thresh=0.5, nms_thresh=0.4.
</code>
Check if the neural network has a large enough field of view to see up to the boundary of most objects._____no_output_____
<code>
median_size = calculate_extents(list(Y), np.median)
fov = np.array(model._axes_tile_overlap('YX'))
if any(median_size > fov):
print("WARNING: median object size larger than field of view of the neural network.")_____no_output_____
</code>
# Training_____no_output_____You can define a function/callable that applies augmentation to each batch of the data generator._____no_output_____
<code>
augmenter = None
# def augmenter(X_batch, Y_batch):
# """Augmentation for data batch.
# X_batch is a list of input images (length at most batch_size)
# Y_batch is the corresponding list of ground-truth label images
# """
# # ...
# return X_batch, Y_batch_____no_output_____
</code>
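If you do want some augmentation, a minimal sketch of a flip-based augmenter could look like the following. This is only an illustration (assuming 2D images whose label masks share the same spatial shape, with the hypothetical name `simple_flip_augmenter`), not part of the official example:
<code>
def simple_flip_augmenter(X_batch, Y_batch):
    """Randomly flip each image/mask pair along the spatial axes (sketch only)."""
    X_aug, Y_aug = [], []
    for x, y in zip(X_batch, Y_batch):
        if np.random.rand() < 0.5:
            x, y = np.flip(x, axis=0), np.flip(y, axis=0)   # vertical flip
        if np.random.rand() < 0.5:
            x, y = np.flip(x, axis=1), np.flip(y, axis=1)   # horizontal flip
        X_aug.append(x)
        Y_aug.append(y)
    return X_aug, Y_aug

# augmenter = simple_flip_augmenter   # then pass augmenter=augmenter to model.train(...)
</code>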
We recommend monitoring the progress during training with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard). You can start it in the shell from the current working directory like this:
$ tensorboard --logdir=.
Then connect to [http://localhost:6006/](http://localhost:6006/) with your browser.
_____no_output_____
<code>
quick_demo = True
if quick_demo:
print (
"NOTE: This is only for a quick demonstration!\n"
" Please set the variable 'quick_demo = False' for proper (long) training.",
file=sys.stderr, flush=True
)
model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter,
epochs=2, steps_per_epoch=10)
print("====> Stopping training and loading previously trained demo model from disk.", file=sys.stderr, flush=True)
model = StarDist2D(None, name='2D_demo', basedir='../../models/examples')
model.basedir = None # to prevent files of the demo model to be overwritten (not needed for your model)
else:
model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter)
None;NOTE: This is only for a quick demonstration!
Please set the variable 'quick_demo = False' for proper (long) training.
</code>
# Threshold optimization_____no_output_____While the default values for the probability and non-maximum suppression thresholds already yield good results in many cases, we still recommend adapting the thresholds to your data. The optimized threshold values are saved to disk and will be automatically loaded with the model._____no_output_____
<code>
model.optimize_thresholds(X_val, Y_val)NMS threshold = 0.3: 80%|████████ | 16/20 [00:46<00:17, 4.42s/it, 0.485 -> 0.796]
NMS threshold = 0.4: 80%|████████ | 16/20 [00:46<00:17, 4.45s/it, 0.485 -> 0.796]
NMS threshold = 0.5: 80%|████████ | 16/20 [00:50<00:18, 4.63s/it, 0.485 -> 0.796]
</code>
| {
"repository": "feberhardt/stardist",
"path": "examples/2D/2_training.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 163992,
"hexsha": "d016627ff8fe655546e649bd56766c38afd627c1",
"max_line_length": 144576,
"avg_line_length": 286.1989528796,
"alphanum_fraction": 0.9124896336
} |
# Notebook from fenago/Applied_Data_Analytics
Path: Chapter01/Exercise1.03/Exercise1.03.ipynb
# Understanding the data
In this first part, we load the data and perform some initial exploration on it. The main goal of this step is to acquire some basic knowledge about the data: how the various features are distributed, whether there are missing values, and so on._____no_output_____
<code>
### imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# load hourly data
hourly_data = pd.read_csv('../data/hour.csv')_____no_output_____
</code>
Check data format, number of missing values in the data and general statistics:_____no_output_____
<code>
# print some generic statistics about the data
print(f"Shape of data: {hourly_data.shape}")
print(f"Number of missing values in the data: {hourly_data.isnull().sum().sum()}")
# get statistics on the numerical columns
hourly_data.describe().T_____no_output_____# create a copy of the original data
preprocessed_data = hourly_data.copy()
# transform seasons
seasons_mapping = {1: 'winter', 2: 'spring', 3: 'summer', 4: 'fall'}
preprocessed_data['season'] = preprocessed_data['season'].apply(lambda x: seasons_mapping[x])
# transform yr
yr_mapping = {0: 2011, 1: 2012}
preprocessed_data['yr'] = preprocessed_data['yr'].apply(lambda x: yr_mapping[x])
# transform weekday
weekday_mapping = {0: 'Sunday', 1: 'Monday', 2: 'Tuesday', 3: 'Wednesday', 4: 'Thursday', 5: 'Friday', 6: 'Saturday'}
preprocessed_data['weekday'] = preprocessed_data['weekday'].apply(lambda x: weekday_mapping[x])
# transform weathersit
weather_mapping = {1: 'clear', 2: 'cloudy', 3: 'light_rain_snow', 4: 'heavy_rain_snow'}
preprocessed_data['weathersit'] = preprocessed_data['weathersit'].apply(lambda x: weather_mapping[x])
# transform hum and windspeed
preprocessed_data['hum'] = preprocessed_data['hum']*100
preprocessed_data['windspeed'] = preprocessed_data['windspeed']*67
# visualize preprocessed columns
cols = ['season', 'yr', 'weekday', 'weathersit', 'hum', 'windspeed']
preprocessed_data[cols].sample(10, random_state=123)_____no_output_____
</code>
### Registered vs casual use analysis_____no_output_____
<code>
# assert that the total number of rides is equal to the sum of registered and casual ones
assert (preprocessed_data.casual + preprocessed_data.registered == preprocessed_data.cnt).all(), \
'Sum of casual and registered rides not equal to total number of rides'_____no_output_____# plot distributions of registered vs casual rides
sns.distplot(preprocessed_data['registered'], label='registered')
sns.distplot(preprocessed_data['casual'], label='casual')
plt.legend()
plt.xlabel('rides')
plt.ylabel("frequency")
plt.title("Rides distributions")
plt.savefig('figs/rides_distributions.png', format='png')_____no_output_____# plot evolution of rides over time
plot_data = preprocessed_data[['registered', 'casual', 'dteday']]
ax = plot_data.groupby('dteday').sum().plot(figsize=(10,6))
ax.set_xlabel("time");
ax.set_ylabel("number of rides per day");
plt.savefig('figs/rides_daily.png', format='png')_____no_output_____# create new dataframe with necessary for plotting columns, and
# obtain number of rides per day, by grouping over each day
plot_data = preprocessed_data[['registered', 'casual', 'dteday']]
plot_data = plot_data.groupby('dteday').sum()
# define window for computing the rolling mean and standard deviation
window = 7
rolling_means = plot_data.rolling(window).mean()
rolling_deviations = plot_data.rolling(window).std()
# create a plot of the series, where we first plot the series of rolling means,
# then we color the zone between the series of rolling means
# +- 2 rolling standard deviations
ax = rolling_means.plot(figsize=(10,6))
ax.fill_between(rolling_means.index, \
rolling_means['registered'] + 2*rolling_deviations['registered'], \
rolling_means['registered'] - 2*rolling_deviations['registered'], \
alpha = 0.2)
ax.fill_between(rolling_means.index, \
rolling_means['casual'] + 2*rolling_deviations['casual'], \
rolling_means['casual'] - 2*rolling_deviations['casual'], \
alpha = 0.2)
ax.set_xlabel("time");
ax.set_ylabel("number of rides per day");
plt.savefig('figs/rides_aggregated.png', format='png')_____no_output_____# select relevant columns
plot_data = preprocessed_data[['hr', 'weekday', 'registered', 'casual']]
# transform the data into a long format, in which the number of entries is counted
# for each distinct hr, weekday and type (registered or casual)
plot_data = plot_data.melt(id_vars=['hr', 'weekday'], var_name='type', value_name='count')
# create FacetGrid object, in which a grid plot is produced.
# As columns, we have the various days of the week,
# as rows, the different types (registered and casual)
grid = sns.FacetGrid(plot_data, row='weekday', col='type', height=2.5,\
aspect=2.5, row_order=['Monday', 'Tuesday', \
'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
# populate the FacetGrid with the specific plots
grid.map(sns.barplot, 'hr', 'count', alpha=0.5)
grid.savefig('figs/weekday_hour_distributions.png', format='png')_____no_output_____# select subset of the data
plot_data = preprocessed_data[['hr', 'season', 'registered', 'casual']]
# unpivot data from wide to long format
plot_data = plot_data.melt(id_vars=['hr', 'season'], var_name='type', \
value_name='count')
# define FacetGrid
grid = sns.FacetGrid(plot_data, row='season', \
col='type', height=2.5, aspect=2.5, \
row_order=['winter', 'spring', 'summer', 'fall'])
# apply plotting function to each element in the grid
grid.map(sns.barplot, 'hr', 'count', alpha=0.5)
# save figure
grid.savefig('figs/exercise_1_02_a.png', format='png')_____no_output_____plot_data = preprocessed_data[['weekday', 'season', 'registered', 'casual']]
plot_data = plot_data.melt(id_vars=['weekday', 'season'], var_name='type', value_name='count')
grid = sns.FacetGrid(plot_data, row='season', col='type', height=2.5, aspect=2.5,
row_order=['winter', 'spring', 'summer', 'fall'])
grid.map(sns.barplot, 'weekday', 'count', alpha=0.5,
order=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
# save figure
grid.savefig('figs/exercise_1_02_b.png', format='png')_____no_output_____
</code>
### Exercise 1.03: Estimating average registered rides_____no_output_____
<code>
# compute population mean of registered rides
population_mean = preprocessed_data.registered.mean()
# get sample of the data (summer 2011)
sample = preprocessed_data[(preprocessed_data.season == "summer") &\
(preprocessed_data.yr == 2011)].registered
# perform t-test and compute p-value
from scipy.stats import ttest_1samp
test_result = ttest_1samp(sample, population_mean)
print(f"Test statistic: {test_result[0]:.03f}, p-value: {test_result[1]:.03f}")
# get sample as 5% of the full data
import random
random.seed(111)
sample_unbiased = preprocessed_data.registered.sample(frac=0.05)
test_result_unbiased = ttest_1samp(sample_unbiased, population_mean)
print(f"Unbiased test statistic: {test_result_unbiased[0]:.03f}, p-value: {test_result_unbiased[1]:.03f}")_____no_output_____
</code>
| {
"repository": "fenago/Applied_Data_Analytics",
"path": "Chapter01/Exercise1.03/Exercise1.03.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 9809,
"hexsha": "d016d58293c8827d94f3166e0250425714ad9e10",
"max_line_length": 9809,
"avg_line_length": 9809,
"alphanum_fraction": 0.7084310327
} |
# Notebook from christianausb/vehicleControl
Path: path_following_lateral_dynamics.ipynb
<code>
import json
import math
import numpy as np
import openrtdynamics2.lang as dy
import openrtdynamics2.targets as tg
from vehicle_lib.vehicle_lib import *_____no_output_____# load track data
with open("track_data/simple_track.json", "r") as read_file:
track_data = json.load(read_file)
_____no_output_____
#
# Demo: a vehicle controlled to follow a given path
#
# Implemented using the code generator openrtdynamics 2 - https://pypi.org/project/openrtdynamics2/ .
# This generates c++ code for Web Assembly to be run within the browser.
#
system = dy.enter_system()
velocity = dy.system_input( dy.DataTypeFloat64(1), name='velocity', default_value=6.0, value_range=[0, 25], title="vehicle velocity")
max_lateral_velocity = dy.system_input( dy.DataTypeFloat64(1), name='max_lateral_velocity', default_value=1.0, value_range=[0, 4.0], title="maximal lateral velocity")
max_lateral_accleration = dy.system_input( dy.DataTypeFloat64(1), name='max_lateral_accleration', default_value=2.0, value_range=[1.0, 4.0], title="maximal lateral acceleration")
# parameters
wheelbase = 3.0
# sampling time
Ts = 0.01
# create storage for the reference path:
path = import_path_data(track_data)
# create placeholders for the plant output signals
x = dy.signal()
y = dy.signal()
psi = dy.signal()
# track the evolution of the closest point on the path to the vehicles position
projection = track_projection_on_path(path, x, y)
d_star = projection['d_star'] # the distance parameter of the path describing the closest point to the vehicle
x_r = projection['x_r'] # (x_r, y_r) the projected vehicle position on the path
y_r = projection['y_r']
psi_rr = projection['psi_r'] # the orientation angle (tangent of the path)
K_r = projection['K_r'] # the curvature of the path
Delta_l = projection['Delta_l'] # the lateral distance between vehicle and path
#
# project the vehicle velocity onto the path yielding v_star
#
# Used formula inside project_velocity_on_path:
# v_star = d d_star / dt = v * cos( Delta_u ) / ( 1 - Delta_l * K(d_star) )
#
Delta_u = dy.signal() # feedback from control
v_star = project_velocity_on_path(velocity, Delta_u, Delta_l, K_r)
dy.append_output(v_star, 'v_star')
#
# compute an enhanced (less noisy) signal for the path orientation psi_r by integrating the
# curvature profile and fusing the result with psi_rr to mitigate the integration drift.
#
psi_r, psi_r_dot = compute_path_orientation_from_curvature( Ts, v_star, psi_rr, K_r, L=1.0 )
dy.append_output(psi_rr, 'psi_rr')
dy.append_output(psi_r_dot, 'psi_r_dot')
#
# lateral open-loop control to realize an 'obstacle-avoiding maneuver'
#
# the dynamic model for the lateral distance Delta_l is
#
# d/dt Delta_l = u,
#
# meaning u is the lateral velocity which is used to control the lateral
# distance to the path.
#
# generate a velocity profile
u_move_left = dy.signal_step( dy.int32(50) ) - dy.signal_step( dy.int32(200) )
u_move_right = dy.signal_step( dy.int32(500) ) - dy.signal_step( dy.int32(350) )
# apply a rate limiter to limit the acceleration
u = dy.rate_limit( max_lateral_velocity * (u_move_left + u_move_right), Ts, dy.float64(-1) * max_lateral_accleration, max_lateral_accleration)
dy.append_output(u, 'u')
# internal lateral model (to verify the lateral dynamics of the simulated vehicle)
Delta_l_mdl = dy.euler_integrator(u, Ts)
dy.append_output(Delta_l_mdl, 'Delta_l_mdl')
#
# path tracking control
#
# Control of the lateral distance to the path can be performed via the augmented control
# variable u.
#
# Herein, a linearization yielding the resulting lateral dynamics u --> Delta_l : 1/s is applied.
#
Delta_u << dy.asin( dy.saturate(u / velocity, -0.99, 0.99) )
delta_star = psi_r - psi
delta = delta_star + Delta_u
delta = dy.unwrap_angle(angle=delta, normalize_around_zero = True)
dy.append_output(Delta_u, 'Delta_u')
dy.append_output(delta_star, 'delta_star')
#
# The model of the vehicle including a disturbance
#
# steering angle limit
delta = dy.saturate(u=delta, lower_limit=-math.pi/2.0, upper_limit=math.pi/2.0)
# the model of the vehicle
x_, y_, psi_, x_dot, y_dot, psi_dot = discrete_time_bicycle_model(delta, velocity, Ts, wheelbase)
# close the feedback loops
x << x_
y << y_
psi << psi_
#
# outputs: these are available for visualization in the html set-up
#
dy.append_output(x, 'x')
dy.append_output(y, 'y')
dy.append_output(psi, 'psi')
dy.append_output(delta, 'steering')
dy.append_output(x_r, 'x_r')
dy.append_output(y_r, 'y_r')
dy.append_output(psi_r, 'psi_r')
dy.append_output(Delta_l, 'Delta_l')
# generate code for Web Assembly (wasm), requires emcc (emscripten) to build
code_gen_results = dy.generate_code(template=tg.TargetCppWASM(), folder="generated/path_following_lateral_dynamics", build=True)
#
dy.clear()
compiling system Sys1000_optim_loop (level 1)...
compiling system simulation (level 0)...
Generated code will be written to generated/path_following_lateral_dynamics .
writing file generated/path_following_lateral_dynamics/simulation_manifest.json
writing file generated/path_following_lateral_dynamics/main.cpp
Running compiler: emcc --bind -s MODULARIZE=1 -s EXPORT_NAME="ORTD_simulator" generated/path_following_lateral_dynamics/main.cpp -O2 -s -o generated/path_following_lateral_dynamics/main.js
Compilation result: 0
import IPython
IPython.display.IFrame(src='../vehicle_control_tutorial/path_following_lateral_dynamics.html', width='100%', height=1000)_____no_output_____
</code>
| {
"repository": "christianausb/vehicleControl",
"path": "path_following_lateral_dynamics.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 8789,
"hexsha": "d016e8dd5093cd233a5ed398b452eda0e5da45c4",
"max_line_length": 201,
"avg_line_length": 32.9176029963,
"alphanum_fraction": 0.5752645352
} |
# Notebook from avkch/Python-for-beginners
Path: Python for beginners.ipynb
# Python programming for beginners
[email protected]_____no_output_____## Agenda
1. Background, why Python, [installation](#installation), IDE, setup
2. Variables, Boolean, None, numbers (integers, floating point), check type
3. List, Set, Dictionary, Tuple
4. Text and regular expressions
5. Conditions, loops
6. Objects and Functions
7. Special functions (range, enumerate, zip), Iterators
8. I/O working with files, working directory, projects
9. Packages, pip, selected packages (xmltodict, biopython, xlwings, pyautogui, sqlalchemy, cx_Oracle, pandas)
10. Errors and debugging (try, except)
11. virtual environments_____no_output_____## What is a Programming language?
_____no_output_____## Why Python
### Advantages:
* Opensource/ free - explanation
* Easy to learn
* Old
* Popular
* All purpose
* Simple syntaxis
* High level
* Scripting
* Dynamically typed
### Disadvantages:
* Old
* Dynamically typed
* Inconsistent development_____no_output_____<a id="installation"></a>
## Installation
[Python](http://python.org/)
[Anaconda](https://www.anaconda.com/products/individual)_____no_output_____## Integrated Development Environment (IDE)
* IDLE – comes with Python
* [Jupiter notebook](https://jupyter.org/install)
* [google colab](https://colab.research.google.com/notebooks/basic_features_overview.ipynb#scrollTo=KR921S_OQSHG)
* Spyder – comes with Anaconda
* [Visual Studio Code](https://code.visualstudio.com/)
* [PyCharm community](https://www.jetbrains.com/toolbox-app/)_____no_output_____## Python files
Python files are text files with .py extension
### Comments
Comments are pieces of code that are not going to be executed. In Python, everything after a hashtag (#) on the same line is a comment.
Comments are used to describe code: what a particular piece of code is doing and why you created it._____no_output_____
<code>
# this is a comment it will be ignored when running the python file_____no_output_____
</code>
## Variables
Assigning a value to a variable
_____no_output_____
<code>
my_variable = 3
print(my_variable)3
</code>
### Naming variables
Variable names cannot start with a number and cannot contain special characters or spaces, except for the underscore (_).
They should not be the name of a built-in Python function.
* variable1 -> <font color=green>this is OK</font>
* 1variable -> <font color=red>this is not OK</font>
* Important-variable! -> <font color=red>this is not OK</font>
* myVariable -> <font color=green>this is OK</font>
* my_variable -> <font color=green>this is OK</font>_____no_output_____## Data types
### Numbers
#### 1. integers (whole numbers)_____no_output_____
<code>
var2 = 2
my_variable + var2
print(my_variable + 4)
my_variable = 6
print(my_variable +4)7
10
</code>
we can assign the result to another variable_____no_output_____
<code>
result = my_variable + var2
print(result)8
</code>
#### 2. Floats (floating point numbers)_____no_output_____
<code>
double = 2.05
print(double)2.05
</code>
#### Mathematical operations
<font color= #00B19C>- Addition and subtraction</font>_____no_output_____
<code>
2 + 3_____no_output_____5 - 2_____no_output_____
</code>
<font color= #00B19C>- Multiplication and division</font>_____no_output_____
<code>
2 * 3_____no_output_____6 / 2_____no_output_____
</code>
<font color=red>Note: the result of division is float not int!</font>_____no_output_____<font color= #00B19C>- Exponential</font>_____no_output_____
<code>
2 ** 4
# 2**4 is equal to 2*2*2*2_____no_output_____
</code>
<font color= #00B19C>- Floor division</font>_____no_output_____
<code>
7 // 3_____no_output_____
</code>
7/3 is 2.3333 the floor division is giving the whole number 2 (how many times you can fit 3 in 7)_____no_output_____<font color= #00B19C>- Modulo</font>_____no_output_____
<code>
7.0 % 2_____no_output_____
</code>
7 // 3 is 2; modulo gives the remainder of that division, i.e. what is left after fitting 3 into 7 two times (7 = 2*3 + 1, so 7 % 3 == 1).
<font color= red>Note: Floor division and modulo results are integers if integers are used as arguments and float if one of the arguments is float</font>_____no_output_____### Special variables
#### 1. None
None represents the absence of a value: a variable set to None holds nothing._____no_output_____
<code>
var = None
print(var)None
</code>
#### 2. Boolean
<font color= red>Note: Boolean is a subtype of integer that can only take the values True (1) or False (0)</font>_____no_output_____
<code>
var = True # or 1
var2 = False # or 0
print(var)
print(var+1)True
2
</code>
### Check variable type
#### 1. type() function_____no_output_____
<code>
print(type(True))
print(type(1))
print(type(my_variable))<class 'bool'>
<class 'int'>
<class 'int'>
</code>
#### 2. isinstance() function_____no_output_____
<code>
print(isinstance(True, bool))
print(isinstance(False, int))
print(isinstance(1, int))True
True
True
</code>
## Comparing variables_____no_output_____
<code>
print(1 == 1)
print(1 == 2)
print(1 != 2)
print(1 < 2)
print(1 > 2)True
False
True
True
False
my_variable = None
print(my_variable == None)
print(my_variable is None)
my_variable = 1.5
print(my_variable == 1.5)
print(my_variable is 1.5)
print(my_variable is not None)True
True
True
False
True
</code>
<font color= red>Note: as a general rule of thumb, use "is" / "is not" when checking whether a variable is **None**, **True** or **False**; in all other cases use "=="</font>_____no_output_____### Converting Int to Float and vice versa
#### 1. float() function_____no_output_____
<code>
float(3)_____no_output_____
</code>
#### 2. int() function
<font color= red>Note: the int() conversion keeps only the whole-number part, truncating toward zero: int(2.9) = 2!</font>_____no_output_____
<code>
int(2.9)_____no_output_____
</code>
## Tuple
tuple is a collection which is ordered and unchangeable._____no_output_____
<code>
my_tuple = (3, 8, 5, 7, 5) _____no_output_____
</code>
Access tuple items by index.
<font color= red>Note: Python is a 0-indexed language = it starts counting from 0!</font>_____no_output_____
<code>
print(my_tuple[0])
print(my_tuple[2:4])
print(my_tuple[2:])
print(my_tuple[:2])
print(my_tuple[-1])
print(my_tuple[::3])
print(my_tuple[1::2])
print(my_tuple[::-1])
print(my_tuple[-2::]) 3
(5, 7)
(5, 7, 5)
(3, 8)
5
(3, 7)
(8, 7)
(5, 7, 5, 8, 3)
(7, 5)
</code>
### Tuple methods
Methods are functions inside an object (every variable in Python is an object)
#### 1.count() method - Counts number of occurrences of item in a tuple_____no_output_____
<code>
my_tuple.count(6)_____no_output_____
</code>
#### 2.index() method - Returns the index of first occurence of an item in a tuple_____no_output_____
<code>
my_tuple.index(5)_____no_output_____
</code>
#### Other operations with tuples
Adding tuples_____no_output_____
<code>
my_tuple + (7, 2, 1)_____no_output_____
</code>
Nested tuples = tuples containing tuples_____no_output_____
<code>
tuple_of_tuples = ((1,2,3),(3,4,5))_____no_output_____print(tuple_of_tuples)
print(tuple_of_tuples[0])
print(tuple_of_tuples[1][2])((1, 2, 3), (3, 4, 5))
(1, 2, 3)
5
</code>
## List
List is a collection which is ordered and changeable._____no_output_____
<code>
my_list = [3, 8, 5, 7, 5]_____no_output_____
</code>
Accessing list members is exactly the same as accessing tuple members, and the .count() and .index() methods work the same way with lists.
The difference is that list members can be changed_____no_output_____
<code>
my_list[1] = 9
print(my_list)[3, 9, 5, 7, 5]
my_tuple[1] = 9_____no_output_____
</code>
Lists have more methods than tuples._____no_output_____#### 1.count() method
same as with tuple
#### 2.index() method
same as with tuple
#### 3.reverse() method
reverses the list in place (similar result to my_list[::-1], which returns a new reversed list)_____no_output_____
<code>
my_list.reverse()
print(my_list)
my_list = my_list[::-1]
print(my_list)[5, 7, 5, 9, 3]
[3, 9, 5, 7, 5]
</code>
#### 4.sort() method
sorting the list from smallest to largest or alphabetically in case of text_____no_output_____
<code>
my_list.sort()
print(my_list)
my_list.sort(reverse=True)
print(my_list)[3, 5, 5, 7, 9]
[9, 7, 5, 5, 3]
</code>
#### 5.clear() method
removing everything from a list, equal to my_list = []_____no_output_____
<code>
my_list.clear()
print(my_list)[]
</code>
#### 6.remove() method
Removes the first item with the specified value_____no_output_____
<code>
my_list = [3, 8, 5, 7, 5]
my_list.remove(7)
print(my_list)[3, 8, 5, 5]
</code>
#### 7.pop() method
Removes the element at the specified position_____no_output_____
<code>
my_list.pop(0)
print(my_list)[8, 5, 5]
</code>
#### 8.copy() method
Returns a copy of the list_____no_output_____
<code>
my_list_copy = my_list.copy()
print(my_list_copy)[3, 8, 5, 7, 5]
</code>
what is the problem with my_other_list = my_list?_____no_output_____
<code>
my_other_list = my_list
print(my_other_list)[3, 8, 5, 7, 5]
my_list.pop(0)
print(my_list)
print(my_list_copy)
print(my_other_list)[8, 5, 7, 5]
[3, 8, 5, 7, 5]
[8, 5, 7, 5]
my_other_list.pop(0)
print(my_list)
print(my_list_copy)
print(my_other_list)[5, 7, 5]
[3, 8, 5, 7, 5]
[5, 7, 5]
</code>
#### 9.insert() method
Adds an element at the specified position, displacing the following members with 1 position_____no_output_____
<code>
my_list = [3, 8, 5, 7, 5]
my_list.insert(3, 1)
print(my_list)[3, 8, 5, 1, 7, 5]
</code>
#### 10.append() method
Adds an element at the end of the list_____no_output_____
<code>
my_list.append(6)
print(my_list)
my_list.append([6,7])
print(my_list)[3, 8, 5, 1, 7, 5, 6]
[3, 8, 5, 1, 7, 5, 6, [6, 7]]
</code>
#### 11.extend() method
Adds the elements of a list (or any iterable) to the end of the current list_____no_output_____
<code>
another_list = [2, 6, 8]
my_list.extend(another_list)
print(my_list)[3, 8, 5, 1, 7, 5, 6, 2, 6, 8]
another_tuple = (2, 6, 8)
my_list.extend(another_tuple)
print(my_list)[3, 8, 5, 1, 7, 5, 6, 2, 6, 8, 2, 6, 8]
</code>
## <font color= red>End of first session</font>_____no_output_____#### Set
Set is an unordered collection of unique objects._____no_output_____
<code>
my_set = {3, 8, 5, 7, 5}
print(my_set){8, 3, 5, 7}
print(my_set){8, 3, 5, 7}
</code>
### Set methods
.add() - Adds an element to the set
.clear() - Removes all the elements from the set
.copy() - Returns a copy of the set
.difference() - Returns a set containing the difference between two or more sets
.difference_update() - Removes the items in this set that are also included in another, specified set
.discard() - Remove the specified item
.intersection() - Returns a set, that is the intersection of two other sets
.intersection_update() - Removes the items in this set that are not present in other, specified set(s)
.isdisjoint() - Returns True if the two sets have no elements in common
.issubset() - Returns whether another set contains this set or not
.issuperset() - Returns whether this set contains another set or not
.pop() - Removes an element from the set
.remove() - Removes the specified element
.symmetric_difference() - Returns a set with the symmetric differences of two sets
.symmetric_difference_update() - Inserts the symmetric differences from this set and another
.union() - Return a set containing the union of sets
.update() - Update the set with the union of this set and others_____no_output_____
<code>
set_a = {1,2,3,4,5}
set_b = {4,5,6,7,8}
print(set_a.union(set_b))
print(set_a.intersection(set_b)){1, 2, 3, 4, 5, 6, 7, 8}
{4, 5}
</code>
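A couple more of the set methods from the list above, shown for illustration with the same `set_a` and `set_b`:
<code>
print(set_a.difference(set_b))             # {1, 2, 3} -> items only in set_a
print(set_a.symmetric_difference(set_b))   # {1, 2, 3, 6, 7, 8} -> items in exactly one of the sets
print(set_a.issubset({1, 2, 3, 4, 5, 6}))  # True
</code>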
## Converting tuple to list to set
We can convert any tuple or set to a list with the **list()** function.
We can convert any list or set to a tuple with the **tuple()** function.
We can convert any tuple or list to a set with the **set()** function (duplicates are removed)._____no_output_____
<code>
my_list = [3, 8, 5, 7, 5]
print(my_list)
my_tuple = tuple(my_list)
print(my_tuple)
my_set =set(my_list)
print(my_set)
my_list2 = list(my_set)
print(my_list2)[3, 8, 5, 7, 5]
(3, 8, 5, 7, 5)
{8, 3, 5, 7}
[8, 3, 5, 7]
</code>
These functions can be nested._____no_output_____
<code>
my_unique_list = list(set(my_list))
print(my_unique_list)[8, 3, 5, 7]
</code>
### Checking if something is in a list, set, tuple_____no_output_____
<code>
print(3 in my_set)
print(9 in my_set)
print(3 in my_list)
print(9 in my_tuple)True
False
True
False
</code>
## Dictionary
A dictionary is a collection which is unordered, changeable and indexed by key-value pairs._____no_output_____
<code>
my_dict = {1: 2.3,
2: 8.6}
print(my_dict[2])8.6
print(my_dict[3])_____no_output_____print(my_dict.keys())
print(my_dict.values())
print(1 in my_dict.keys())
print(2.3 in my_dict.values())
print(my_dict.items())dict_keys([1, 2])
dict_values([2.3, 8.6])
True
True
dict_items([(1, 2.3), (2, 8.6)])
</code>
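Dictionaries are changeable, so key-value pairs can be added, updated and removed. A short illustration (working on a copy named `prices` so the original `my_dict` stays unchanged for the cells below):
<code>
prices = dict(my_dict)    # copy so my_dict is not modified
prices[3] = 5.1           # add a new key-value pair
prices[1] = 7.0           # update the value stored under an existing key
del prices[2]             # remove a key-value pair
print(prices)             # {1: 7.0, 3: 5.1}
</code>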
## Strings
Strings are ordered sequences of characters; strings are unchangeable (immutable)._____no_output_____
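Because strings are immutable, a single character cannot be changed in place; a new string has to be built instead. A quick illustration (the variable `my_text` is only for this example):
<code>
my_text = 'python'
print(my_text[0])              # 'p' -- indexing works just like with tuples and lists
# my_text[0] = 'P'             # would raise TypeError: 'str' object does not support item assignment
my_text = 'P' + my_text[1:]    # build a new string instead
print(my_text)                 # 'Python'
</code>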
<code>
print(my_dict.get(2))8.6
my_string = 'this is string'
other_string = "this is string as well"
multilane_string = '''this is
a multi lane
string'''
print(my_string)
print(other_string)
print(multilane_string)this is string
this is string as well
this is
a multi lane
string
my_string = 'this "word" is in quotes'
my_other_string = "This is Maria's book"
print(my_string)
print(my_other_string)this "word" is in quotes
This is Maria's book
my_string = "this \"word\" is in quotes"
my_other_string = 'This is Maria\'s book'
print(my_string)
print(my_other_string)this "word" is in quotes
This is Maria's book
my_number = 9
my_string = '9'
print(my_number+1)
print(my_string+1)10
print(my_string+'1')
print(int(my_string)+1)
print(my_number+int('1'))91
10
10
</code>
Accessing string characters is exactly the same as with lists and tuples._____no_output_____
<code>
print(other_string)
print(other_string[0])
print(other_string[::-1])this is string as well
t
llew sa gnirts si siht
</code>
## String methods
.capitalize() - Converts the first character to upper case
.casefold() - Converts string into lower case
.center() - Returns a centered string
.count() - Returns the number of times a specified value occurs in a string
.encode() - Returns an encoded version of the string
.endswith() - Returns true if the string ends with the specified value
.expandtabs() - Sets the tab size of the string
.find() - Searches the string for a specified value and returns the position of where it was found
.format() - Formats specified values in a string
.format_map() - Formats specified values in a string
.index() - Searches the string for a specified value and returns the position of where it was found
.isalnum() - Returns True if all characters in the string are alphanumeric
.isalpha() - Returns True if all characters in the string are in the alphabet
.isdecimal() - Returns True if all characters in the string are decimals
.isdigit() - Returns True if all characters in the string are digits
.isidentifier() - Returns True if the string is an identifier
.islower() - Returns True if all characters in the string are lower case
.isnumeric() - Returns True if all characters in the string are numeric
.isprintable() - Returns True if all characters in the string are printable
.isspace() - Returns True if all characters in the string are whitespaces
.istitle() - Returns True if the string follows the rules of a title
.isupper() - Returns True if all characters in the string are upper case
.join() - Joins the elements of an iterable into one string, using this string as the separator
.ljust() - Returns a left justified version of the string
.lower() - Converts a string into lower case
.lstrip() - Returns a left trim version of the string
.maketrans() - Returns a translation table to be used in translations
.partition() - Returns a tuple where the string is parted into three parts
.replace() - Returns a string where a specified value is replaced with a specified value
.rfind() - Searches the string for a specified value and returns the last position of where it was found
.rindex() - Searches the string for a specified value and returns the last position of where it was found
.rjust() - Returns a right justified version of the string
.rpartition() - Returns a tuple where the string is parted into three parts
.rsplit() - Splits the string at the specified separator, and returns a list
.rstrip() - Returns a right trim version of the string
.split() - Splits the string at the specified separator, and returns a list
.splitlines() - Splits the string at line breaks and returns a list
.startswith() - Returns true if the string starts with the specified value
.strip() - Returns a trimmed version of the string
.swapcase() - Swaps cases, lower case becomes upper case and vice versa
.title() - Converts the first character of each word to upper case
.translate() - Returns a translated string
.upper() - Converts a string into upper case
.zfill() - Fills the string with a specified number of 0 values at the beginning_____no_output_____
<code>
my_string = ' string with spaces '
print(my_string)
my_stripped_string = my_string.strip()
print(my_stripped_string) string with spaces
string with spaces
print('ABC' == 'ABC')
print('ABC' == ' ABC ')True
False
list_of_words = my_string.split()
print(list_of_words)['string', 'with', 'spaces']
text = 'id1, id2, id3, id4'
ids_list = text.split(', ')
print(ids_list)['id1', 'id2', 'id3', 'id4']
new_text = ' / '.join(ids_list)
print(new_text)id1 / id2 / id3 / id4
xml_text = 'this is <body>text</body> with xml tags'
xml_text.find('<body>')_____no_output_____xml_body = xml_text[xml_text.find('<body>')+len('<body>'):xml_text.find('</body>')]
print(xml_body)text
</code>
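A few more of the string methods from the list above, as a quick sketch:
<code>
sentence = 'python for beginners'
print(sentence.upper())                       # 'PYTHON FOR BEGINNERS'
print(sentence.title())                       # 'Python For Beginners'
print(sentence.replace('python', 'Python'))   # 'Python for beginners'
print(sentence.startswith('py'))              # True
print(sentence.count('n'))                    # 3
</code>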
### Other operations with strings
combinig (adding) strings_____no_output_____
<code>
text = 'text1'+'text2'
print(text)text1text2
text = 'text1'*4
print(text)text1text1text1text1
</code>
Raw and formatted strings_____no_output_____
<code>
file_location = 'C:\Users\U6047694\Documents\job\Python_Projects\file.txt'_____no_output_____file_location = r'C:\Users\U6047694\Documents\job\Python_Projects\file.txt'
print(file_location)C:\Users\U6047694\Documents\job\Python_Projects\file.txt
var1 = 5
var2 = 6
print(f'Var1 is: {var1}, var2 is: {var2} and the sum is: {var1+var2}')
# this is the same as
print('Var1 is: '+str(var1)+', var2 is: '+str(var2)+' and the sum is: '+str(var1+var2))Var1 is: 5, var2 is: 6 and the sum is: 11
Var1 is: 5, var2 is: 6 and the sum is: 11
</code>
## Regular expressions in Python
Regular expressions in Python are provided by the separate **re** package; this package should be imported in order to access its functionality (methods).
### Methods in re package
* re.search() - Check if given pattern is present anywhere in input string. Output is a re.Match object, usable in conditional expressions
* re.fullmatch() - ensures pattern matches the entire input string
* re.compile() - Compile a pattern for reuse, outputs re.Pattern object
* re.sub() - search and replace
* re.escape() - automatically escape all metacharacters
* re.split() - split a string based on a RE; text matched by capturing groups will be part of the output
* re.findall() - returns all the matches as a list
* re.finditer() - iterator with re.Match object for each match
* re.subn() - gives tuple of modified string and number of substitutions
### re characters
'.' - Match any character except newline
'^' - Match the start of the string
'$' - Match the end of the string
'*' - Match 0 or more repetitions
'+' - Match 1 or more repetitions
'?' - Match 0 or 1 repetitions
### re set of characters
'[]' - Match a set of characters
'[a-z]' - Match any lowercase ASCII letter
'[lower-upper]' - Match a set of characters from lower to upper
'[^]' - Match characters NOT in a set
<a href="https://cheatography.com/davechild/cheat-sheets/regular-expressions/" >Cheet Sheet</a>
<a href="https://docs.python.org/3/library/re.html">re reference</a>_____no_output_____
<code>
import re
text = 'this is a sample text for re testing'
t_words = re.findall('t[a-z]* ', text)
print(t_words)
new_text = re.sub('t[a-z]* ', 'replace ', text)
print(new_text)['this ', 'text ']
replace is a sample replace for re testing
</code>
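The other re functions work in a similar way; for example re.search() and re.split() (a short sketch):
<code>
import re
text = 'sample123text456'
match = re.search('[0-9]+', text)   # first run of digits anywhere in the string
if match:
    print(match.group())            # '123'
parts = re.split('[0-9]+', text)    # split on runs of digits
print(parts)                        # ['sample', 'text', '']
</code>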
## Conditions
### IF, ELIF, ELSE conditions
if condition syntax:_____no_output_____
<code>
a = 3
if a == 2:
print('a is 2')_____no_output_____if a == 3:
print('a is 3')
else:
print('a is not 2')a is 3
if a == 2:
print('a is 2')
elif a == 3:
print('a is 3')
else:
print('a is not 2 or 3')a is 3
if a == 2:
print('a is 2')
if a == 3:
print('a is 3')
else:
print('a is not 2 or 3')a is 3
if a > 2:
print('a is bigger than 2')
if a < 4:
print('a is smaller than 4')
else:
print('a is something else')a is bigger than 2
a is smaller than 4
if a > 2:
print('a is bigger than 2')
elif a < 4:
print('a is smaller than 4')
else:
print('a is something else')a is bigger than 2
</code>
#### OR / AND in conditional statement_____no_output_____
<code>
b = 4
if a > 2 or b < 2:
print(f'a is: {a} b is: {b}.')a is: 3 b is: 4.
</code>
#### Nested conditional statements_____no_output_____
<code>
a = 2
if a == 2:
if b > a:
print('b is bigger than a')
else:
print('b is not bigger than a')
else:
print(f'a is {a}')b is bigger than a
</code>
## Loops
### FOR loop_____no_output_____
<code>
my_list = [1, 3, 5]
for item in my_list:
print(item)1
3
5
</code>
### WHILE loop_____no_output_____
<code>
a = 0
while a < 5:
a = a + 1 # or alternatively a += 1
print(a)1
2
3
4
5
</code>
You can put else statement in the while loop as well_____no_output_____
<code>
a = 3
while a < 5:
a = a + 1 # or alternatively a += 1
print(a)
else:
print('This is the end!')4
5
This is the end!
</code>
Loops can be nested as well_____no_output_____
<code>
columns = ['A', 'B', 'C']
rows = [1, 2, 3]
for column in columns:
print(column)
for row in rows:
print(row)A
1
2
3
B
1
2
3
C
1
2
3
</code>
Break and Continue. Break stops the loop; continue skips to the next item in the loop_____no_output_____
<code>
for column in columns:
print(column)
if column == 'B':
breakA
B
</code>
If we have nested loops, break will stop only the loop in which it is used_____no_output_____
<code>
columns = ['A', 'B', 'C']
rows = [1, 2, 3]
for column in columns:
print(column)
for row in rows:
print(row)
if row == 2:
breakA
1
2
B
1
2
C
1
2
i = 0
while i < 6:
i += 1
if i == 3:
continue
print(i)1
2
4
5
6
</code>
## Objects
Everything in Python is an object
_____no_output_____
<code>
class Player:
def __init__(self, name):
self.name = name
print(f'{self.name} is a Player')
def run(self):
return f'{self.name} is running'_____no_output_____player1 = Player('Messi')
player1.run()Messi is a Player
</code>
### Inheritance
Inheritance allows us to define a class that inherits all the methods and properties from another class.
Parent class is the class being inherited from, also called base class.
Child class is the class that inherits from another class, also called derived class._____no_output_____
<code>
class Futbol_player(Player):
def kick_ball(self):
return f'{self.name} is kicking the ball'
class Basketball_player(Player):
def catch_ball(self):
return f'{self.name} is catching the ball'_____no_output_____player2 = Futbol_player('Leo Messi')
player2.kick_ball()Leo Messi is a Player
player2.run()_____no_output_____player3 = Basketball_player('Pau Gasol')
player3.catch_ball()Pau Gasol is a Player
player3.kick_ball()_____no_output_____class a_list(list):
def get_3_element(self):
return self[3]
my_list = ['a', 'b', 'c', 'd']
my_a_list = a_list(['a', 'b', 'c', 'd'])
my_a_list.get_3_element()_____no_output_____my_list.get_3_element()_____no_output_____my_a_list.count('a')_____no_output_____
</code>
## Functions
A function is a block of code which only runs when it is called.
You can pass data, known as arguments or parameters, into a function.
A function can return data as a result or not._____no_output_____
<code>
def my_func(n):
'''this is power function'''
result = n*n
return result_____no_output_____
</code>
You can assign the result of a function to another variable_____no_output_____
<code>
power5 = my_func(5)
print(power5)25
</code>
A multiline string (docstring) can be used to describe a function; it can be accessed via the \__doc\__ attribute_____no_output_____
<code>
print(my_func.__doc__)this is power function
</code>
One function can return more than one value_____no_output_____
<code>
def my_function(a):
x = a*2
y = a+2
return x, y
variable1 = my_function(5)
print(variable1)
variable1, variable2 = my_function(5)
print(variable2)(10, 7)
7
</code>
A function can have anywhere from zero to many arguments_____no_output_____
<code>
def my_formula(a, b, c):
y = (a*b) + c
return y _____no_output_____
</code>
#### Positional arguments_____no_output_____
<code>
my_formula(2,3,4)_____no_output_____
</code>
#### Keyword arguments_____no_output_____
<code>
my_formula(c=4, a=2, b=3)_____no_output_____
</code>
You can pass both positional and keyword arguments to a function but the positional should always come first_____no_output_____
<code>
my_formula(4, c=4, b=3)_____no_output_____
</code>
#### Default arguments
These are arguments that are assigned a default value when declaring the function; if they are not specified in the call, they take the default value_____no_output_____
<code>
def my_formula(a, b, c=3):
y = (a*b) + c
return y
my_formula(2, 3, c=6)_____no_output_____
</code>
#### Arbitrary Arguments, \*args:
If you do not know how many arguments will be passed into your function, add a * before the parameter name in the function definition.
The function will receive a tuple of arguments, and they can be accessed accordingly:
_____no_output_____
<code>
def greeting(*args):
greeting = f'Hi to {", ".join(args[:-1])} and {args[-1]}'
print(greeting)
greeting('Joe', 'Ben', 'Bobby')Hi to Joe, Ben and Bobby
</code>
#### Arbitrary Keyword Arguments, \**kwargs
If you do not know how many keyword arguments will be passed into your function, add two asterisks (**) before the parameter name in the function definition.
This way the function will receive a dictionary of arguments, and can access the items accordingly_____no_output_____
<code>
def list_names(**kwargs):
for key, value in kwargs.items():
print(f'{key} is: {value}')
list_names(first_name='Jonny', family_name='Walker')first_name is: Jonny
family_name is: Walker
list_names(primer_nombre='Jose', segundo_nombre='Maria', primer_apellido='Peréz', segundo_apellido='García')primer_nombre is: Jose
segundo_nombre is: Maria
primer_apellido is: Peréz
segundo_apellido is: García
</code>
### Scope of the function
Scope of the function is what a function can see and use.
The function can use all global variables if there is no local variable with the same name assigned_____no_output_____
<code>
a = 'Hello'
def my_function():
print(a)
my_function()Hello
</code>
If we have local variable with the same name the function will use the local._____no_output_____
<code>
a = 'Hello'
def my_function():
a = 'Hi'
print(a)
my_function()Hi
a = 'Hello'
def my_function():
print(a)
a = 'Hi'
my_function()_____no_output_____
</code>
This is important, as it prevents us from accidentally changing global variables inside a function_____no_output_____
<code>
a = 'Hello'
def change_a():
a = a + 'Hi'
change_a()
print(a)_____no_output_____
</code>
A function cannot access local variables from another function._____no_output_____
<code>
def my_function():
b = 'Hi'
print(a)
def my_other_function():
print(b)
my_other_function()_____no_output_____
</code>
Local variables cannot be accessed from global environment_____no_output_____
<code>
print(b)_____no_output_____
</code>
Similar to variables you can use functions from the global environment or define them inside a parent function_____no_output_____
<code>
def add_function(a, b):
result = a + b
return result
def formula_function(a, b, c):
result = add_function(a, b) * c
return result
print(formula_function(2,3,4))20
</code>
We can use the result from one function as argument for another_____no_output_____
<code>
print(formula_function(add_function(4,5), 3, 2))24
</code>
We can use a function as an argument for another function or return a function from another function; Python also has anonymous (lambda) functions (a short sketch is shown after the recursion example below)._____no_output_____#### Recursive functions
A recursive function is a function that calls itself_____no_output_____
<code>
def factorial(x):
"""This is a recursive function
to find the factorial of an integer (factorial(4) = 4*3*2*1)"""
if x == 1:
return 1
else:
result = x * factorial(x-1)
return result
factorial(5)
# Another sketch of recursion: retry a web request by calling the function again.
# request() here stands for a hypothetical helper that may return None on failure.
import time
def extract(url):
    result = request(url)
    if result is None:
        time.sleep(360)
        result = extract(url)
    return result_____no_output_____
</code>
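As mentioned above, a function can be passed as an argument to another function, and Python also has anonymous (lambda) functions; a minimal sketch using the built-in sorted():
<code>
words = ['pear', 'fig', 'banana']
# a lambda is a small anonymous function; here it returns the length of a word
print(sorted(words, key=lambda word: len(word)))   # ['fig', 'pear', 'banana']
# the same thing with a named function passed as an argument
def word_length(word):
    return len(word)
print(sorted(words, key=word_length))              # ['fig', 'pear', 'banana']
</code>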
## Special functions (range, enumerate, zip)
### range() function - creates a sequence of numbers_____no_output_____
<code>
my_range = range(5)
print(my_range)
my_list = list(range(2, 10, 2))
my_listrange(0, 5)
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
for i in range(3, len(my_list), 2):
print(my_list[i])d
f
h
range_list = list(range(10))
print(range_list)[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code>
### enumerate() function - creates an index for the items of an iterable_____no_output_____
<code>
import time
my_list = list(range(10))
my_second_list = []
for index, value in enumerate(my_list):
time.sleep(1)
my_second_list.append(value+2)
print(f'{index+1} from {len(my_list)}')
print(my_second_list)1 from 10
2 from 10
3 from 10
4 from 10
5 from 10
6 from 10
7 from 10
8 from 10
9 from 10
10 from 10
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(my_second_list)[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
</code>
### zip() function - aggregates items from several iterables into tuples_____no_output_____
<code>
list1 = [2, 4, 6, 7, 8]
list2 = ['a', 'b', 'c', 'd', 'e']
for item1, item2 in zip(list1, list2):
print(f'item1 is:{item1} and item2 is: {item2}')item1 is:2 and item2 is: a
item1 is:4 and item2 is: b
item1 is:6 and item2 is: c
item1 is:7 and item2 is: d
item1 is:8 and item2 is: e
</code>
### Iterator objects_____no_output_____
<code>
string = 'abc'
it = iter(string)
it_____no_output_____next(it)_____no_output_____
</code>
## I/O working with files, working directory, projects
I/O = Input / Output. Loading data into Python, getting data out of Python_____no_output_____### Keyboard input
#### input() function_____no_output_____
<code>
str = input("Enter your input: ")
print("Received input is : "+ str)Enter your input: Hi!
Received input is : Hi!
</code>
### Console output
#### print() function_____no_output_____
<code>
print('Console output')Console output
</code>
### Working with text files
#### open() function
open(file_name [, access_mode][, buffering])
file_name = string with format 'C:/temp/my_file.txt'
access_mode = string with format: 'r', 'rb', 'w' etc
1. r = Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode.
2. rb = Opens a file for reading only in binary format.
3. r+ = Opens a file for both reading and writing.
4. rb+ = Opens a file for both reading and writing in binary format.
5. w = Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
6. wb = Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
7. w+ = Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.
8. wb+ = Opens a file for both writing and reading in binary format. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.
9. a = Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.
10. ab = Opens a file for appending in binary format. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.
11. a+ = Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.
12. ab+ = Opens a file for both appending and reading in binary format. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing._____no_output_____
<code>
txt_file = open('C:/temp/python test/txt_file.txt', 'w')_____no_output_____txt_file.write('some text')
txt_file.close()_____no_output_____txt_file = open('C:/temp/python test/txt_file.txt', 'r')
text = txt_file.read()
txt_file.close()
print(text)some text
txt_file = open('C:/temp/python test/txt_file.txt', 'a')
txt_file.write('\nsome more text')
txt_file.close()
_____no_output_____txt_file = open('C:/temp/python test/txt_file.txt', 'r')
txt_lines = txt_file.readlines()
print(type(txt_lines))
txt_file.close()
print(txt_lines)<class 'list'>
['some text\n', 'some more text']
txt_file = open('C:/temp/python test/txt_file.txt', 'r')
txt_line = txt_file.readline()
print(txt_line)
txt_line2 = txt_file.readline()
print(txt_line2)some text
some more text
</code>
### Deleting files
This requires the os library; it is part of Python but is not loaded by default, so to use it we should import it first_____no_output_____
<code>
import os
os.remove('C:/temp/python test/txt_file.txt')_____no_output_____if os.path.exists('C:/temp/python test/txt_file.txt'):
os.remove('C:/temp/python test/txt_file.txt')
else:
print('The file does not exist')The file does not exist
</code>
### Removing directories with os.rmdir()
To delete a directory with os.rmdir() the directory should be empty; we can check what is inside the directory with os.listdir() or os.walk()_____no_output_____
<code>
os.listdir('C:/temp/python test/')_____no_output_____os.walk('C:/temp/python test/')_____no_output_____for item in os.walk('C:/temp/python test/'):
print(item[0])
print(item[1])
print(item[2])C:/temp/python test/
['test dir']
['test file.txt', 'txt_file.txt']
C:/temp/python test/test dir
[]
[]
</code>
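Once the directory is empty it can be removed with os.rmdir(); a short sketch (the paths below are the example paths used above and are assumed to exist):
<code>
import os
empty_dir = 'C:/temp/python test/test dir'   # the empty sub-directory listed above
if os.path.exists(empty_dir) and not os.listdir(empty_dir):
    os.rmdir(empty_dir)                      # os.rmdir() only works on empty directories
print(os.listdir('C:/temp/python test/'))
</code>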
### Rename file or directory_____no_output_____
<code>
os.rename('C:/temp/python test/test file.txt', 'C:/temp/python test/test file renamed.txt')
os.listdir('C:/temp/python test/')_____no_output_____
</code>
### Open folder or file in Windows with the associated program_____no_output_____
<code>
os.startfile('C:/temp/python test/test file renamed.txt')_____no_output_____
</code>
## Working directory_____no_output_____
<code>
import os
os.getcwd()_____no_output_____os.chdir('C:/temp/python test/')
os.getcwd()_____no_output_____os.listdir()_____no_output_____
</code>
### Projects
A project is a folder organising your files; the top level is your working directory.
Good practices for organising your projects:
1. Create a separate folder for your python(.py) files, named without spaces (eg. py_files or python_files)
2. Add to your py_files folder a file called \_\_init\_\_.py; this is an empty python file that will allow you to import all files in this folder as packages.
3. It is a good idea to make your project folder a git repository so you can track your changes.
4. Put all your source files and result files in your project directory._____no_output_____## Packages
Packages (or libraries) are Python files with objects and functions that you can use; some of them are installed with Python and are part of the language, others have to be installed separately.
### Package managers
Package managers are helping you to install, update and uninstall packages.
#### pip package manager
This is the default python package manager
* pip install package_name==version - installing a package (a specific version is pinned with ==)
* pip freeze - get the list of installed packages
* pip freeze > requirements.txt - saves the list of installed packages as requirements.txt file
* pip install -r requirements.txt - install all packages from requirements.txt file
#### conda package manager
This is used by anaconda distributions of python
### The Python Standard Library - packages included in python
[Full list](https://docs.python.org/3/library/)
* os - Miscellaneous operating system interfaces
* time — Time access and conversions
* datetime — Basic date and time types
* math — Mathematical functions
* random — Generate pseudo-random numbers
* statistics — Mathematical statistics functions
* shutil — High-level file operations
* pickle — Python object serialization
* logging — Logging facility for Python
* tkinter — Python interface to Tcl/Tk (creating UI)
* venv — Creation of virtual environments
* re - Regular expression operations_____no_output_____#### time package examples_____no_output_____
<code>
import time
print('start')
time.sleep(3)
print('stop')start
stop
time_now = time.localtime()
print(time_now)time.struct_time(tm_year=2020, tm_mon=10, tm_mday=6, tm_hour=9, tm_min=45, tm_sec=26, tm_wday=1, tm_yday=280, tm_isdst=1)
</code>
convert time to string with form dd-mm-yyyy_____no_output_____
<code>
date = time.strftime('%d-%m-%Y', time_now)
print(date)
month = time.strftime('%B', time_now)
print(f'month is {month}')06-10-2020
month is October
</code>
convert string to time_____no_output_____
<code>
as_time = time.strptime("30 Nov 2020", "%d %b %Y")
print(as_time)time.struct_time(tm_year=2020, tm_mon=11, tm_mday=30, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=335, tm_isdst=-1)
</code>
#### datetime package examples_____no_output_____
<code>
import datetime
today = datetime.date.today()
print(today)
print(type(today))
week_ago = today - datetime.timedelta(days=7)
print(week_ago)
today_string = today.strftime('%Y/%m/%d')
print(today_string)
print(type(today_string))2020-10-06
<class 'datetime.date'>
2020-09-29
2020/10/06
<class 'str'>
</code>
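#### math, random and statistics package examples
These modules from the standard-library list above can be used in a similar way; a small sketch:
<code>
import math
import random
import statistics
print(math.sqrt(16))             # 4.0
print(random.randint(1, 6))      # a pseudo-random integer between 1 and 6
values = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(values))   # 5
print(statistics.stdev(values))  # sample standard deviation
</code>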
#### shutil package examples
functions for file copying and removal
* shutil.copy(src, dst)
* shutil.copytree(src, dst)
* shutil.rmtree(path)
* shutil.move(src, dst)_____no_output_____
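A short sketch of the first two functions (the source and destination paths here are illustrative assumptions):
<code>
import shutil
# copy a single file to a new location (both paths are assumed example paths)
shutil.copy('C:/temp/python test/example.xlsx', 'C:/temp/python test/example_copy.xlsx')
# move (or rename) a file
shutil.move('C:/temp/python test/example_copy.xlsx', 'C:/temp/python test/moved_example.xlsx')
</code>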
### How to import packages and functions from packages
* Import the whole package - in this case you can use all the functions of the package, including the functions in its modules; you can rename the package when importing_____no_output_____
<code>
import datetime
today = datetime.date.today()
print(today)
import datetime as dt
today = dt.date.today()
print(today)2020-10-06
2020-10-06
</code>
* import individual modules or individual functions - in this case you can use the functions directly as if they are defined in your script. <font color=red>Important: be aware of function shadowing - when you import functions with the same name from different packages or you have defined a function with the same name!</font>_____no_output_____
<code>
from datetime import date # importing date class
today = date.today()
print(today)
# Warning this is replacing date class with string!!!
date = '25/06/2012'
today = date.today()
print(today)2020-10-06
</code>
When importing individual functions or classes from the same package you can import them together_____no_output_____
<code>
from datetime import date, time, timedelta_____no_output_____
</code>
## Selected external packages
If you are using pip package manager all the packages available are installed from [PyPI](https://pypi.org/)_____no_output_____* [Biopython](https://biopython.org/) - contains parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank,...), access to online services (NCBI, Expasy,...) and more
* [SQLAlchemy](https://docs.sqlalchemy.org/en/13/) - connect to SQL database and query the database
* [cx_Oracle](https://oracle.github.io/python-cx_Oracle/) - connect to Oracle database
* [xmltodict](https://github.com/martinblech/xmltodict) - convert xml to Python dictionary with xml tags as keys and the information inside the tags as values_____no_output_____
<code>
import xmltodict
xml = """
<root xmlns="http://defaultns.com/"
xmlns:a="http://a.com/"
xmlns:b="http://b.com/">
<x>1</x>
<a:y>2</a:y>
<b:z>3</b:z>
</root>"""
xml_dict = xmltodict.parse(xml)
print(xml_dict.keys())
print(xml_dict['root'].keys())
print(xml_dict['root'].values())odict_keys(['root'])
odict_keys(['@xmlns', '@xmlns:a', '@xmlns:b', 'x', 'a:y', 'b:z'])
odict_values(['http://defaultns.com/', 'http://a.com/', 'http://b.com/', '1', '2', '3'])
</code>
### Pyautogui
[PyAutoGUI](https://pyautogui.readthedocs.io/en/latest/index.html) lets your Python scripts control the mouse and keyboard to automate interactions with other applications._____no_output_____
<code>
import pyautogui as pa
screen_width, screen_height = pa.size() # Get the size of the primary monitor.
print(f'screen size is {screen_width} x {screen_height}')
mouse_x, mouse_y = pa.position() # Get the XY position of the mouse.
print(f'mouse position is: {mouse_x}, {mouse_y}')
pa.moveTo(600, 500, duration=5) # Move the mouse to XY coordinates.
screen size is 1920 x 1080
mouse position is: 457, 278
import time
time.sleep(3)
pa.moveTo(600, 500)
pa.click()
pa.write('Hello world!', interval=0.25)
pa.alert('Script finished!') _____no_output_____pa.screenshot('C:/temp/python test/my_screenshot.png', region=(0,0, 300, 400))_____no_output_____location = pa.locateOnScreen('C:/temp/python test/python.PNG')
print(location)
image_center = pa.center(location)
print(image_center)
pa.moveTo(image_center, duration=3)Box(left=1669, top=131, width=59, height=54)
Point(x=1698, y=158)
</code>
### Pandas
[Pandas](https://pandas.pydata.org/docs/user_guide/index.html) - provides high-performance, easy-to-use data structures and data analysis tools for Python
[Pandas cheat sheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)
It provides 2 new data structures to Python
1. Series - is a one-dimensional labeled (indexed) array capable of holding any data type
2. DataFrame - is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table_____no_output_____
<code>
import pandas as pd
d = {'b': 1, 'a': 0, 'c': 2}
my_serie = pd.Series(d)
print(my_serie['a'])
print(type(my_serie))0
<class 'pandas.core.series.Series'>
list1 = [1, 2, 3]
list2 = [5, 6, 8]
list3 = [10, 12, 13]
df = pd.DataFrame({'b': list1, 'a': list2, 'c': list3})
df_____no_output_____print(df.index)
print(df.columns)
print(df.shape)RangeIndex(start=0, stop=3, step=1)
Index(['b', 'a', 'c'], dtype='object')
(3, 3)
df.columns = ['column1', 'column2', 'column3']
# alternative df.rename({'a':'column1'}) in case you dont want to rename all the columns
df_____no_output_____df.index = ['a', 'b', 'c']
df_____no_output_____
</code>
#### selecting values from dataframe
* select column_____no_output_____
<code>
df['column1']_____no_output_____
</code>
* select multiple columns_____no_output_____
<code>
df[['column3', 'column2']]_____no_output_____
</code>
* selecting row_____no_output_____
<code>
row1 = df.iloc[1]
row1_____no_output_____df.loc['a']_____no_output_____df.loc[['a', 'c']]_____no_output_____
</code>
* selecting values from single cell_____no_output_____
<code>
df['column1'][2]_____no_output_____df.iloc[1:2, 0:2]_____no_output_____
</code>
* selecting by column only rows meeting criteria (filtering the table)_____no_output_____
<code>
df[df['column1'] > 1]_____no_output_____
</code>
* select random columns by number (n) or as a fraction (frac)_____no_output_____
<code>
df.sample(n=2)_____no_output_____
</code>
#### adding new data to Data Frame
* add new column _____no_output_____
<code>
df['column4'] = [24, 12, 16]
df_____no_output_____df['column5'] = df['column1'] + df['column2']
df_____no_output_____df['column6'] = 7
df_____no_output_____
</code>
* add new row_____no_output_____
<code>
df = df.append({'column1':4, 'column2': 8, 'column3': 5, 'column4': 7, 'column5': 8, 'column6': 11}, ignore_index=True)
df_____no_output_____
</code>
* add new dataframe on the bottom (columns should have the same names in both dataframes)_____no_output_____
<code>
new_df = df.append(df, ignore_index=True)
new_df_____no_output_____
</code>
* merging data frames (similar to joins in SQL), default ‘inner’_____no_output_____
<code>
df2 = pd.DataFrame({'c1':[2, 3, 4, 5], 'c2': [4, 7, 11, 3]})
df2_____no_output_____merged_df = df.merge(df2, left_on='column1', right_on='c1', how='left')
merged_df_____no_output_____merged_df = pd.merge(df, df2, left_on='column1', right_on='c1')
merged_df_____no_output_____
</code>
* copy data frames - this is important to prevent warnings and artefacts_____no_output_____
<code>
df1 = pd.DataFrame({'a':[1,2,3,4,5], 'b':[6,7,8,9,10]})
df2 = df1[df1['a'] > 2].copy()
df2.iloc[0, 0] = 56
df2_____no_output_____
</code>
* change the data type in a column_____no_output_____
<code>
print(type(df1['a'][0]))
df1['a'] = df1['a'].astype('str')
print(type(df1['a'][0]))
df1<class 'str'>
<class 'str'>
</code>
* value counts - counts the number of appearances of a value in a column_____no_output_____
<code>
df1.iloc[0, 0] = '5'
df1
df1['a'].value_counts()_____no_output_____
</code>
* drop duplicates - removes duplicated rows in a data frame_____no_output_____
<code>
df1.iloc[0, 1] = 10
df1_____no_output_____df1.drop_duplicates(inplace=True)
df1_____no_output_____
</code>
#### Pandas I/O
* from / to excel file_____no_output_____
<code>
excel_sheet = pd.read_excel('C:/temp/python test/example.xlsx', sheet_name='Sheet1')_____no_output_____excel_sheet.head()_____no_output_____print(excel_sheet.shape)
print(excel_sheet['issue'][0])
excel_sheet = excel_sheet[~excel_sheet['keywords'].isna()]
print(excel_sheet.shape)(39, 10)
nan
(35, 10)
excel_sheet.to_excel('C:/temp/python test/example_1.xlsx', index=False)_____no_output_____
</code>
To create an Excel file with multiple sheets, the pandas ExcelWriter object should be used and the sheets assigned to it_____no_output_____
<code>
writer = pd.ExcelWriter('C:/temp/python test/example_2.xlsx')
df1.to_excel(writer, 'Sheet1', index = False)
excel_sheet.to_excel(writer, 'Sheet2', index = False)
writer.save()_____no_output_____
</code>
* from an HTML page
The pandas read_html method reads the whole page and creates a list of dataframes, one for every HTML table in the webpage_____no_output_____
<code>
codons = pd.read_html('https://en.wikipedia.org/wiki/DNA_codon_table')_____no_output_____codons[2]_____no_output_____
</code>
* from SQL database_____no_output_____
<code>
my_data = pd.read_sql('select column1, column2 from table1', connection)_____no_output_____
</code>
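The `connection` object above is not defined in this notebook; it can be, for example, a SQLAlchemy engine or a DB-API connection. A minimal sketch using sqlite3 from the standard library (the database file and table name are illustrative assumptions):
<code>
import sqlite3
import pandas as pd
# connect to a local SQLite database file (an assumed example file)
connection = sqlite3.connect('C:/temp/python test/example.db')
my_data = pd.read_sql('select column1, column2 from table1', connection)
connection.close()
</code>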
* from CSV file_____no_output_____
<code>
my_data = pd.read_csv('data.csv')_____no_output_____
</code>
### XLWings
Working with excel files
[Documentation](https://docs.xlwings.org/en/stable/)_____no_output_____
<code>
import xlwings as xw
workbook = xw.Book()
_____no_output_____new_sht = workbook.sheets.add('new_sheet')_____no_output_____new_sht.range('A1').value = 'Hi from Python'
new_sht.range('A1').column_width = 30
new_sht.range('A1').color = (0,255,255)_____no_output_____a2_value = new_sht.range('A2').value
print(a2_value)56.0
workbook.save('C:/temp/python test/new_file.xlsx')
workbook.close()_____no_output_____
</code>
## Errors and debugging
### Catching errors in Python with try: except:_____no_output_____
<code>
a = 7/0_____no_output_____import sys
try:
a = 7/0
except:
print(f'a cannot be calculated, {sys.exc_info()[0]}!')
a = Nonea cannot be calculated, <class 'ZeroDivisionError'>!
try:
'something'
except:
try:
'something else'
except:
'and another try'
finally:
print('Nothing is working :(')Nothing is working :(
</code>
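Instead of a bare except, it is usually better to catch a specific exception type; an else block runs only when no exception occurred and finally always runs. A short sketch:
<code>
try:
    a = 7 / 0
except ZeroDivisionError:
    print('cannot divide by zero')
    a = None
else:
    print('division worked')     # skipped here because an exception was raised
finally:
    print('this always runs')
</code>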
### Debugging in PyCharm_____no_output_____## Virtual environments
You can create a new virtual environment for every Python project; a virtual environment is an independent installation of Python, and you can install packages independently of your system Python._____no_output_____
| {
"repository": "avkch/Python-for-beginners",
"path": "Python for beginners.ipynb",
"matched_keywords": [
"BioPython",
"bioinformatics"
],
"stars": null,
"size": 203770,
"hexsha": "d0195815dbbeeb559ce6f2091c86b1fd0e41a2b0",
"max_line_length": 33092,
"avg_line_length": 29.1808678219,
"alphanum_fraction": 0.5301565491
} |
# Notebook from ads-ad-itcenter/qunomon.forked
Path: ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb
# test note
* Jupyter must be started as a container
* The whole test bed must already be up and running
_____no_output_____
<code>
!pip install --upgrade pip
!pip install --force-reinstall ../lib/ait_sdk-0.1.7-py3-none-any.whlRequirement already satisfied: pip in /usr/local/lib/python3.6/dist-packages (21.0.1)
Collecting pip
Downloading pip-21.1.1-py3-none-any.whl (1.5 MB)
[K |████████████████████████████████| 1.5 MB 4.0 MB/s eta 0:00:01
[?25hInstalling collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.0.1
Uninstalling pip-21.0.1:
Successfully uninstalled pip-21.0.1
Successfully installed pip-21.1.1
Processing /workdir/root/lib/ait_sdk-0.1.7-py3-none-any.whl
Collecting numpy<=1.19.3
Downloading numpy-1.19.3-cp36-cp36m-manylinux2010_x86_64.whl (14.9 MB)
[K |████████████████████████████████| 14.9 MB 2.8 MB/s eta 0:00:01 |███████▉ | 3.6 MB 3.7 MB/s eta 0:00:04 |███████████▎ | 5.2 MB 3.7 MB/s eta 0:00:03 |████████████████▌ | 7.6 MB 3.7 MB/s eta 0:00:02 |██████████████████▏ | 8.4 MB 3.7 MB/s eta 0:00:02 |███████████████████████████▍ | 12.7 MB 2.8 MB/s eta 0:00:01
[?25hCollecting py-cpuinfo<=7.0.0
Downloading py-cpuinfo-7.0.0.tar.gz (95 kB)
[K |████████████████████████████████| 95 kB 4.1 MB/s eta 0:00:011
[?25hCollecting keras<=2.4.3
Downloading Keras-2.4.3-py2.py3-none-any.whl (36 kB)
Collecting nbformat<=5.0.8
Downloading nbformat-5.0.8-py3-none-any.whl (172 kB)
[K |████████████████████████████████| 172 kB 10.6 MB/s eta 0:00:01
[?25hCollecting psutil<=5.7.3
Downloading psutil-5.7.3.tar.gz (465 kB)
[K |████████████████████████████████| 465 kB 8.4 MB/s eta 0:00:01 |█████████████████████████▍ | 368 kB 8.4 MB/s eta 0:00:01
[?25hCollecting nbconvert<=6.0.7
Using cached nbconvert-6.0.7-py3-none-any.whl (552 kB)
Collecting scipy>=0.14
Downloading scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl (25.9 MB)
[K |████████████████████████████████| 25.9 MB 6.4 MB/s eta 0:00:01 |██▊ | 2.2 MB 6.3 MB/s eta 0:00:04 |█████▏ | 4.2 MB 6.3 MB/s eta 0:00:04 |███████████████▊ | 12.7 MB 6.5 MB/s eta 0:00:03 |██████████████████████▎ | 18.0 MB 5.8 MB/s eta 0:00:02 |█████████████████████████████▋ | 24.0 MB 6.4 MB/s eta 0:00:01
[?25hCollecting h5py
Downloading h5py-3.1.0-cp36-cp36m-manylinux1_x86_64.whl (4.0 MB)
[K |████████████████████████████████| 4.0 MB 8.1 MB/s eta 0:00:01
[?25hCollecting pyyaml
Downloading PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl (640 kB)
[K |████████████████████████████████| 640 kB 6.3 MB/s eta 0:00:01
[?25hCollecting testpath
Using cached testpath-0.4.4-py2.py3-none-any.whl (163 kB)
Collecting jupyterlab-pygments
Using cached jupyterlab_pygments-0.1.2-py2.py3-none-any.whl (4.6 kB)
Collecting mistune<2,>=0.8.1
Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting jinja2>=2.4
Using cached Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
Collecting jupyter-core
Using cached jupyter_core-4.7.1-py3-none-any.whl (82 kB)
Collecting pandocfilters>=1.4.1
Using cached pandocfilters-1.4.3-py3-none-any.whl
Collecting entrypoints>=0.2.2
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting defusedxml
Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting traitlets>=4.2
Using cached traitlets-4.3.3-py2.py3-none-any.whl (75 kB)
Collecting bleach
Using cached bleach-3.3.0-py2.py3-none-any.whl (283 kB)
Collecting nbclient<0.6.0,>=0.5.0
Using cached nbclient-0.5.3-py3-none-any.whl (82 kB)
Collecting pygments>=2.4.1
Downloading Pygments-2.9.0-py3-none-any.whl (1.0 MB)
[K |████████████████████████████████| 1.0 MB 11.8 MB/s eta 0:00:01 |████████▌ | 266 kB 11.8 MB/s eta 0:00:01
[?25hCollecting MarkupSafe>=0.23
Using cached MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl (32 kB)
Collecting jupyter-client>=6.1.5
Using cached jupyter_client-6.1.12-py3-none-any.whl (112 kB)
Collecting async-generator
Using cached async_generator-1.10-py3-none-any.whl (18 kB)
Collecting nest-asyncio
Using cached nest_asyncio-1.5.1-py3-none-any.whl (5.0 kB)
Collecting tornado>=4.1
Using cached tornado-6.1-cp36-cp36m-manylinux2010_x86_64.whl (427 kB)
Collecting python-dateutil>=2.1
Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting pyzmq>=13
Using cached pyzmq-22.0.3-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)
Collecting ipython-genutils
Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting jsonschema!=2.5.0,>=2.4
Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
Collecting pyrsistent>=0.14.0
Using cached pyrsistent-0.17.3-cp36-cp36m-linux_x86_64.whl
Collecting importlib-metadata
Downloading importlib_metadata-4.0.1-py3-none-any.whl (16 kB)
Collecting attrs>=17.4.0
Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
[K |████████████████████████████████| 53 kB 2.0 MB/s eta 0:00:011
[?25hCollecting setuptools
Downloading setuptools-56.2.0-py3-none-any.whl (785 kB)
[K |████████████████████████████████| 785 kB 5.5 MB/s eta 0:00:01 |███████████████████████████████▎| 768 kB 5.5 MB/s eta 0:00:01
[?25hCollecting six>=1.11.0
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting decorator
Downloading decorator-5.0.7-py3-none-any.whl (8.8 kB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting webencodings
Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting cached-property
Downloading cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)
Collecting typing-extensions>=3.6.4
Downloading typing_extensions-3.10.0.0-py3-none-any.whl (26 kB)
Collecting zipp>=0.5
Downloading zipp-3.4.1-py3-none-any.whl (5.2 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Building wheels for collected packages: psutil, py-cpuinfo
Building wheel for psutil (setup.py) ... [?25ldone
[?25h Created wheel for psutil: filename=psutil-5.7.3-cp36-cp36m-linux_x86_64.whl size=288610 sha256=add7bf93ebb9ecbd8650a6cf9469361f154e3e577a660725a014c35bae9e2b35
Stored in directory: /root/.cache/pip/wheels/fa/ad/67/90bbaacdcfe970960dd5158397f23a6579b51d853720d7856d
Building wheel for py-cpuinfo (setup.py) ... [?25ldone
[?25h Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-py3-none-any.whl size=20299 sha256=b2ec8e860f6c76a428e7e43a1be32903a0b2061998a1606cd0dd1d40219c59a1
Stored in directory: /root/.cache/pip/wheels/46/6d/cc/73a126dc2e09fe56fcec0a7386d255762611fbed1c86d3bbcc
Successfully built psutil py-cpuinfo
Installing collected packages: zipp, typing-extensions, six, ipython-genutils, decorator, traitlets, setuptools, pyrsistent, importlib-metadata, attrs, tornado, pyzmq, python-dateutil, pyparsing, jupyter-core, jsonschema, webencodings, pygments, packaging, numpy, nest-asyncio, nbformat, MarkupSafe, jupyter-client, cached-property, async-generator, testpath, scipy, pyyaml, pandocfilters, nbclient, mistune, jupyterlab-pygments, jinja2, h5py, entrypoints, defusedxml, bleach, py-cpuinfo, psutil, nbconvert, keras, ait-sdk
Attempting uninstall: zipp
Found existing installation: zipp 3.4.0
Uninstalling zipp-3.4.0:
Successfully uninstalled zipp-3.4.0
Attempting uninstall: six
Found existing installation: six 1.15.0
Uninstalling six-1.15.0:
Successfully uninstalled six-1.15.0
Attempting uninstall: ipython-genutils
Found existing installation: ipython-genutils 0.2.0
Uninstalling ipython-genutils-0.2.0:
Successfully uninstalled ipython-genutils-0.2.0
Attempting uninstall: decorator
Found existing installation: decorator 4.4.2
Uninstalling decorator-4.4.2:
Successfully uninstalled decorator-4.4.2
Attempting uninstall: traitlets
Found existing installation: traitlets 4.3.3
Uninstalling traitlets-4.3.3:
Successfully uninstalled traitlets-4.3.3
Attempting uninstall: setuptools
Found existing installation: setuptools 54.1.2
Uninstalling setuptools-54.1.2:
Successfully uninstalled setuptools-54.1.2
Attempting uninstall: pyrsistent
Found existing installation: pyrsistent 0.17.3
Uninstalling pyrsistent-0.17.3:
Successfully uninstalled pyrsistent-0.17.3
Attempting uninstall: importlib-metadata
Found existing installation: importlib-metadata 3.1.1
Uninstalling importlib-metadata-3.1.1:
Successfully uninstalled importlib-metadata-3.1.1
Attempting uninstall: attrs
Found existing installation: attrs 20.3.0
Uninstalling attrs-20.3.0:
Successfully uninstalled attrs-20.3.0
Attempting uninstall: tornado
Found existing installation: tornado 6.1
Uninstalling tornado-6.1:
Successfully uninstalled tornado-6.1
Attempting uninstall: pyzmq
Found existing installation: pyzmq 22.0.3
Uninstalling pyzmq-22.0.3:
Successfully uninstalled pyzmq-22.0.3
Attempting uninstall: python-dateutil
Found existing installation: python-dateutil 2.8.1
Uninstalling python-dateutil-2.8.1:
Successfully uninstalled python-dateutil-2.8.1
Attempting uninstall: pyparsing
Found existing installation: pyparsing 2.4.7
Uninstalling pyparsing-2.4.7:
Successfully uninstalled pyparsing-2.4.7
Attempting uninstall: jupyter-core
Found existing installation: jupyter-core 4.7.1
Uninstalling jupyter-core-4.7.1:
Successfully uninstalled jupyter-core-4.7.1
Attempting uninstall: jsonschema
Found existing installation: jsonschema 3.2.0
Uninstalling jsonschema-3.2.0:
Successfully uninstalled jsonschema-3.2.0
Attempting uninstall: webencodings
Found existing installation: webencodings 0.5.1
Uninstalling webencodings-0.5.1:
Successfully uninstalled webencodings-0.5.1
Attempting uninstall: pygments
Found existing installation: Pygments 2.8.1
Uninstalling Pygments-2.8.1:
Successfully uninstalled Pygments-2.8.1
Attempting uninstall: packaging
Found existing installation: packaging 20.9
Uninstalling packaging-20.9:
Successfully uninstalled packaging-20.9
Attempting uninstall: numpy
Found existing installation: numpy 1.18.5
Uninstalling numpy-1.18.5:
Successfully uninstalled numpy-1.18.5
Attempting uninstall: nest-asyncio
Found existing installation: nest-asyncio 1.5.1
Uninstalling nest-asyncio-1.5.1:
Successfully uninstalled nest-asyncio-1.5.1
Attempting uninstall: nbformat
Found existing installation: nbformat 5.1.2
Uninstalling nbformat-5.1.2:
Successfully uninstalled nbformat-5.1.2
Attempting uninstall: MarkupSafe
Found existing installation: MarkupSafe 1.1.1
Uninstalling MarkupSafe-1.1.1:
Successfully uninstalled MarkupSafe-1.1.1
Attempting uninstall: jupyter-client
Found existing installation: jupyter-client 6.1.12
Uninstalling jupyter-client-6.1.12:
Successfully uninstalled jupyter-client-6.1.12
Attempting uninstall: async-generator
Found existing installation: async-generator 1.10
Uninstalling async-generator-1.10:
Successfully uninstalled async-generator-1.10
Attempting uninstall: testpath
Found existing installation: testpath 0.4.4
Uninstalling testpath-0.4.4:
Successfully uninstalled testpath-0.4.4
Attempting uninstall: scipy
Found existing installation: scipy 1.4.1
Uninstalling scipy-1.4.1:
Successfully uninstalled scipy-1.4.1
Attempting uninstall: pandocfilters
Found existing installation: pandocfilters 1.4.3
Uninstalling pandocfilters-1.4.3:
Successfully uninstalled pandocfilters-1.4.3
Attempting uninstall: nbclient
Found existing installation: nbclient 0.5.3
Uninstalling nbclient-0.5.3:
Successfully uninstalled nbclient-0.5.3
Attempting uninstall: mistune
Found existing installation: mistune 0.8.4
Uninstalling mistune-0.8.4:
Successfully uninstalled mistune-0.8.4
Attempting uninstall: jupyterlab-pygments
Found existing installation: jupyterlab-pygments 0.1.2
Uninstalling jupyterlab-pygments-0.1.2:
Successfully uninstalled jupyterlab-pygments-0.1.2
Attempting uninstall: jinja2
Found existing installation: Jinja2 2.11.3
Uninstalling Jinja2-2.11.3:
Successfully uninstalled Jinja2-2.11.3
Attempting uninstall: h5py
Found existing installation: h5py 2.10.0
Uninstalling h5py-2.10.0:
Successfully uninstalled h5py-2.10.0
Attempting uninstall: entrypoints
Found existing installation: entrypoints 0.3
Uninstalling entrypoints-0.3:
Successfully uninstalled entrypoints-0.3
Attempting uninstall: defusedxml
Found existing installation: defusedxml 0.7.1
Uninstalling defusedxml-0.7.1:
Successfully uninstalled defusedxml-0.7.1
Attempting uninstall: bleach
Found existing installation: bleach 3.3.0
Uninstalling bleach-3.3.0:
Successfully uninstalled bleach-3.3.0
Attempting uninstall: nbconvert
Found existing installation: nbconvert 6.0.7
Uninstalling nbconvert-6.0.7:
Successfully uninstalled nbconvert-6.0.7
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.3.0 requires h5py<2.11.0,>=2.10.0, but you have h5py 3.1.0 which is incompatible.
tensorflow 2.3.0 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.3 which is incompatible.
tensorflow 2.3.0 requires scipy==1.4.1, but you have scipy 1.5.4 which is incompatible.[0m
Successfully installed MarkupSafe-1.1.1 ait-sdk-0.1.7 async-generator-1.10 attrs-21.2.0 bleach-3.3.0 cached-property-1.5.2 decorator-5.0.7 defusedxml-0.7.1 entrypoints-0.3 h5py-3.1.0 importlib-metadata-4.0.1 ipython-genutils-0.2.0 jinja2-2.11.3 jsonschema-3.2.0 jupyter-client-6.1.12 jupyter-core-4.7.1 jupyterlab-pygments-0.1.2 keras-2.4.3 mistune-0.8.4 nbclient-0.5.3 nbconvert-6.0.7 nbformat-5.0.8 nest-asyncio-1.5.1 numpy-1.19.3 packaging-20.9 pandocfilters-1.4.3 psutil-5.7.3 py-cpuinfo-7.0.0 pygments-2.9.0 pyparsing-2.4.7 pyrsistent-0.17.3 python-dateutil-2.8.1 pyyaml-5.4.1 pyzmq-22.0.3 scipy-1.5.4 setuptools-56.2.0 six-1.16.0 testpath-0.4.4 tornado-6.1 traitlets-4.3.3 typing-extensions-3.10.0.0 webencodings-0.5.1 zipp-3.4.1
[33mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv[0m
from pathlib import Path
import pprint
from ait_sdk.test.hepler import Helper
import json_____no_output_____# settings cell
# mounted dir
root_dir = Path('/workdir/root/ait')
ait_name='eval_metamorphic_test_tf1.13'
ait_version='0.1'
ait_full_name=f'{ait_name}_{ait_version}'
ait_dir = root_dir / ait_full_name
td_name=f'{ait_name}_test'
# Root folder (on the docker host side) that stores the assets used for inventory registration
current_dir = %pwd
with open(f'{current_dir}/config.json', encoding='utf-8') as f:
json_ = json.load(f)
root_dir = json_['host_ait_root_dir']
is_container = json_['is_container']
invenotory_root_dir = f'{root_dir}\\ait\\{ait_full_name}\\local_qai\\inventory'
# entry point address
# The port number differs depending on whether this runs as a container, so switch accordingly
if is_container:
backend_entry_point = 'http://host.docker.internal:8888/qai-testbed/api/0.0.1'
ip_entry_point = 'http://host.docker.internal:8888/qai-ip/api/0.0.1'
else:
backend_entry_point = 'http://host.docker.internal:5000/qai-testbed/api/0.0.1'
ip_entry_point = 'http://host.docker.internal:6000/qai-ip/api/0.0.1'
# AIT deploy flag
# Once this has been done, it does not need to be done again
is_init_ait = True
# Inventory registration flag
# Once this has been done, it does not need to be done again
is_init_inventory = True
_____no_output_____helper = Helper(backend_entry_point=backend_entry_point,
ip_entry_point=ip_entry_point,
ait_dir=ait_dir,
ait_full_name=ait_full_name)_____no_output_____# health check
helper.get_bk('/health-check')
helper.get_ip('/health-check')<Response [200]>
{'Code': 0, 'Message': 'alive.'}
<Response [200]>
{'Code': 0, 'Message': 'alive.'}
# create ml-component
res = helper.post_ml_component(name=f'MLComponent_{ait_full_name}', description=f'Description of {ait_full_name}', problem_domain=f'ProbremDomain of {ait_full_name}')
helper.set_ml_component_id(res['MLComponentId'])<Response [200]>
{'MLComponentId': 13,
'Result': {'Code': 'P22000', 'Message': 'add ml-component success.'}}
# deploy AIT
if is_init_ait:
helper.deploy_ait_non_build()
else:
print('skip deploy AIT')<Response [400]>
{'Code': 'T54000',
'Message': 'already exist ait = eval_metamorphic_test_tf1.13-0.1'}
<Response [200]>
{'Code': 'D00001', 'Message': 'Deploy success'}
res = helper.get_data_types()
model_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'model'][0]['Id']
dataset_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'dataset'][0]['Id']
res = helper.get_file_systems()
unix_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'UNIX_FILE_SYSTEM'][0]['Id']
windows_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'WINDOWS_FILE'][0]['Id']_____no_output_____# add inventories
if is_init_inventory:
inv1_name = helper.post_inventory('train_image', dataset_data_type_id, windows_file_system_id,
f'{invenotory_root_dir}\\mnist_dataset\\mnist_dataset.zip',
'MNIST_dataset are train image, train label, test image, test label', ['zip'])
inv2_name = helper.post_inventory('mnist_model', dataset_data_type_id, windows_file_system_id,
f'{invenotory_root_dir}\\mnist_model\\model_mnist.zip',
'MNIST_model', ['zip'])
else:
print('skip add inventories')<Response [200]>
{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}
<Response [200]>
{'result': {'Code': 'I22000', 'Message': 'append Inventory success.'}}
# get ait_json and inventory_jsons
res_json = helper.get_bk('/QualityMeasurements/RelationalOperators', is_print_json=False).json()
eq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '=='][0])
nq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '!='][0])
gt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>'][0])
ge_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>='][0])
lt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<'][0])
le_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<='][0])
res_json = helper.get_bk('/testRunners', is_print_json=False).json()
ait_json = [j for j in res_json['TestRunners'] if j['Name'] == ait_name][-1]
inv_1_json = helper.get_inventory(inv1_name)
inv_2_json = helper.get_inventory(inv2_name)<Response [200]>
<Response [200]>
<Response [200]>
<Response [200]>
# add teast_descriptions
helper.post_td(td_name, ait_json['QualityDimensionId'],
quality_measurements=[
{"Id":ait_json['Report']['Measures'][0]['Id'], "Value":"0.25", "RelationalOperatorId":lt_id, "Enable":True}
],
target_inventories=[
{"Id":1, "InventoryId": inv_1_json['Id'], "TemplateInventoryId": ait_json['TargetInventories'][0]['Id']},
{"Id":2, "InventoryId": inv_2_json['Id'], "TemplateInventoryId": ait_json['TargetInventories'][1]['Id']}
],
test_runner={
"Id":ait_json['Id'],
"Params":[
{"TestRunnerParamTemplateId":ait_json['ParamTemplates'][0]['Id'], "Value":"10"},
{"TestRunnerParamTemplateId":ait_json['ParamTemplates'][1]['Id'], "Value":"500"},
{"TestRunnerParamTemplateId":ait_json['ParamTemplates'][2]['Id'], "Value":"train"}
]
})<Response [200]>
{'Result': {'Code': 'T22000', 'Message': 'append test description success.'}}
# get test_description_jsons
td_1_json = helper.get_td(td_name)<Response [200]>
# run test_descriptions
helper.post_run_and_wait(td_1_json['Id'])<Response [200]>
{'Job': {'Id': '13', 'StartDateTime': '2021-05-10 14:07:31.784737+09:00'},
'Result': {'Code': 'R12000', 'Message': 'job launch success.'}}
[{'Id': 13,
'Result': 'OK',
'ResultDetail': 'average : OK.\n',
'Status': 'DONE',
'TestDescriptionID': 14}]
res_json = helper.get_td_detail(td_1_json['Id'])
pprint.pprint(res_json)<Response [200]>
{'Result': {'Code': 'T32000', 'Message': 'get detail success.'},
'TestDescriptionDetail': {'Id': 14,
'Name': 'eval_metamorphic_test_tf1.13_test',
'Opinion': '',
'QualityDimension': {'Id': 6,
'Name': 'Robustness_of_trained_model'},
'QualityMeasurements': [{'Description': 'Average '
'number of '
'NG output',
'Enable': True,
'Id': 31,
'Name': 'average',
'RelationalOperatorId': 4,
'Structure': 'single',
'Value': '0.25'}],
'Star': False,
'TargetInventories': [{'DataType': {'Id': 1,
'Name': 'dataset'},
'Description': 'MNIST_dataset '
'are train '
'image, train '
'label, test '
'image, test '
'label',
'Id': 29,
'Name': 'eval_metamorphic_test_tf1.13_0.1_train_image',
'TemplateInventoryId': 18},
{'DataType': {'Id': 1,
'Name': 'dataset'},
'Description': 'MNIST_model',
'Id': 30,
'Name': 'eval_metamorphic_test_tf1.13_0.1_mnist_model',
'TemplateInventoryId': 19}],
'TestDescriptionResult': {'Detail': 'average : '
'OK.\n',
'Downloads': [{'Description': 'deep_saucer_log',
'DownloadURL': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/466',
'FileName': 'deep.log',
'Id': 31,
'Name': 'DeepLog'},
{'Description': 'AIT_log',
'DownloadURL': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/467',
'FileName': 'ait.log',
'Id': 32,
'Name': 'Log'}],
'Graphs': [{'Description': 'number '
'of '
'NG '
'output',
'FileName': 'result.csv',
'Graph': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/465',
'GraphType': 'table',
'Id': 413,
'Name': 'result',
'ReportIndex': 1,
'ReportName': 'result',
'ReportRequired': True}],
'LogFile': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/468',
'Summary': 'OK'},
'TestRunner': {'Author': 'AIST',
'Description': 'Metamorphic test.\n'
'Make sure can be '
'classified in the '
'same result as the '
'original class be '
'added a little '
'processing to the '
'original data.',
'Email': '',
'Id': 9,
'LandingPage': '',
'Name': 'eval_metamorphic_test_tf1.13',
'Params': [{'Id': 48,
'Name': 'Lap',
'TestRunnerParamTemplateId': 36,
'Value': '10'},
{'Id': 49,
'Name': 'NumTest',
'TestRunnerParamTemplateId': 37,
'Value': '500'},
{'Id': 50,
'Name': 'mnist_type',
'TestRunnerParamTemplateId': 38,
'Value': 'train'}],
'Quality': 'https://airc.aist.go.jp/aiqm/quality/internal/Robustness_of_trained_model',
'Version': '0.1'}}}
# generate report
res = helper.post_report(td_1_json['Id'])<Response [200]>
{'OutParams': {'ReportUrl': 'http://127.0.0.1:8888/qai-testbed/api/0.0.1/download/469'},
'Result': {'Code': 'D12000', 'Message': 'command invoke success.'}}
</code>
| {
"repository": "ads-ad-itcenter/qunomon.forked",
"path": "ait_repository/test/tests/eval_metamorphic_test_tf1.13_0.1.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 16,
"size": 36711,
"hexsha": "d01a43ff7ffd3c40bc38f05c6a62c0ecf9f63182",
"max_line_length": 746,
"avg_line_length": 48.4313984169,
"alphanum_fraction": 0.4910517284
} |
# Notebook from yelabucsf/scrna-parameter-estimation
Path: analysis/simulation/estimator_validation.ipynb
# Estimator validation
This notebook contains code to generate Figure 2 of the paper.
This notebook also serves to compare the estimates of the re-implemented scmemo with sceb package from Vasilis.
_____no_output_____
<code>
import pandas as pd
import matplotlib.pyplot as plt
import scanpy as sc
import scipy as sp
import itertools
import numpy as np
import scipy.stats as stats
from scipy.integrate import dblquad
import seaborn as sns
from statsmodels.stats.multitest import fdrcorrection
import imp
pd.options.display.max_rows = 999
pd.set_option('display.max_colwidth', -1)
import pickle as pkl
import time_____no_output_____import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
mpl.rcParams['ps.fonttype'] = 42
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'x-small',
'axes.labelsize': 'medium',
'axes.titlesize':'medium',
'figure.titlesize':'medium',
'xtick.labelsize':'xx-small',
'ytick.labelsize':'xx-small'}
pylab.rcParams.update(params)
_____no_output_____import sys
sys.path.append('/data/home/Github/scrna-parameter-estimation/dist/schypo-0.0.0-py3.7.egg')
import schypo
import schypo.simulate as simulate/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/scanpy/api/__init__.py:6: FutureWarning:
In a future version of Scanpy, `scanpy.api` will be removed.
Simply use `import scanpy as sc` and `import scanpy.external as sce` instead.
FutureWarning,
import sys
sys.path.append('/data/home/Github/single_cell_eb/')
sys.path.append('/data/home/Github/single_cell_eb/sceb/')
import scdd_____no_output_____data_path = '/data/parameter_estimation/'
fig_path = '/data/home/Github/scrna-parameter-estimation/figures/fig3/'_____no_output_____
</code>
### Check 1D estimates of `sceb` with `scmemo`
Using the Poisson model. The outputs should be identical; this serves as a check of the implementation. _____no_output_____
<code>
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(100, 20))
adata = sc.AnnData(data)
size_factors = scdd.dd_size_factor(adata)
Nr = data.sum(axis=1).mean()_____no_output______, M_dd = scdd.dd_1d_moment(adata, size_factor=scdd.dd_size_factor(adata), Nr=Nr)
var_scdd = scdd.M_to_var(M_dd)
print(var_scdd)_____no_output_____imp.reload(estimator)
mean_scmemo, var_scmemo = estimator._poisson_1d(data, data.shape[0], estimator._estimate_size_factor(data))
print(var_scmemo)_____no_output_____df = pd.DataFrame()
df['size_factor'] = size_factors
df['inv_size_factor'] = 1/size_factors
df['inv_size_factor_sq'] = 1/size_factors**2
df['expr'] = data[:, 0].todense().A1
precomputed_size_factors = df.groupby('expr')['inv_size_factor'].mean(), df.groupby('expr')['inv_size_factor_sq'].mean()_____no_output_____imp.reload(estimator)
expr, count = np.unique(data[:, 0].todense().A1, return_counts=True)
print(estimator._poisson_1d((expr, count), data.shape[0], precomputed_size_factors))[0.5217290008068085, 0.9860336223993191]
</code>
### Check 2D estimates of `sceb` and `scmemo`
Using the Poisson model. The outputs should be identical; this serves as a check of the implementation. _____no_output_____
<code>
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(1000, 4))
adata = sc.AnnData(data)
size_factors = scdd.dd_size_factor(adata)_____no_output_____mean_scdd, cov_scdd, corr_scdd = scdd.dd_covariance(adata, size_factors)
print(cov_scdd)[[ 9.66801891 -1.45902975 -1.97166503 -10.13305759]
[ -1.45902975 3.37530982 -0.83509601 -2.76389597]
[ -1.97166503 -0.83509601 2.51976446 -2.9553916 ]
[-10.13305759 -2.76389597 -2.9553916 1.48619472]]
imp.reload(estimator)
cov_scmemo = estimator._poisson_cov(data, data.shape[0], size_factors, idx1=[0, 1, 2], idx2=[1, 2, 3])
print(cov_scmemo)[[ -1.45902975 -1.97166503 -10.13305759]
[ 3.37530982 -0.83509601 -2.76389597]
[ -0.83509601 2.51976446 -2.9553916 ]]
expr, count = np.unique(data[:, :2].toarray(), return_counts=True, axis=0)
df = pd.DataFrame()
df['size_factor'] = size_factors
df['inv_size_factor'] = 1/size_factors
df['inv_size_factor_sq'] = 1/size_factors**2
df['expr1'] = data[:, 0].todense().A1
df['expr2'] = data[:, 1].todense().A1
precomputed_size_factors = df.groupby(['expr1', 'expr2'])['inv_size_factor'].mean(), df.groupby(['expr1', 'expr2'])['inv_size_factor_sq'].mean()/home/ssm-user/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning: divide by zero encountered in true_divide
"""
/home/ssm-user/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning: divide by zero encountered in true_divide
cov_scmemo = estimator._poisson_cov((expr[:, 0], expr[:, 1], count), data.shape[0], size_factor=precomputed_size_factors)
print(cov_scmemo)-1.4590297282462616
</code>
### Extract parameters from interferon dataset_____no_output_____
<code>
adata = sc.read(data_path + 'interferon_filtered.h5ad')
adata = adata[adata.obs.cell_type == 'CD4 T cells - ctrl']
data = adata.X.copy()
relative_data = data.toarray()/data.sum(axis=1)_____no_output_____q = 0.07
x_param, z_param, Nc, good_idx = schypo.simulate.extract_parameters(adata.X, q=q, min_mean=q)_____no_output_____imp.reload(simulate)
transcriptome = simulate.simulate_transcriptomes(
n_cells=10000,
means=z_param[0],
variances=z_param[1],
corr=x_param[2],
Nc=Nc)
relative_transcriptome = transcriptome/transcriptome.sum(axis=1).reshape(-1, 1)
qs, captured_data = simulate.capture_sampling(transcriptome, q=q, q_sq=q**2+1e-10)_____no_output_____def qqplot(x, y, s=1):
plt.scatter(
np.quantile(x, np.linspace(0, 1, 1000)),
np.quantile(y, np.linspace(0, 1, 1000)),
s=s)
plt.plot(x, x, lw=1, color='m')_____no_output_____plt.figure(figsize=(8, 2));
plt.subplots_adjust(wspace=0.2);
plt.subplot(1, 3, 1);
sns.distplot(np.log(captured_data.mean(axis=0)), hist=False, label='Simulated')
sns.distplot(np.log(data[:, good_idx].toarray().mean(axis=0)), hist=False, label='Real')
plt.xlabel('Log(mean)')
plt.subplot(1, 3, 2);
sns.distplot(np.log(captured_data.var(axis=0)), hist=False)
sns.distplot(np.log(data[:, good_idx].toarray().var(axis=0)), hist=False)
plt.xlabel('Log(variance)')
plt.subplot(1, 3, 3);
sns.distplot(np.log(captured_data.sum(axis=1)), hist=False)
sns.distplot(np.log(data.toarray().sum(axis=1)), hist=False)
plt.xlabel('Log(total UMI count)')
plt.savefig(fig_path + 'simulation_stats.png', bbox_inches='tight')_____no_output_____
</code>
### Compare datasets generated by Poisson and hypergeometric processes_____no_output_____
<code>
_, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson')
_, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper')_____no_output_____q_list = [0.05, 0.1, 0.2, 0.3, 0.5]_____no_output_____plt.figure(figsize=(8, 2))
plt.subplots_adjust(wspace=0.3)
for idx, q in enumerate(q_list):
_, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson')
_, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper')
relative_poi_captured = poi_captured/poi_captured.sum(axis=1).reshape(-1, 1)
relative_hyper_captured = hyper_captured/hyper_captured.sum(axis=1).reshape(-1, 1)
poi_corr = np.corrcoef(relative_poi_captured, rowvar=False)
hyper_corr = np.corrcoef(relative_hyper_captured, rowvar=False)
sample_idx = np.random.choice(poi_corr.ravel().shape[0], 100000)
plt.subplot(1, len(q_list), idx+1)
plt.scatter(poi_corr.ravel()[sample_idx], hyper_corr.ravel()[sample_idx], s=1, alpha=1)
plt.plot([-1, 1], [-1, 1], 'm', lw=1)
# plt.xlim([-0.3, 0.4])
# plt.ylim([-0.3, 0.4])
if idx != 0:
plt.yticks([])
plt.title('q={}'.format(q))
plt.savefig(fig_path + 'poi_vs_hyp_sim_corr.png', bbox_inches='tight')_____no_output_____
</code>
### Compare Poisson vs HG estimators_____no_output_____
<code>
def compare_esimators(q, plot=False, true_data=None, var_q=1e-10):
q_sq = var_q + q**2
true_data = schypo.simulate.simulate_transcriptomes(1000, 1000, correlated=True) if true_data is None else true_data
true_relative_data = true_data / true_data.sum(axis=1).reshape(-1, 1)
qs, captured_data = schypo.simulate.capture_sampling(true_data, q, q_sq)
Nr = captured_data.sum(axis=1).mean()
captured_relative_data = captured_data/captured_data.sum(axis=1).reshape(-1, 1)
adata = sc.AnnData(sp.sparse.csr_matrix(captured_data))
sf = schypo.estimator._estimate_size_factor(adata.X, 'hyper_relative', total=True)
good_idx = (captured_data.mean(axis=0) > q)
# True moments
m_true, v_true, corr_true = true_relative_data.mean(axis=0), true_relative_data.var(axis=0), np.corrcoef(true_relative_data, rowvar=False)
rv_true = v_true/m_true**2#schypo.estimator._residual_variance(m_true, v_true, schypo.estimator._fit_mv_regressor(m_true, v_true))
# Compute 1D moments
m_obs, v_obs = captured_relative_data.mean(axis=0), captured_relative_data.var(axis=0)
rv_obs = v_obs/m_obs**2#schypo.estimator._residual_variance(m_obs, v_obs, schypo.estimator._fit_mv_regressor(m_obs, v_obs))
m_poi, v_poi = schypo.estimator._poisson_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0])
rv_poi = v_poi/m_poi**2#schypo.estimator._residual_variance(m_poi, v_poi, schypo.estimator._fit_mv_regressor(m_poi, v_poi))
m_hyp, v_hyp = schypo.estimator._hyper_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0], q=q)
rv_hyp = v_hyp/m_hyp**2#schypo.estimator._residual_variance(m_hyp, v_hyp, schypo.estimator._fit_mv_regressor(m_hyp, v_hyp))
# Compute 2D moments
corr_obs = np.corrcoef(captured_relative_data, rowvar=False)
# corr_obs = corr_obs[np.triu_indices(corr_obs.shape[0])]
idx1 = np.array([i for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]])
idx2 = np.array([j for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]])
sample_idx = np.random.choice(idx1.shape[0], 10000)
idx1 = idx1[sample_idx]
idx2 = idx2[sample_idx]
corr_true = corr_true[(idx1, idx2)]
corr_obs = corr_obs[(idx1, idx2)]
cov_poi = schypo.estimator._poisson_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2)
cov_hyp = schypo.estimator._hyper_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2, q=q)
corr_poi = schypo.estimator._corr_from_cov(cov_poi, v_poi[idx1], v_poi[idx2])
corr_hyp = schypo.estimator._corr_from_cov(cov_hyp, v_hyp[idx1], v_hyp[idx2])
corr_poi[np.abs(corr_poi) > 1] = np.nan
corr_hyp[np.abs(corr_hyp) > 1] = np.nan
mean_list = [m_obs, m_poi, m_hyp]
var_list = [rv_obs, rv_poi, rv_hyp]
corr_list = [corr_obs, corr_poi, corr_hyp]
estimated_list = [mean_list, var_list, corr_list]
true_list = [m_true, rv_true, corr_true]
if plot:
count = 0
for j in range(3):
for i in range(3):
plt.subplot(3, 3, count+1)
if i != 2:
plt.scatter(
np.log(true_list[i][good_idx]),
np.log(estimated_list[i][j][good_idx]),
s=0.1)
plt.plot(np.log(true_list[i][good_idx]), np.log(true_list[i][good_idx]), linestyle='--', color='m')
plt.xlim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max())
plt.ylim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max())
else:
x = true_list[i]
y = estimated_list[i][j]
print(x.shape, y.shape)
plt.scatter(
x,
y,
s=0.1)
plt.plot([-1, 1], [-1, 1],linestyle='--', color='m')
plt.xlim(-1, 1);
plt.ylim(-1, 1);
# if not (i == j):
# plt.yticks([]);
# plt.xticks([]);
if i == 1 or i == 0:
print((np.log(true_list[i][good_idx]) > np.log(estimated_list[i][j][good_idx])).mean())
count += 1
else:
return qs, good_idx, estimated_list, true_list_____no_output_____import matplotlib.pylab as pylab
params = {'legend.fontsize': 'small',
'axes.labelsize': 'medium',
'axes.titlesize':'medium',
'figure.titlesize':'medium',
'xtick.labelsize':'xx-small',
'ytick.labelsize':'xx-small'}
pylab.rcParams.update(params)_____no_output_____
</code>
<code>
true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], Nc=Nc)_____no_output_____q = 0.025
plt.figure(figsize=(4, 4))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
compare_esimators(q, plot=True, true_data=true_data)
plt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_2.5.png', bbox_inches='tight', dpi=1200)/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:67: RuntimeWarning: invalid value encountered in log
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:94: RuntimeWarning: invalid value encountered in log
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:94: RuntimeWarning: invalid value encountered in greater
q = 0.4
plt.figure(figsize=(4, 4))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
compare_esimators(q, plot=True, true_data=true_data)
plt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_40.png', bbox_inches='tight', dpi=1200)/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
def compute_mse(x, y, log=True):
if log:
return np.nanmean(np.abs(np.log(x)-np.log(y)))
else:
return np.nanmean(np.abs(x-y))
def concordance(x, y, log=True):
if log:
a = np.log(x)
b = np.log(y)
else:
a = x
b = y
cond = np.isfinite(a) & np.isfinite(b)
a = a[cond]
b = b[cond]
cmat = np.cov(a, b)
return 2*cmat[0,1]/(cmat[0,0] + cmat[1,1] + (a.mean()-b.mean())**2)
m_mse_list, v_mse_list, c_mse_list = [], [], []
# true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1],
# Nc=Nc)
q_list = [0.01, 0.025, 0.1, 0.15, 0.3, 0.5, 0.7, 0.99]
qs_list = []
for q in q_list:
qs, good_idx, est, true = compare_esimators(q, plot=False, true_data=true_data)
qs_list.append(qs)
m_mse_list.append([concordance(x[good_idx], true[0][good_idx]) for x in est[0]])
v_mse_list.append([concordance(x[good_idx], true[1][good_idx]) for x in est[1]])
c_mse_list.append([concordance(x, true[2], log=False) for x in est[2]])
m_mse_list, v_mse_list, c_mse_list = np.array(m_mse_list), np.array(v_mse_list), np.array(c_mse_list)/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
/data/home/anaconda3/envs/single_cell/lib/python3.7/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in log
# This is added back by InteractiveShellApp.init_path()
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'small',
'axes.labelsize': 'medium',
'axes.titlesize':'medium',
'figure.titlesize':'medium',
'xtick.labelsize':'small',
'ytick.labelsize':'small'}
pylab.rcParams.update(params)
plt.figure(figsize=(8, 3))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 3, 1)
plt.plot(q_list[1:], m_mse_list[:, 0][1:], '-o')
# plt.legend(['Naive,\nPoisson,\nHG'])
plt.ylabel('CCC log(mean)')
plt.xlabel('overall UMI efficiency (q)')
plt.subplot(1, 3, 2)
plt.plot(q_list[2:], v_mse_list[:, 0][2:], '-o')
plt.plot(q_list[2:], v_mse_list[:, 1][2:], '-o')
plt.plot(q_list[2:], v_mse_list[:, 2][2:], '-o')
plt.legend(['Naive', 'Poisson', 'HG'], ncol=3, loc='upper center', bbox_to_anchor=(0.4,1.15))
plt.ylabel('CCC log(variance)')
plt.xlabel('overall UMI efficiency (q)')
plt.subplot(1, 3, 3)
plt.plot(q_list[2:], c_mse_list[:, 0][2:], '-o')
plt.plot(q_list[2:], c_mse_list[:, 1][2:], '-o')
plt.plot(q_list[2:], c_mse_list[:, 2][2:], '-o')
# plt.legend(['Naive', 'Poisson', 'HG'])
plt.ylabel('CCC correlation')
plt.xlabel('overall UMI efficiency (q)')
plt.savefig(fig_path + 'poi_vs_hyper_rv_ccc.pdf', bbox_inches='tight')_____no_output_____plt.figure(figsize=(1, 1.3))
plt.plot(q_list, v_mse_list[:, 0], '-o', ms=4)
plt.plot(q_list, v_mse_list[:, 1], '-o', ms=4)
plt.plot(q_list, v_mse_list[:, 2], '-o', ms=4)
plt.savefig(fig_path + 'poi_vs_hyper_ccc_var_rv_inset.pdf', bbox_inches='tight')_____no_output_____plt.figure(figsize=(1, 1.3))
plt.plot(q_list, c_mse_list[:, 0], '-o', ms=4)
plt.plot(q_list, c_mse_list[:, 1], '-o', ms=4)
plt.plot(q_list, c_mse_list[:, 2], '-o', ms=4)
plt.savefig(fig_path + 'poi_vs_hyper_ccc_corr_inset.pdf', bbox_inches='tight')_____no_output_____
</code>
| {
"repository": "yelabucsf/scrna-parameter-estimation",
"path": "analysis/simulation/estimator_validation.ipynb",
"matched_keywords": [
"Scanpy",
"scRNA"
],
"stars": 2,
"size": 164842,
"hexsha": "d01adf518daccca71b47da8482f0e2946bc01cab",
"max_line_length": 32536,
"avg_line_length": 170.291322314,
"alphanum_fraction": 0.884798777
} |
# Notebook from fatginger1024/NumericalMethods
Path: numerical5.ipynb
<center> <h1>Numerical Methods -- Assignment 5</h1> </center>_____no_output_____## Problem1 -- Energy density_____no_output_____The matter and radiation density of the universe at redshift $z$ is
$$\Omega_m(z) = \Omega_{m,0}(1+z)^3$$
$$\Omega_r(z) = \Omega_{r,0}(1+z)^4$$
where $\Omega_{m,0}=0.315$ and $\Omega_{r,0} = 9.28656 \times 10^{-5}$_____no_output_____### (a) Plot_____no_output_____
<code>
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
z = np.linspace(-1000,4000,10000)
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,3)
O_r = O_r0*np.power(z+1,4)
#define where the roots are
x1 = -1; x2 = O_m0/O_r0 - 1  # z_eq where Omega_m(z) = Omega_r(z), i.e. 1+z = O_m0/O_r0
y1 = O_m0*np.power(x1+1,3)
y2 = O_m0*np.power(x2+1,3)
x = np.array([x1,x2])
y = np.array([y1,y2])
#plot the results
plt.figure(figsize=(8,8))
plt.plot(z,O_m,'-',label="matter density")
plt.plot(z,O_r,'-',label="radiation density")
plt.plot(x,y,'h',label=r"$z_{eq}$")
plt.xlabel("redshift(z)")
plt.ylabel("energy density")
plt.legend()
plt.show()_____no_output_____
</code>
### (b) Analytical solution_____no_output_____An analytical solution can be found by equating the two densities. Since $z$ denotes a redshift, a physical quantity, only real solutions are meaningful. Thus
\begin{align*}
\Omega_m(z) &= \Omega_r(z)\\
\Omega_{m,0}(1+z)^3 &= \Omega_{r,0}(1+z)^4\\
(1+z)^3\left(0.315-9.28656 \times 10^{-5}\,(1+z)\right)&=0\\
(1+z)^3 &= 0\\
\text{or}\quad 0.315-9.28656 \times 10^{-5}\,(1+z)&=0\\
\end{align*}
$z_1 = -1$ or $z_2 = 3391.0$_____no_output_____### (c) Bisection method_____no_output_____The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow. scipy.optimize.bisect calculates the roots for a given function, but for it to work $f(a)$ and $f(b)$ must take different signs (so that there exists a root $\in [a,b]$)._____no_output_____
<code>
from scipy.optimize import bisect
def f(z):
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,3)
O_r = O_r0*np.power(z+1,4)
return O_m -O_r
z1 = bisect(f,-1000,0,xtol=1e-10)
z2 = bisect(f,0,4000,xtol=1e-10)
print "The roots are found to be:",z1,z2The roots are found to be: -1.00000000003 3390.9987595
</code>
### (d) Secant method_____no_output_____The $\textit{secant method}$ uses secant lines to find the root. A secant line is a straight line that intersects a curve at two points. In the secant method, a line is drawn through two points on the continuous function and extended until it crosses the $x$ axis. The secant line through $(a, f(a))$ and $(b, f(b))$ crosses the $x$ axis at the point $c$ satisfying
$$0 = \frac{f(b)-f(a)}{b-a}(c-b)+f(b)$$
Solving for $c$ gives
$$c = b-f(b)\frac{b-a}{f(b)-f(a)}$$_____no_output_____
<code>
def secant(f, x0, x1, eps):
f_x0 = f(x0)
f_x1 = f(x1)
iteration_counter = 0
while abs(f_x1) > eps and iteration_counter < 100:
try:
denominator = float(f_x1 - f_x0)/(x1 - x0)
x = x1 - float(f_x1)/denominator
except ZeroDivisionError:
print "Error! - denominator zero for x = ", x
sys.exit(1) # Abort with error
x0 = x1
x1 = x
f_x0 = f_x1
f_x1 = f(x1)
iteration_counter += 1
# Here, either a solution is found, or too many iterations
if abs(f_x1) > eps:
iteration_counter = -1
return x, iteration_counter
#find the roots in the nearby region, with an accuracy of 1e-10
z1 = secant(f,-10,-0.5,1e-10)[0]
z2 = secant(f,3000,4000,1e-10)[0]
print "The roots are found to be:",z1,z2
The roots are found to be: -0.999466618551 3390.9987595
</code>
### (e) Newton-Raphson method_____no_output_____In numerical methods, $\textit{Newton-Raphson method}$ is a method for finding successively better approximations to the roots of a real-valued function. The algorithm is as follows:
* Starting with a function $f$ defined over the real numbers $x$, the function's derivative $f'$, and an initial guess $x_0$ for a root of the function $f$, a better approximation $x_1$ is:
$$x_1 = x_0 -\frac{f(x_0)}{f'(x_0)}$$
* The process is then repeated as
$$x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)}$$
until a sufficiently satisfactory value is reached._____no_output_____
<code>
def fprime(z):
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,2)
O_r = O_r0*np.power(z+1,3)
return 3*O_m -4*O_r
def Newton(f, dfdx, x, eps):
f_value = f(x)
iteration_counter = 0
while abs(f_value) > eps and iteration_counter < 100:
try:
x = x - float(f_value)/dfdx(x)
except ZeroDivisionError:
print "Error! - derivative zero for x = ", x
sys.exit(1) # Abort with error
f_value = f(x)
iteration_counter += 1
# Here, either a solution is found, or too many iterations
if abs(f_value) > eps:
iteration_counter = -1
return x, iteration_counter
z1 = Newton(f,fprime,0,1e-10)[0]
z2 = Newton(f,fprime,3000,1e-10)[0]
print "The roots are found to be:",z1,z2
The roots are found to be: -0.9993234602 3390.9987595
</code>
Now change the initial guesses to values far from those obtained in (b), and test how the three algorithms perform._____no_output_____
<code>
#test how the bisection method perform
import time
start1 = time.time()
z1 = bisect(f,-1000,1000,xtol=1e-10)
end1 = time.time()
start2 = time.time()
z2 = bisect(f,3000,10000,xtol=1e-10)
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2The roots are found to be: -1.00000000003 3390.9987595
With a deviation of: 3.31965566147e-11 1.11306528845e-14
Time used are: 0.00043797492981 0.000349998474121
#test how the secant method perform
start1 = time.time()
z1 = secant(f,-1000,1000,1e-10)[0]
end1 = time.time()
start2 = time.time()
z2 = secant(f,3000,10000,1e-10)[0]
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2
print "Roots found after",secant(f,-10,-0.5,1e-10)[1],"and",secant(f,3000,4000,1e-10)[1],"loops"
The roots are found to be: -0.999414972528 3390.9987595
With a deviation of: 0.000585027471743 0.0
Time used are: 0.000823020935059 0.000194072723389
Roots found after 25 and 8 loops
#test how the newton-Raphson method perform
start1 = time.time()
z1 = Newton(f,fprime,-1000,1e-10)[0]
end1 = time.time()
start2 = time.time()
z2 = Newton(f,fprime,10000,1e-10)[0]
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2
print "Roots found after",Newton(f,fprime,0,1e-10)[1],"and",Newton(f,fprime,3000,1e-10)[1],"loops"The roots are found to be: -1.00051824126 3390.9987595
With a deviation of: 0.000518241260632 0.0
Time used are: 0.000991821289062 0.000278949737549
Roots found after 18 and 7 loops
</code>
For the function tested here, the bisection method is the fastest and most reliable at finding the first root; for the second root, however, both the secant method and Newton's method perform better, with zero deviation from the true value and a much shorter run time. In general, for more complicated calculations the bisection method is relatively slow, while within a given tolerance Newton's method and the secant method will usually perform better._____no_output_____## Problem 2 -- Potential_____no_output_____The $\textit{Navarro-Frenk-White}$ and $\textit{Hernquist}$ potentials can be expressed as the following equations:
$$\Phi_{NFW}(r) = -\Phi_0\frac{r_s}{r}\,\ln(1+r/r_s)$$
$$\Phi_{Hernquist}(r) = -\Phi_0\,\frac{1}{2(1+r/r_s)}$$
with $\Phi_0 = 1.659 \times 10^4 \ km^2/s^2$ and $r_s = 15.61 \ kpc$.
The apocentre and pericentre can be found by solving the following equation:
\begin{align*}
E_{tot} &= \frac{1}{2}\left(v_t^2+v_r^2\right)+\Phi\\
\end{align*}
where $L = r\,v_t$ is the angular momentum, and $E_{tot}$ is the total energy of the orbit, which can be computed from the observed $(r,v_t,v_r)$ of a given star.
Define the residue function
$$R\equiv E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$$
so that the pericentre and apocentre are the radii where $R=0$.
Then, the radial action $J_r$ is defined as $\textit{(Jason L. Sanders,2015)}$
$$J_r = \frac{1}{\pi}\int_{r_p}^{r_a}dr\sqrt{2E-2\Phi-\frac{L^2}{r^2}}$$
where $r_p$ is the pericentric radius and $r_a$ is the apocentric radius._____no_output_____
<code>
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import newton
from scipy.integrate import quad
from math import *
r = np.array([7.80500, 15.6100,31.2200,78.0500,156.100]) #r in kpc
vt = np.array([139.234,125.304,94.6439,84.5818,62.8640]) # vt in km/s
vr = np.array([-15.4704,53.7018,-283.932,-44.5818,157.160]) # vr in km/s
#NFW profile potential
def NFW(r):
phi0 = 1.659e4
rs = 15.61
ratio = rs/r
return -phi0*ratio*np.log(1+1/ratio)
#Hernquist profile potential
def H(r):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return -phi0/(2*(1+ratio))
#1st derivative of Hernquist profile potential
def H_d(r):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return phi0*0.5/rs*((1+ratio)**(-2))
#1st derivative of NFW profile potential
def NFW_d(r):
phi0 = 1.659e4
rs = 15.61
ratio = rs/r
return -phi0*rs*((-1/r**2)*np.log(1+1/ratio)+1/(r*rs)*(1+1/ratio)**(-1))
#total energy, NFW profile
def E_NFW(r,vt,vr):
E = 0.5*(vt**2+vr**2)+NFW(r)
return E
#total energy, Hernquist profile
def E_H(r,vt,vr):
E = 0.5*(vt**2+vr**2)+H(r)
return E
#Residue function
def Re(r,Energy,momentum,p):
return Energy - 0.5*(momentum/r)**2-p
#Residue function for NFW profile
def R_NFW(r,Energy,momentum):
return Energy - 0.5*(momentum/r)**2-NFW(r)
#Residue function for Hernquist profile
def R_H(r,Energy,momentum):
return Energy - 0.5*(momentum/r)**2-H(r)
#derivative of residue of NFW profile
def R_dNFW(r,Energy,momentum):
return Energy*0+momentum**2*r**(-3)-NFW_d(r)
#derivative of residue of Hernquist profile
def R_dH(r,Energy,momentum):
return Energy*0+momentum**2*r**(-3)-H_d(r)
#second derivative of residue of Hernquist profile, come handy if the
#calculated value for pericentre for Hernquist profile is too far off
#from the value calculated for NFW profile
def R_ddH(r,Energy,momentum):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return Energy*0-3*momentum**2*r**(-4)+phi0*0.5/rs**2*((1+ratio)**(-3))
#function that defines the radial action
def r_actionNFW(r,Energy,momentum):
return np.sqrt(2*(Energy-NFW(r))-(momentum/r)**2)/pi
def r_actionH(r,Energy,momentum):
return np.sqrt(2*(Energy-H(r))-(momentum/r)**2)/pi
R1 = np.linspace(7,400,1000)
R2 = np.linspace(10,500,1000)
R3 = np.linspace(7,600,1000)
R4 = np.linspace(50,800,1000)
R5 = np.linspace(50,1500,1000)
Momentum = r*vt
Energy_nfw = E_NFW(r,vt,vr)
Energy_h = E_H(r,vt,vr)
#plot results for 5 stars
#1st star
i = 0
R_nfw = Re(R1,Energy_nfw[i],Momentum[i],NFW(R1))
R_h = Re(R1,Energy_h[i],Momentum[i],H(R1))
plt.figure(figsize=(15,10))
plt.plot(R1,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R1,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"1st star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,100,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH)
e3 = R_H(z3,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z3,e3,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"The pericentre and apocentre are found to be: 7.75067639682 kpc and 178.316271432 kpc for the NFW profile
The pericentre is found to be: 7.75234061029 kpc for the Hernquist profile
#2nd star
i = 1
R_nfw = Re(R2,Energy_nfw[i],Momentum[i],NFW(R2))
R_h = Re(R2,Energy_h[i],Momentum[i],H(R2))
plt.figure(figsize=(15,10))
plt.plot(R2,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R2,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"2nd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH)
e3 = R_H(z3,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z3,e3,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"The pericentre and apocentre are found to be: 14.1075227275 kpc and 375.764622203 kpc for the NFW profile
The pericentre is found to be: 14.1986051415 kpc for the Hernquist profile
#3rd star
i = 2
R_nfw = Re(R3,Energy_nfw[i],Momentum[i],NFW(R3))
R_h = Re(R3,Energy_h[i],Momentum[i],H(R3))
plt.figure(figsize=(15,10))
plt.plot(R3,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R3,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"3rd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
print "The pericentre is found to be:",z1,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
z2 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e2 = R_H(z2,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z2,"kpc","for the Hernquist profile"
plt.plot(z2,e2,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"The pericentre is found to be: 9.47357617853 kpc for the NFW profile
The pericentre is found to be: 9.62166037758 kpc for the Hernquist profile
#4th star
i = 3
R_nfw = Re(R4,Energy_nfw[i],Momentum[i],NFW(R4))
R_h = Re(R4,Energy_h[i],Momentum[i],H(R4))
plt.figure(figsize=(15,10))
plt.plot(R4,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R4,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"4th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e3 = R_H(z3,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z3,e3,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"The pericentre and apocentre are found to be: 64.9161691596 kpc and 697.429352749 kpc for the NFW profile
The pericentre is found to be: 67.7971003933 kpc for the Hernquist profile
#5th star
i = 4
R_nfw = Re(R5,Energy_nfw[i],Momentum[i],NFW(R5))
R_h = Re(R5,Energy_h[i],Momentum[i],H(R5))
plt.figure(figsize=(15,10))
plt.plot(R5,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R5,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"5th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
print "The pericentre is found to be:",z1,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
z2 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e2 = R_H(z2,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z2,"kpc","for the Hernquist profile"
plt.plot(z2,e2,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"The pericentre is found to be: 52.2586723359 kpc for the NFW profile
The pericentre is found to be: 55.9497763757 kpc for the Hernquist profile
</code>
The table below lists all the parameters of the five stars._____no_output_____<img src="Desktop/table.png">_____no_output_____## Problem 3 -- System of equations_____no_output_____$$f(x,y) = x^2+y^2-50=0$$
$$g(x,y) = x \times y -25 = 0$$_____no_output_____### (a) Analytical solution_____no_output_____First $f(x,y)-2g(x,y)$,we find:
\begin{align*}
x^2+y^2-2xy &=0\\
(x-y)^2 &= 0\\
x&=y
\end{align*}
Then $f(x,y)+2g(x,y)$,we find:
\begin{align*}
x^2+y^2+2xy &=100\\
(x+y)^2 &= 100\\
x,y &= 5,5 \quad \text{or} \quad -5,-5
\end{align*}_____no_output_____### (b) Newton's method_____no_output_____Newton-Raphson method can also be applied to solve multivariate systems. The algorithm is simply as follows:
* Suppose we have an N-D multivariate system of the form:
\begin{cases}
f_1(x_1,...,x_N)=f_1(\mathbf{x})=0\\
f_2(x_1,...,x_N)=f_2(\mathbf{x})=0\\
...... \\
f_N(x_1,...,x_N)=f_N(\mathbf{x})=0\\
\end{cases}
where we have defined
$$\mathbf{x}=[x_1,...,x_N]^T$$
Define a vector function
$$\mathbf{f}(\mathbf{x})=[f_1(\mathbf{x}),...,f_N(\mathbf{x})]^T$$
So that the equation system above can be written as
$$\mathbf{f}(\mathbf{x})=\mathbf{0}$$
* $\mathbf{J}_{\mathbf{f}}(\mathbf{x})$ is the $\textit{Jacobian matrix}$ over the function vector $\mathbf{f}(\mathbf{x})$
$$\mathbf{J}_{\mathbf{f}}(\mathbf{x})=\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \dots & \frac{\partial f_1}{\partial x_N} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_N}{\partial x_1} & \dots & \frac{\partial f_N}{\partial x_N}
\end{bmatrix}$$
* To first order in $\delta \mathbf{x}$ (exact if all equations are linear), we have
$$\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=\mathbf{f}(\mathbf{x})+\mathbf{J}(\mathbf{x})\delta\mathbf{x}$$
* by assuming $\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=0$, we can find the roots as $\mathbf{x}+\delta \mathbf{x}$, where
$$\delta \mathbf{x} = -\mathbf{J}(\mathbf{x})^{-1}\mathbf{f}(\mathbf{x})$$
* The approximation can be improved iteratively
$$\mathbf{x}_{n+1} = \mathbf{x}_n +\delta \mathbf{x}_n = \mathbf{x}_n-\mathbf{J}(\mathbf{x}_n)^{-1}\mathbf{f}(\mathbf{x}_n)$$_____no_output_____
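As a concrete illustration of this iteration (separate from the `scipy.optimize.fsolve` call used in the next cell), a minimal hand-rolled version for the system above could look like the sketch below; the function and variable names are ours.
<code>
import numpy as np

def newton_system(F, J, x0, tol=1e-8, maxiter=100):
    """Multivariate Newton iteration: x_{n+1} = x_n - J(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        step = np.linalg.solve(J(x), F(x))  # solve J*dx = F instead of inverting J
        x = x - step
        if np.max(np.abs(F(x))) < tol:
            break
    return x

F = lambda x: np.array([x[0]**2 + x[1]**2 - 50, x[0]*x[1] - 25])
J = lambda x: np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])
print(newton_system(F, J, [1.0, 4.0]))  # converges to approximately (5, 5)
</code>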
<code>
from scipy.optimize import fsolve
import numpy as np
f1 = lambda x: [x[0]**2+x[1]**2-50,x[0]*x[1]-25]
#the Jacobian needed to implement Newton's method
fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2)
#define the domain where we want to find the solution (x,y)
a = np.linspace(-10,10,100)
b = a
#for every point (a,b), pass on to fsolve and append the result
#then round the result and see how many pairs of solutions there are
i = 0
result = np.array([[5,5]])
#print result
for a,b in zip(a,b):
x = fsolve(f1,[a,b],fprime=fd)
x = np.round(x)
result = np.append(result,[x],axis=0)
print "The sets of solutions are found to be:",np.unique(result,axis=0)
The sets of solutions are found to be: [[-5. -5.]
[ 5. 5.]]
</code>
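The analytical result can also be confirmed symbolically -- a quick sketch, assuming `sympy` is available (it is not otherwise used in this notebook):
<code>
import sympy
xs, ys = sympy.symbols('x y', real=True)
print(sympy.solve([xs**2 + ys**2 - 50, xs*ys - 25], [xs, ys]))  # [(-5, -5), (5, 5)]
</code>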
From the above we confirm that the only solutions are $(x,y) = (5,5)$ and $(x,y) = (-5,-5)$._____no_output_____### (c) Convergence_____no_output_____
<code>
%config InlineBackend.figure_format = 'retina'
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
def f(x, y):
return x**2+y**2-50;
def g(x, y):
return x*y-25
x = np.linspace(-6, 6, 500)
@np.vectorize
def fy(x):
x0 = 0.0
def tmp(y):
return f(x, y)
y1, = fsolve(tmp, x0)
return y1
@np.vectorize
def gy(x):
x0 = 0.0
def tmp(y):
return g(x, y)
y1, = fsolve(tmp, x0)
return y1
plt.plot(x, fy(x), x, gy(x))
plt.xlabel('x')
plt.ylabel('y')
plt.rc('xtick', labelsize=10) # fontsize of the tick labels
plt.rc('ytick', labelsize=10)
plt.legend(['fy', 'gy'])
plt.show()
#print fy(x)_____no_output_____i =1
I = np.array([])
F = np.array([])
G = np.array([])
X_std = np.array([])
Y_std = np.array([])
while i<50:
x_result = fsolve(f1,[-100,-100],maxfev=i)
f_result = f(x_result[0],x_result[1])
g_result = g(x_result[0],x_result[1])
x1_std = abs(x_result[0]+5.0)
x2_std = abs(x_result[1]+5.0)
F = np.append(F,f_result)
G = np.append(G,g_result)
I = np.append(I,i)
X_std = np.append(X_std,x1_std)
Y_std = np.append(Y_std,x2_std)
i+=1
xtol = 1.49012e-08
plt.loglog(I,np.abs(F),I,np.abs(G))
plt.title("converge of f and g")
plt.xlabel("iterations")
plt.ylabel("function values")
plt.legend(['f','g'])
plt.show()_____no_output_____plt.loglog(I,X_std,I,Y_std)
plt.axhline(y=xtol,color='#b22222',lw=3)
plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$")
plt.xlabel("iterations")
plt.ylabel("Deviation values")
plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance'])
plt.show()_____no_output_____
</code>
### (d) Maximum iterations_____no_output_____Now we also supply the Jacobian explicitly. The Jacobian of the system of equations is
$$\mathbf{J} = \begin{bmatrix}
\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\
\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}
\end{bmatrix}$$
$$=\begin{bmatrix}
2x & 2y \\
y & x
\end{bmatrix}$$_____no_output_____
<code>
fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2)
i =1
I = np.array([])
F = np.array([])
G = np.array([])
X_std = np.array([])
Y_std = np.array([])
while i<50:
x_result = fsolve(f1,[-100,-100],fprime=fd,maxfev=i)
f_result = f(x_result[0],x_result[1])
g_result = g(x_result[0],x_result[1])
x1_std = abs(x_result[0]+5.0)
x2_std = abs(x_result[1]+5.0)
F = np.append(F,f_result)
G = np.append(G,g_result)
I = np.append(I,i)
X_std = np.append(X_std,x1_std)
Y_std = np.append(Y_std,x2_std)
i+=1
xtol = 1.49012e-08
plt.loglog(I,np.abs(F),I,np.abs(G))
plt.title("converge of f and g")
plt.xlabel("iterations")
plt.ylabel("function values")
plt.legend(['f','g'])
plt.show()_____no_output_____plt.loglog(I,X_std,I,Y_std)
plt.axhline(y=xtol,color='#b22222',lw=3)
plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$")
plt.xlabel("iterations")
plt.ylabel("Deviation values")
plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance'])
plt.show()_____no_output_____
</code>
Now that the Jacobian has been supplied, both plots above drop to zero more quickly: $f$ and $g$ (and likewise $\Delta x$ and $\Delta y$) converge faster and reach the tolerance much sooner than in the case without the Jacobian. This happens because, without an analytic Jacobian, the solver has to build one up numerically: it needs several function evaluations to estimate the partial derivatives from finite differences of the form $\frac{f(x+\Delta x)-f(x)}{\Delta x}$, and the accuracy of that estimate only improves as evaluations accumulate. When the Jacobian is supplied, the first derivatives are known exactly from the start, so even a very poor initial guess still gives a rapid (roughly exponential) gain in accuracy; the gain without the Jacobian is also exponential, but considerably slower (roughly ten times slower here)._____no_output_____
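The finite-difference idea that the solver relies on when no Jacobian is given can be sketched explicitly -- a toy illustration of forward differences, not the library's actual internals:
<code>
import numpy as np

def numerical_jacobian(F, x, h=1e-8):
    """Forward-difference approximation J[i, j] ~ (F_i(x + h*e_j) - F_i(x)) / h."""
    x = np.asarray(x, dtype=float)
    F0 = np.asarray(F(x))
    J = np.zeros((F0.size, x.size))
    for j in range(x.size):
        xh = x.copy()
        xh[j] += h
        J[:, j] = (np.asarray(F(xh)) - F0) / h
    return J

F = lambda x: np.array([x[0]**2 + x[1]**2 - 50, x[0]*x[1] - 25])
print(numerical_jacobian(F, [-100.0, -100.0]))  # ~ [[-200, -200], [-100, -100]]
print(fd([-100.0, -100.0]))                     # analytic Jacobian defined above
</code>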
| {
"repository": "fatginger1024/NumericalMethods",
"path": "numerical5.ipynb",
"matched_keywords": [
"STAR",
"Salmon"
],
"stars": null,
"size": 931614,
"hexsha": "d01d3b492a4f59193788b65055f9bf9557e1948e",
"max_line_length": 139344,
"avg_line_length": 735.2912391476,
"alphanum_fraction": 0.9354936701
} |
# Notebook from daviesje/21cmFAST
Path: docs/tutorials/coeval_cubes.ipynb
# Running and Plotting Coeval Cubes_____no_output_____The aim of this tutorial is to introduce you to how `21cmFAST` does the most basic operations: producing single coeval cubes, and visually verifying them. It is a great place to get started with `21cmFAST`._____no_output_____
<code>
%matplotlib inline
import matplotlib.pyplot as plt
import os
# We change the default level of the logger so that
# we can see what's happening with caching.
import logging, sys, os
logger = logging.getLogger('21cmFAST')
logger.setLevel(logging.INFO)
import py21cmfast as p21c
# For plotting the cubes, we use the plotting submodule:
from py21cmfast import plotting
# For interacting with the cache
from py21cmfast import cache_tools
_____no_output_____print(f"Using 21cmFAST version {p21c.__version__}")Using 21cmFAST version 3.0.2
</code>
Clear the cache so that we get the same results for the notebook every time (don't worry about this for now). Also, set the default output directory to `_cache/`:_____no_output_____
<code>
if not os.path.exists('_cache'):
os.mkdir('_cache')
p21c.config['direc'] = '_cache'
cache_tools.clear_cache(direc="_cache")2020-10-02 09:51:10,651 | INFO | Removed 0 files from cache.
</code>
## Basic Usage_____no_output_____The simplest (and typically most efficient) way to produce a coeval cube is simply to use the `run_coeval` method. This consistently performs all steps of the calculation, re-using any data that it can without re-computation or increased memory overhead._____no_output_____
<code>
coeval8, coeval9, coeval10 = p21c.run_coeval(
redshift = [8.0, 9.0, 10.0],
user_params = {"HII_DIM": 100, "BOX_LEN": 100, "USE_INTERPOLATION_TABLES": True},
cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
astro_params = p21c.AstroParams({"HII_EFF_FACTOR":20.0}),
random_seed=12345
)_____no_output_____
</code>
There are a number of possible inputs for `run_coeval`, which you can check out either in the [API reference](../reference/py21cmfast.html) or by calling `help(p21c.run_coeval)`. Notably, the `redshift` must be given: it can be a single number, or a list of numbers, defining the redshift at which the output coeval cubes will be defined.
Other params we've given here are `user_params`, `cosmo_params` and `astro_params`. These are all used for defining input parameters into the backend C code (there's also another possible input of this kind; `flag_options`). These can be given either as a dictionary (as `user_params` has been), or directly as a relevant object (like `cosmo_params` and `astro_params`). If creating the object directly, the parameters can be passed individually or via a single dictionary. So there's a lot of flexibility there! Nevertheless we *encourage* you to use the basic dictionary. The other ways of passing the information are there so we can use pre-defined objects later on. For more information about these "input structs", see the [API docs](../reference/_autosummary/py21cmfast.inputs.html).
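For instance, the following are interchangeable ways of specifying the same non-default cosmology (a small sketch; the variable names here are ours):
<code>
# Any of these can be passed to `run_coeval` as `cosmo_params`.
cosmo_as_dict = {"SIGMA_8": 0.8}                      # plain dictionary
cosmo_as_object = p21c.CosmoParams(SIGMA_8=0.8)       # keyword arguments
cosmo_from_dict = p21c.CosmoParams({"SIGMA_8": 0.8})  # dictionary passed to the class
</code>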
We've also given a `direc` option: this is the directory in which to search for cached data (and also where cached data should be written). Throughout this notebook we're going to set this directly to the `_cache` folder, which allows us to manage it directly. By default, the cache location is set in the global configuration in `~/.21cmfast/config.yml`. You'll learn more about caching further on in this tutorial.
Finally, we've given a random seed. This sets all the random phases for the simulation, and ensures that we can exactly reproduce the same results on every run.
The output of `run_coeval` is a list of `Coeval` instances, one for each input redshift (it's just a single object if a single redshift was passed, not a list). They store *everything* related to that simulation, so that it can be completely compared to other simulations.
For example, the input parameters:_____no_output_____
<code>
print("Random Seed: ", coeval8.random_seed)
print("Redshift: ", coeval8.redshift)
print(coeval8.user_params)Random Seed: 12345
Redshift: 8.0
UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False)
</code>
This is where the utility of being able to pass a *class instance* for the parameters arises: we could run another iteration of coeval cubes, with the same user parameters, simply by doing `p21c.run_coeval(user_params=coeval8.user_params, ...)`.
Also in the `Coeval` instance are the various outputs from the different steps of the computation. You'll see more about what these steps are further on in the tutorial. But for now, we show that various boxes are available:_____no_output_____
<code>
print(coeval8.hires_density.shape)
print(coeval8.brightness_temp.shape)(300, 300, 300)
(100, 100, 100)
</code>
Along with these, full instances of the output from each step are available as attributes that end with "struct". These instances themselves contain the `numpy` arrays of the data cubes, and some other attributes that make them easier to work with:_____no_output_____
<code>
coeval8.brightness_temp_struct.global_Tb_____no_output_____
</code>
By default, each of the components of the cube are cached to disk (in our `_cache/` folder) as we run it. However, the `Coeval` cube itself is _not_ written to disk by default. Writing it to disk incurs some redundancy, since that data probably already exists in the cache directory in seperate files.
Let's save to disk. The save method by default writes in the current directory (not the cache!):_____no_output_____
<code>
filename = coeval8.save(direc='_cache')_____no_output_____
</code>
The filename of the saved file is returned:_____no_output_____
<code>
print(os.path.basename(filename))Coeval_z8.0_a3c7dea665420ae9c872ba2fab1b3d7d_r12345.h5
</code>
Such files can be read in:_____no_output_____
<code>
new_coeval8 = p21c.Coeval.read(filename, direc='.')_____no_output_____
</code>
Some convenient plotting functions exist in the `plotting` module. These can work directly on `Coeval` objects, or any of the output structs (as we'll see further on in the tutorial). By default the `coeval_sliceplot` function will plot the `brightness_temp`, using the standard traditional colormap:_____no_output_____
<code>
fig, ax = plt.subplots(1,3, figsize=(14,4))
for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])):
plotting.coeval_sliceplot(coeval, ax=ax[i], fig=fig);
plt.title("z = %s"%redshift)
plt.tight_layout()_____no_output_____
</code>
Any 3D field can be plotted, by setting the `kind` argument. For example, we could alternatively have plotted the dark matter density cubes perturbed to each redshift:_____no_output_____
<code>
fig, ax = plt.subplots(1,3, figsize=(14,4))
for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])):
plotting.coeval_sliceplot(coeval, kind='density', ax=ax[i], fig=fig);
plt.title("z = %s"%redshift)
plt.tight_layout()_____no_output_____
</code>
To see more options for the plotting routines, see the [API Documentation](../reference/_autosummary/py21cmfast.plotting.html)._____no_output_____`Coeval` instances are not cached themselves -- they are containers for data that is itself cached (i.e. each of the `_struct` attributes of `Coeval`). See the [api docs](../reference/_autosummary/py21cmfast.outputs.html) for more detailed information on these.
You can see the filename of each of these structs (or the filename it would have if it were cached -- you can opt to *not* write out any given dataset):_____no_output_____
<code>
coeval8.init_struct.filename_____no_output_____
</code>
You can also write the struct anywhere you'd like on the filesystem. This will not be able to be automatically used as a cache, but it could be useful for sharing files with colleagues._____no_output_____
<code>
coeval8.init_struct.save(fname='my_init_struct.h5')_____no_output_____
</code>
This brief example covers most of the basic usage of `21cmFAST` (at least with `Coeval` objects -- there are also `Lightcone` objects for which there is a separate tutorial).
For the rest of the tutorial, we'll cover a more advanced usage, in which each step of the calculation is done independently._____no_output_____## Advanced Step-by-Step Usage_____no_output_____Most users most of the time will want to use the high-level `run_coeval` function from the previous section. However, there are several independent steps when computing the brightness temperature field, and these can be performed one-by-one, adding any other effects between them if desired. This means that the new `21cmFAST` is much more flexible. In this section, we'll go through in more detail how to use the lower-level methods.
Each step in the chain will receive a number of input-parameter classes which define how the calculation should run. These are the `user_params`, `cosmo_params`, `astro_params` and `flag_options` that we saw in the previous section.
Conversely, each step is performed by running a function which will return a single object. Every major function returns an object of the same fundamental class (an ``OutputStruct``) which has various methods for reading/writing the data, and ensuring that it's in the right state to receive/pass to and from C.
These are the objects stored as `init_box_struct` etc. in the `Coeval` class.
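To preview where we are heading, the full manual chain at a single redshift looks roughly like this (just a sketch -- each call is introduced properly in the sections below):
<code>
# Sketch of the step-by-step pipeline covered below.
init = p21c.initial_conditions(user_params={"HII_DIM": 100, "BOX_LEN": 100}, random_seed=54321)
perturbed = p21c.perturb_field(redshift=8.0, init_boxes=init)
ionized = p21c.ionize_box(perturbed_field=perturbed)
bt = p21c.brightness_temperature(ionized_box=ionized, perturbed_field=perturbed)
</code>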
As we move through each step, we'll outline some extra details, hints and tips about using these inputs and outputs._____no_output_____### Initial Conditions_____no_output_____The first step is to get the initial conditions, which defines the *cosmological* density field before any redshift evolution is applied._____no_output_____
<code>
initial_conditions = p21c.initial_conditions(
user_params = {"HII_DIM": 100, "BOX_LEN": 100},
cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
random_seed=54321
)_____no_output_____
</code>
We've already come across all these parameters as inputs to the `run_coeval` function. Indeed, most of the steps have very similar interfaces, and are able to take a random seed and parameters for where to look for the cache. We use a different seed than in the previous section so that all our boxes are "fresh" (we'll show how the caching works in a later section).
These initial conditions have 100 cells per side, and a box length of 100 Mpc. Note again that they can either be passed as a dictionary containing the input parameters, or an actual instance of the class. While the former is the suggested way, one benefit of the latter is that it can be queried for the relevant parameters (by using ``help`` or a post-fixed ``?``), or even queried for defaults:_____no_output_____
<code>
p21c.CosmoParams._defaults______no_output_____
</code>
(these defaults correspond to the Planck15 cosmology contained in Astropy)._____no_output_____So what is in the ``initial_conditions`` object? It is what we call an ``OutputStruct``, and we have seen it before, as the `init_box_struct` attribute of `Coeval`. It contains a number of arrays specifying the density and velocity fields of our initial conditions, as well as the defining parameters. For example, we can easily show the cosmology parameters that are used (note the non-default $\sigma_8$ that we passed):_____no_output_____
<code>
initial_conditions.cosmo_params_____no_output_____
</code>
A handy tip is that the ``CosmoParams`` class also has a reference to a corresponding Astropy cosmology, which can be used more broadly:_____no_output_____
<code>
initial_conditions.cosmo_params.cosmo_____no_output_____
</code>
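For example, the attached Astropy cosmology can be used directly for ordinary cosmological calculations (a small sketch using standard Astropy methods):
<code>
cosmo = initial_conditions.cosmo_params.cosmo
print(cosmo.comoving_distance(8.0))  # comoving distance to z=8
print(cosmo.H(8.0))                  # Hubble parameter at z=8
</code>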
Merely printing the initial conditions object gives a useful representation of its dependent parameters:_____no_output_____
<code>
print(initial_conditions)InitialConditions(UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False);
CosmoParams(OMb:0.04897468161869667, OMm:0.30964144154550644, POWER_INDEX:0.9665, SIGMA_8:0.8, hlittle:0.6766);
random_seed:54321)
</code>
(side-note: the string representation of the object is used to uniquely define it in order to save it to the cache... which we'll explore soon!).
To see which arrays are defined in the object, access the ``fieldnames`` (this is true for *all* `OutputStruct` objects):_____no_output_____
<code>
initial_conditions.fieldnames_____no_output_____
</code>
The `coeval_sliceplot` function also works on `OutputStruct` objects (as well as the `Coeval` object as we've already seen). It takes the object, and a specific field name. By default, the field it plots is the _first_ field in `fieldnames` (for any `OutputStruct`)._____no_output_____
<code>
plotting.coeval_sliceplot(initial_conditions, "hires_density");_____no_output_____
</code>
### Perturbed Field_____no_output_____After obtaining the initial conditions, we need to *perturb* the field to a given redshift (i.e. the redshift we care about). This step clearly requires the results of the previous step, which we can easily just pass in. Let's do that:_____no_output_____
<code>
perturbed_field = p21c.perturb_field(
redshift = 8.0,
init_boxes = initial_conditions
)_____no_output_____
</code>
Note that we didn't need to pass in any input parameters, because they are all contained in the `initial_conditions` object itself. The random seed is also taken from this object.
Again, the output is an `OutputStruct`, so we can view its fields:_____no_output_____
<code>
perturbed_field.fieldnames_____no_output_____
</code>
This time, it has only density and velocity (the velocity direction is chosen without loss of generality). Let's view the perturbed density field:_____no_output_____
<code>
plotting.coeval_sliceplot(perturbed_field, "density");_____no_output_____
</code>
It is clear here that the density used is the *low*-res density, but the overall structure of the field looks very similar._____no_output_____### Ionization Field_____no_output_____Next, we need to ionize the box. This is where things get a little more tricky. In the simplest case (which, let's be clear, is what we're going to do here) the ionization occurs at the *saturated limit*, which means we can safely ignore the contribution of the spin temperature. This means we can directly calculate the ionization on the density/velocity fields that we already have. A few more parameters are needed here, and so two more "input parameter dictionaries" are available, ``astro_params`` and ``flag_options``. Again, a reminder that their parameters can be viewed by using e.g. `help(p21c.AstroParams)`, or by looking at the [API docs](../reference/_autosummary/py21cmfast.inputs.html)._____no_output_____For now, let's leave everything as default (a sketch of passing non-default values follows after the neutral-fraction plot below). In that case, we can just do:_____no_output_____
<code>
ionized_field = p21c.ionize_box(
perturbed_field = perturbed_field
)2020-02-29 15:10:43,902 | INFO | Existing init_boxes found and read in (seed=54321).
</code>
That was easy! All the information required by ``ionize_box`` was given directly by the ``perturbed_field`` object. If we had _also_ passed a redshift explicitly, this redshift would be checked against that from the ``perturbed_field`` and an error raised if they were incompatible._____no_output_____Let's see the fieldnames:_____no_output_____
<code>
ionized_field.fieldnames_____no_output_____
</code>
The ``first_box`` field is actually just a flag telling the C code whether this box has been *evolved* or not. In this case it hasn't been: it's the "first box" of an evolutionary chain. Let's plot the neutral fraction:_____no_output_____
<code>
plotting.coeval_sliceplot(ionized_field, "xH_box");_____no_output_____
</code>
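As promised above, here is a sketch of how non-default astrophysics could be passed in, in the same way as ``user_params`` and ``cosmo_params``. The specific parameter names below are assumptions for illustration only; check `help(p21c.AstroParams)` and `help(p21c.FlagOptions)` for the names your installed version actually accepts:

```python
# A sketch with assumed parameter names -- verify against help(p21c.AstroParams):
ionized_field_custom = p21c.ionize_box(
    perturbed_field=perturbed_field,
    astro_params={"HII_EFF_FACTOR": 30.0},  # assumed name: ionizing efficiency
    flag_options={"INHOMO_RECO": False},    # assumed name: inhomogeneous recombinations
)
```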
### Brightness Temperature_____no_output_____Now we can use what we have to get the brightness temperature:_____no_output_____
<code>
brightness_temp = p21c.brightness_temperature(ionized_box=ionized_field, perturbed_field=perturbed_field)_____no_output_____
</code>
This has only a single field, ``brightness_temp``:_____no_output_____
<code>
plotting.coeval_sliceplot(brightness_temp);_____no_output_____
</code>
### The Problem_____no_output_____And there you have it -- you've computed each of the four steps (there's actually another, `spin_temperature`, that you require if you don't assume the saturated limit) individually.
However, some problems quickly arise. What if you want the `perturb_field`, but don't care about the initial conditions? We know how to get the full `Coeval` object in one go, but it would seem that the sub-boxes have to _each_ be computed as the input to the next.
A perhaps more interesting problem is that some quantities require *evolution*: i.e. a whole series of simulations at a sequence of redshifts must be performed in order to obtain the box at the redshift of interest. This is true when not in the saturated limit, for example. That means you'd have to manually compute each redshift in turn, and pass the result to the computation at the next redshift. While this is definitely possible, it becomes difficult to set up manually when all you care about is the box at the final redshift.
`py21cmfast` solves this by making each of the functions recursive: if `perturb_field` is not passed the `init_boxes` that it needs, it will go and compute them, based on the parameters that you've passed it. If the previous `spin_temp` box required for the current redshift is not passed -- it will be computed (and if it doesn't have a previous `spin_temp` *it* will be computed, and so on).
That's all good, but what if you now want to compute another `perturb_field`, with the same fundamental parameters (but at a different redshift)? Since you didn't ever see the `init_boxes`, they'll have to be computed all over again. That's where the automatic caching comes in, which is where we turn now..._____no_output_____## Using the Automatic Cache_____no_output_____To solve all this, ``21cmFAST`` uses an on-disk caching mechanism, where all boxes are saved in HDF5 format in a default location. The cache allows for reading in previously-calculated boxes automatically if they match the parameters that are input. The functions used at every step (in the previous section) will try to use a cached box instead of calculating a new one, unless it is explicitly asked *not* to.
Thus, we could do this:_____no_output_____
<code>
perturbed_field = p21c.perturb_field(
redshift = 8.0,
user_params = {"HII_DIM": 100, "BOX_LEN": 100},
cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");2020-02-29 15:10:45,367 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=12345).
</code>
Note that here we pass exactly the same parameters as were used in the previous section. It gives a message that the full box was found in the cache and immediately returns. However, if we change the redshift:_____no_output_____
<code>
perturbed_field = p21c.perturb_field(
redshift = 7.0,
user_params = {"HII_DIM": 100, "BOX_LEN": 100},
cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");2020-02-29 15:10:45,748 | INFO | Existing init_boxes found and read in (seed=12345).
</code>
Now it finds the initial conditions, but it must compute the perturbed field at the new redshift. If we had changed the initial parameters as well, it would have to calculate everything:_____no_output_____
<code>
perturbed_field = p21c.perturb_field(
redshift = 8.0,
user_params = {"HII_DIM": 50, "BOX_LEN": 100},
cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");_____no_output_____
</code>
This shows that we don't need to perform the *previous* step to do any of the steps, they will be calculated automatically.
Now, let's get an ionized box, but this time we won't assume the saturated limit, so we need to use the spin temperature. We can do this directly in the ``ionize_box`` function, but let's do it explicitly. We will use the auto-generation of the initial conditions and perturbed field. However, the spin temperature is an *evolved* field, i.e. to compute the field at $z$, we need to know the field at $z+\Delta z$. This continues up to some redshift, labelled ``z_heat_max``, above which the spin temperature can be defined directly from the perturbed field.
Thus, one option is to pass to the function a *previous* spin temperature box, to evolve to *this* redshift. However, we don't have a previous spin temperature box yet. Of course, the function itself will go and calculate that box if it's not given (or read it from cache if it's been calculated before!). When it tries to do that, it will go to the one before, and so on until it reaches ``z_heat_max``, at which point it will calculate it directly.
To facilitate this recursive progression up the redshift ladder, there is a parameter, ``zprime_step_factor``, which is a multiplicative factor that determines the previous redshift at each step.
We can also pass the dependent boxes explicitly, which provides the parameters necessary.
**WARNING: THIS IS THE MOST TIME-CONSUMING STEP OF THE CALCULATION!**_____no_output_____
<code>
spin_temp = p21c.spin_temperature(
perturbed_field = perturbed_field,
zprime_step_factor=1.05,
)2020-02-29 15:11:38,347 | INFO | Existing init_boxes found and read in (seed=521414794440).
plotting.coeval_sliceplot(spin_temp, "Ts_box");_____no_output_____
</code>
Let's note here that each of the functions accepts a few of the same arguments that modify how the boxes are cached. There is a ``write`` argument which, if set to ``False``, will disable writing that box to cache (and it is passed through the recursive hierarchy). There is also ``regenerate`` which, if ``True``, forces this box and all its predecessors to be re-calculated even if they exist in the cache. Then there is ``direc``, which we have seen before.
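Putting these together, a minimal sketch of the cache-control arguments described above (the cache directory path is purely hypothetical):

```python
# A sketch: control caching behaviour explicitly, using the arguments described above.
pf = p21c.perturb_field(
    redshift=8.0,
    init_boxes=initial_conditions,
    write=False,                # don't write this box (or its dependencies) to the cache
    regenerate=True,            # recompute even if a matching cached box exists
    direc="/path/to/my/cache",  # hypothetical custom cache directory
)
```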
Finally note that by default, ``random_seed`` is set to ``None``. If this is the case, then any cached dataset matching all other parameters will be read in, and the ``random_seed`` will be set based on the file read in. If it is set to an integer number, then the cached dataset must also match the seed. If it is ``None``, and no matching dataset is found, a random seed will be autogenerated._____no_output_____Now if we calculate the ionized box, ensuring that it uses the spin temperature, then it will also need to be evolved. However, due to the fact that we cached each of the spin temperature steps, these should be read in accordingly:_____no_output_____
<code>
ionized_box = p21c.ionize_box(
spin_temp = spin_temp,
zprime_step_factor=1.05,
)2020-02-29 15:12:55,794 | INFO | Existing init_boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,814 | INFO | Existing z=34.2811622461279 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,827 | INFO | Existing z=34.2811622461279 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,865 | INFO | Existing z=32.60110690107419 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,880 | INFO | Existing z=32.60110690107419 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,906 | INFO | Existing z=31.00105419149923 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,919 | INFO | Existing z=31.00105419149923 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,948 | INFO | Existing z=29.4771944680945 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,963 | INFO | Existing z=29.4771944680945 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:55,991 | INFO | Existing z=28.02589949342333 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,005 | INFO | Existing z=28.02589949342333 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,033 | INFO | Existing z=26.643713803260315 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,051 | INFO | Existing z=26.643713803260315 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,079 | INFO | Existing z=25.32734647929554 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,094 | INFO | Existing z=25.32734647929554 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,127 | INFO | Existing z=24.073663313614798 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,141 | INFO | Existing z=24.073663313614798 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,168 | INFO | Existing z=22.879679346299806 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,182 | INFO | Existing z=22.879679346299806 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,205 | INFO | Existing z=21.742551758380767 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,219 | INFO | Existing z=21.742551758380767 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,403 | INFO | Existing z=20.659573103219778 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,418 | INFO | Existing z=20.659573103219778 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,620 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,635 | INFO | Existing z=19.62816486020931 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,784 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,793 | INFO | Existing z=18.645871295437438 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,931 | INFO | Existing z=17.71035361470232 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:56,941 | INFO | Existing z=17.71035361470232 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,085 | INFO | Existing z=16.81938439495459 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,095 | INFO | Existing z=16.81938439495459 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,243 | INFO | Existing z=15.970842280909132 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,254 | INFO | Existing z=15.970842280909132 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,399 | INFO | Existing z=15.162706934199171 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,408 | INFO | Existing z=15.162706934199171 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,544 | INFO | Existing z=14.393054223046828 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,554 | INFO | Existing z=14.393054223046828 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,691 | INFO | Existing z=13.66005164099698 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,700 | INFO | Existing z=13.66005164099698 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,832 | INFO | Existing z=12.961953943806646 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,840 | INFO | Existing z=12.961953943806646 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,970 | INFO | Existing z=12.297098994101567 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:57,978 | INFO | Existing z=12.297098994101567 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,106 | INFO | Existing z=11.663903803906255 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,114 | INFO | Existing z=11.663903803906255 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,244 | INFO | Existing z=11.060860765625003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,254 | INFO | Existing z=11.060860765625003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,394 | INFO | Existing z=10.486534062500002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,402 | INFO | Existing z=10.486534062500002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,529 | INFO | Existing z=9.939556250000003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,538 | INFO | Existing z=9.939556250000003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,674 | INFO | Existing z=9.418625000000002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,682 | INFO | Existing z=9.418625000000002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,810 | INFO | Existing z=8.922500000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,819 | INFO | Existing z=8.922500000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,947 | INFO | Existing z=8.450000000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:12:58,956 | INFO | Existing z=8.450000000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:12:59,086 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=521414794440).
plotting.coeval_sliceplot(ionized_box, "xH_box");_____no_output_____
</code>
Great! So again, we can just get the brightness temp:_____no_output_____
<code>
brightness_temp = p21c.brightness_temperature(
ionized_box = ionized_box,
perturbed_field = perturbed_field,
spin_temp = spin_temp
)_____no_output_____
</code>
Now lets plot our brightness temperature, which has been evolved from high redshift with spin temperature fluctuations:_____no_output_____
<code>
plotting.coeval_sliceplot(brightness_temp);_____no_output_____
</code>
We can also check what the result would have been if we had limited the maximum redshift of heating. Note that this *recalculates* all previous spin temperature and ionized boxes, because they depend on both ``z_heat_max`` and ``zprime_step_factor``._____no_output_____
<code>
ionized_box = p21c.ionize_box(
spin_temp = spin_temp,
zprime_step_factor=1.05,
z_heat_max = 20.0
)
brightness_temp = p21c.brightness_temperature(
ionized_box = ionized_box,
perturbed_field = perturbed_field,
spin_temp = spin_temp
)
plotting.coeval_sliceplot(brightness_temp);2020-02-29 15:13:08,824 | INFO | Existing init_boxes found and read in (seed=521414794440).
2020-02-29 15:13:08,840 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:11,438 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:11,447 | INFO | Existing z=19.62816486020931 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:14,041 | INFO | Existing z=17.71035361470232 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:14,050 | INFO | Existing z=18.645871295437438 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:16,667 | INFO | Existing z=16.81938439495459 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:16,675 | INFO | Existing z=17.71035361470232 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:19,213 | INFO | Existing z=15.970842280909132 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:19,222 | INFO | Existing z=16.81938439495459 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:21,756 | INFO | Existing z=15.162706934199171 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:21,764 | INFO | Existing z=15.970842280909132 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:24,409 | INFO | Existing z=14.393054223046828 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:24,417 | INFO | Existing z=15.162706934199171 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:26,938 | INFO | Existing z=13.66005164099698 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:26,947 | INFO | Existing z=14.393054223046828 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:29,504 | INFO | Existing z=12.961953943806646 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:29,517 | INFO | Existing z=13.66005164099698 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:32,163 | INFO | Existing z=12.297098994101567 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:32,171 | INFO | Existing z=12.961953943806646 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:34,704 | INFO | Existing z=11.663903803906255 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:34,712 | INFO | Existing z=12.297098994101567 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:37,257 | INFO | Existing z=11.060860765625003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:37,266 | INFO | Existing z=11.663903803906255 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:39,809 | INFO | Existing z=10.486534062500002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:39,817 | INFO | Existing z=11.060860765625003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:42,378 | INFO | Existing z=9.939556250000003 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:42,387 | INFO | Existing z=10.486534062500002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:44,941 | INFO | Existing z=9.418625000000002 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:44,950 | INFO | Existing z=9.939556250000003 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:47,518 | INFO | Existing z=8.922500000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:47,528 | INFO | Existing z=9.418625000000002 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:50,077 | INFO | Existing z=8.450000000000001 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:50,086 | INFO | Existing z=8.922500000000001 spin_temp boxes found and read in (seed=521414794440).
2020-02-29 15:13:52,626 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=521414794440).
2020-02-29 15:13:52,762 | INFO | Existing brightness_temp box found and read in (seed=521414794440).
</code>
As we can see, it's very similar!_____no_output_____
| {
"repository": "daviesje/21cmFAST",
"path": "docs/tutorials/coeval_cubes.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 28,
"size": 758159,
"hexsha": "d01e0bcc5c2930017cf0606210fb5aef017b0313",
"max_line_length": 173848,
"avg_line_length": 478.0321563682,
"alphanum_fraction": 0.9417417718
} |
# Notebook from rhaas80/nrpytutorial
Path: Tutorial-GRHD_Equations-Cartesian.ipynb
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-59152712-8');
</script>
# Equations of General Relativistic Hydrodynamics (GRHD)
## Authors: Zach Etienne & Patrick Nelson
## This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`
**Notebook Status:** <font color='orange'><b> Self-Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**
## Introduction
We write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf)):
\begin{eqnarray}
\ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\
\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\
\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},
\end{eqnarray}
where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:
$$
T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},
$$
the $s$ source term is given in terms of ADM quantities via
$$
s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}
- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],
$$
and
\begin{align}
v^j &= \frac{u^j}{u^0} \\
\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\
h &= 1 + \epsilon + \frac{P}{\rho_0}.
\end{align}
Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations.
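For reference, the "standard equations" assumed here are the usual ADM decomposition of the 4-metric and its inverse (the lower-index form is written out again in Step 5.b.i below):
$$
g_{\mu\nu} = \begin{pmatrix}
-\alpha^2 + \beta^k \beta_k & \beta_i \\
\beta_j & \gamma_{ij}
\end{pmatrix},
\qquad
g^{\mu\nu} = \begin{pmatrix}
-1/\alpha^2 & \beta^i/\alpha^2 \\
\beta^j/\alpha^2 & \gamma^{ij} - \beta^i \beta^j/\alpha^2
\end{pmatrix}.
$$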
Thus the full set of input variables include:
* Spacetime quantities:
* ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$
* Hydrodynamical quantities:
* Rest-mass density $\rho_0$
* Pressure $P$
* Internal energy $\epsilon$
* 4-velocity $u^\mu$
For completeness, the rest of the conservative variables are given by
\begin{align}
\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\
\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i
\end{align}
### A Note on Notation
As is standard in NRPy+,
* Greek indices refer to four-dimensional quantities, where the zeroth component indicates the temporal (time) component.
* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.
For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:
```python
T4EMUU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
# Term 1: b^2 u^{\mu} u^{\nu}
T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]
```
When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:
```python
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j] * betaU[j]
```
As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:
```python
# \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2
```_____no_output_____<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows
1. [Step 1](#importmodules): Import needed NRPy+ & Python modules
1. [Step 2](#stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$:
* **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**:
1. [Step 3](#primtoconserv): Writing the conservative variables in terms of the primitive variables:
* **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**
1. [Step 4](#grhdfluxes): Define the fluxes for the GRHD equations
1. [Step 4.a](#rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations:
* **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**:
1. [Step 4.b](#taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations:
* **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**
1. [Step 5](#grhdsourceterms): Define source terms on RHSs of GRHD equations
1. [Step 5.a](#ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation:
* **compute_s_source_term()**
1. [Step 5.b](#stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation
1. [Step 5.b.i](#fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives:
* **compute_g4DD_zerotimederiv_dD()**
1. [Step 5.b.ii](#stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$:
* **compute_S_tilde_source_termD()**
1. [Step 6](#convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson):
* **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**
1. [Step 7](#declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations
1. [Step 8](#code_validation): Code Validation against `GRHD.equations` NRPy+ module
1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file_____no_output_____<a id='importmodules'></a>
# Step 1: Import needed NRPy+ & Python modules \[Back to [top](#toc)\]
$$\label{importmodules}$$_____no_output_____
<code>
# Step 1: Import needed core NRPy+ modules
from outputC import nrpyAbs # NRPy+: Core C code output module
import NRPy_param_funcs as par # NRPy+: parameter interface
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support_____no_output_____
</code>
<a id='stressenergy'></a>
# Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](#toc)\]
$$\label{stressenergy}$$
Recall from above that
$$
T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},
$$
where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also
$$
T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}
$$_____no_output_____
<code>
# Step 2.a: First define h, the enthalpy:
def compute_enthalpy(rho_b,P,epsilon):
global h
h = 1 + epsilon + P/rho_b
# Step 2.b: Define T^{mu nu} (a 4-dimensional tensor)
def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U):
global T4UU
compute_enthalpy(rho_b,P,epsilon)
# Then define g^{mu nu} in terms of the ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
# Finally compute T^{mu nu}
T4UU = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu]
# Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor)
def compute_T4UD(gammaDD,betaU,alpha, T4UU):
global T4UD
# Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux.
# First we'll need g_{alpha nu} in terms of ADM quantities:
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
T4UD = ixp.zerorank2(DIM=4)
for mu in range(4):
for nu in range(4):
for delta in range(4):
T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]_____no_output_____
</code>
<a id='primtoconserv'></a>
# Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](#toc)\]
$$\label{primtoconserv}$$
Recall from above that the conservative variables may be written as
\begin{align}
\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\
\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\
\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i
\end{align}
$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`sqrtgammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric._____no_output_____
<code>
# Step 3: Writing the conservative variables in terms of the primitive variables
def compute_sqrtgammaDET(gammaDD):
global sqrtgammaDET
gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U):
global rho_star
# Compute rho_star:
rho_star = alpha*sqrtgammaDET*rho_b*u4U[0]
def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star):
global tau_tilde
tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star
def compute_S_tildeD(alpha, sqrtgammaDET, T4UD):
global S_tildeD
S_tildeD = ixp.zerorank1(DIM=3)
for i in range(3):
S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]_____no_output_____
</code>
<a id='grhdfluxes'></a>
# Step 4: Define the fluxes for the GRHD equations \[Back to [top](#toc)\]
$$\label{grhdfluxes}$$
<a id='rhostarfluxterm'></a>
## Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](#toc)\]
$$\label{rhostarfluxterm}$$
Recall from above that
\begin{align}
\partial_t \rho_* + \partial_j \left(\rho_* v^j\right) &= 0.
\end{align}
Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:_____no_output_____
<code>
# Step 4: Define the fluxes for the GRHD equations
# Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U
def compute_vU_from_u4U__no_speed_limit(u4U):
global vU
# Now compute v^i = u^i/u^0:
vU = ixp.zerorank1(DIM=3)
for j in range(3):
vU[j] = u4U[j+1]/u4U[0]
# Step 4.b: rho_star flux
def compute_rho_star_fluxU(vU, rho_star):
global rho_star_fluxU
rho_star_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
rho_star_fluxU[j] = rho_star*vU[j]_____no_output_____
</code>
<a id='taustildesourceterms'></a>
## Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](#toc)\]
$$\label{taustildesourceterms}$$
Recall from above that
\begin{align}
\partial_t \tilde{\tau} + \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\
\partial_t \tilde{S}_i + \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.
\end{align}
Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions):_____no_output_____
<code>
# Step 4.c: tau_tilde flux
def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star):
global tau_tilde_fluxU
tau_tilde_fluxU = ixp.zerorank1(DIM=3)
for j in range(3):
tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j]
# Step 4.d: S_tilde flux
def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD):
global S_tilde_fluxUD
S_tilde_fluxUD = ixp.zerorank2(DIM=3)
for j in range(3):
for i in range(3):
S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1]_____no_output_____
</code>
<a id='grhdsourceterms'></a>
# Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](#toc)\]
$$\label{grhdsourceterms}$$
<a id='ssourceterm'></a>
## Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](#toc)\]
$$\label{ssourceterm}$$
Recall again from above that the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via
$$
s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}}
\underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right],
$$_____no_output_____
<code>
def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU):
global s_source_term
s_source_term = sp.sympify(0)
# Term 1:
for i in range(3):
for j in range(3):
s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j]
# Term 2:
for i in range(3):
s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i]
# Term 3:
s_source_term *= alpha*sqrtgammaDET_____no_output_____
</code>
<a id='stildeisourceterm'></a>
## Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](#toc)\]
$$\label{stildeisourceterm}$$
Recall from above
$$
\partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.
$$
Our goal here will be to compute
$$
\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}.
$$
<a id='fourmetricderivs'></a>
### Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](#toc)\]
$$\label{fourmetricderivs}$$
To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables.
We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via
$$
g_{\mu\nu} = \begin{pmatrix}
-\alpha^2 + \beta^k \beta_k & \beta_i \\
\beta_j & \gamma_{ij}
\end{pmatrix}.
$$
Thus
$$
g_{\mu\nu,k} = \begin{pmatrix}
-2 \alpha\alpha_{,k} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\
\beta_{j,k} & \gamma_{ij,k}
\end{pmatrix},
$$
where $\beta_{i} = \gamma_{ij} \beta^j$, so
$$
\beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k}
$$_____no_output_____
<code>
def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD):
global g4DD_zerotimederiv_dD
# Eq. 2.121 in B&S
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
for j in range(3):
betaD[i] += gammaDD[i][j]*betaU[j]
betaDdD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S)
betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k]
# Eq. 2.122 in B&S
g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4)
for k in range(3):
# Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j]
g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k]
for j in range(3):
g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k]
for i in range(3):
for k in range(3):
# Recall that g4DD[i][0] = g4DD[0][i] = betaD[i]
g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k]
for i in range(3):
for j in range(3):
for k in range(3):
# Recall that g4DD[i][j] = gammaDD[i][j]
g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k]_____no_output_____
</code>
<a id='stildeisource'></a>
### Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](#toc)\]
$$\label{stildeisource}$$
Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed._____no_output_____
<code>
# Step 5.b.ii: Compute S_tilde source term
def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU):
global S_tilde_source_termD
S_tilde_source_termD = ixp.zerorank1(DIM=3)
for i in range(3):
for mu in range(4):
for nu in range(4):
S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1]_____no_output_____
</code>
<a id='convertvtou'></a>
# Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](#toc)\]
$$\label{convertvtou}$$
According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via
\begin{align}
\alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\
\implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right)
\end{align}
Defining $v^i = \frac{u^i}{u^0}$, we get
$$v^i = \alpha v^i_{(n)} - \beta^i,$$
and in terms of this variable we get
\begin{align}
g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\
\implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\
&= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\
&= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\
&= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\
&= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}
\end{align}
Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$:
\begin{align}
u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\
\implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\
\implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\
&= 1 - \frac{1}{\Gamma^2}
\end{align}
In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$.
Then our algorithm for computing $u^0$ is as follows:
If
$$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$
then adjust the 3-velocity $v^i$ as follows:
$$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$
After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$.
Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via
$$
u^0 = \frac{1}{\alpha \sqrt{1-R}},
$$
and the remaining components $u^i$ via
$$
u^i = u^0 v^i.
$$
In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows:
1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$.
1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$
1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step.
1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$.
1. Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$.
While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us.
Define $R^*$ as
$$
R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right).
$$
If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get:
$$
R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max}
$$
If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get:
$$
R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R
$$
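As a quick standalone sanity check of this min-trick, here is a small sketch (using SymPy rationals so the comparison is exact; it is independent of the NRPy+ code below):

```python
import sympy as sp

# Verify Rstar = (Rmax + R - |Rmax - R|)/2 equals min(R, Rmax) for a few sample values.
Rmax = sp.Rational(99, 100)
for R in (sp.Rational(1, 2), sp.Rational(99, 100), sp.Rational(17, 10)):
    Rstar = sp.Rational(1, 2)*(Rmax + R - sp.Abs(Rmax - R))
    assert Rstar == sp.Min(R, Rmax)
    print(R, "->", Rstar)
```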
Then we can rescale *all* $v^i_{(n)}$ via
$$
v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}},
$$
though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`:
$$
v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}.
$$
Finally, $u^0$ can be immediately and safely computed, via:
$$
u^0 = \frac{1}{\alpha \sqrt{1-R^*}},
$$
and $u^i$ via
$$
u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right).
$$_____no_output_____
<code>
# Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter
# Speed-limited ValenciavU is output to rescaledValenciavU global.
def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU):
# Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU
# Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU
# R = gamma_{ij} v^i v^j
R = sp.sympify(0)
for i in range(3):
for j in range(3):
R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j]
thismodule = "GRHD"
# The default value isn't terribly important here, since we can overwrite in the main C code
GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on
# IllinoisGRMHD.
# GiRaFFE default = 2000.0
Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT)
# Now, we set Rstar = min(Rmax,R):
# If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R
# If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax
Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R))
# We add TINYDOUBLE to R below to avoid a 0/0, which occurs when
# ValenciavU == 0 for all Valencia 3-velocity components.
# "Those tiny *doubles* make me warm all over
# with a feeling that I'm gonna love you till the end of time."
# - Adapted from Connie Francis' "Tiny Bubbles"
TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100)
# The rescaled (speed-limited) Valencia 3-velocity
# is given by, v_{(n)}^i = sqrt{Rstar/R} v^i
global rescaledValenciavU
rescaledValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
# If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0
# If your velocities are of order 1e-100 and this is physically
# meaningful, there must be something wrong with your unit conversion.
rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE))
# Finally compute u^mu in terms of Valenciav^i
# u^0 = 1/(alpha*sqrt(1-R^*))
global u4U_ito_ValenciavU
u4U_ito_ValenciavU = ixp.zerorank1(DIM=4)
u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar))
# u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity
for i in range(3):
u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i])
# Step 6.b: Convert v^i into u^\mu, and apply a speed limiter.
# Speed-limited vU is output to rescaledvU global.
def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU):
ValenciavU = ixp.zerorank1(DIM=3)
for i in range(3):
ValenciavU[i] = (vU[i] + betaU[i])/alpha
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU)
# Since ValenciavU is written in terms of vU,
# u4U_ito_ValenciavU is actually u4U_ito_vU
global u4U_ito_vU
u4U_ito_vU = ixp.zerorank1(DIM=4)
for mu in range(4):
u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu]
# Finally compute the rescaled (speed-limited) vU
global rescaledvU
rescaledvU = ixp.zerorank1(DIM=3)
for i in range(3):
rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i]_____no_output_____
</code>
<a id='declarevarsconstructgrhdeqs'></a>
# Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](#toc)\]
$$\label{declarevarsconstructgrhdeqs}$$_____no_output_____
<code>
# First define hydrodynamical quantities
u4U = ixp.declarerank1("u4U", DIM=4)
rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True)
# Then ADM quantities
gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3)
KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3)
betaU = ixp.declarerank1("betaU", DIM=3)
alpha = sp.symbols('alpha', real=True)
# First compute stress-energy tensor T4UU and T4UD:
compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
compute_T4UD(gammaDD,betaU,alpha, T4UU)
# Next sqrt(gamma)
compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U)
compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star)
compute_S_tildeD( alpha, sqrtgammaDET, T4UD)
# Then compute v^i from u^mu
compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
compute_rho_star_fluxU( vU, rho_star)
compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star)
compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Then compute source terms on tau_tilde and S_tilde equations
compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU)
compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU)
# Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i]
testValenciavU = ixp.declarerank1("testValenciavU",DIM=3)
u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU)
# Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0
testvU = ixp.declarerank1("testvU",DIM=3)
u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU)_____no_output_____
</code>
<a id='code_validation'></a>
# Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in
1. this tutorial versus
2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module._____no_output_____
<code>
import GRHD.equations as Ge
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then compute v^i from u^mu
Ge.compute_vU_from_u4U__no_speed_limit(u4U)
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star)
Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Then declare derivatives & compute g4DD_zerotimederiv_dD
# gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3)
# betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3)
# alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3)
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Finally compute source terms on tau_tilde and S_tilde equations
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU)
GetestValenciavU = ixp.declarerank1("testValenciavU")
Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU)
GetestvU = ixp.declarerank1("testvU")
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU)_____no_output_____all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Ge."):
    global all_passed  # needed so that a failed comparison actually flips the module-level flag
if str(expr1-expr2)!="0":
print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"])
exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term])
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term])
for mu in range(4):
namecheck_list.extend([gfnm("u4U_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)])
exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]])
expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]])
for nu in range(4):
namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)])
exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]])
expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]])
for delta in range(4):
namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)])
exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]])
expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]])
for i in range(3):
namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i),
gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i),
gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)])
exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i],
Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i],
Ge.rescaledValenciavU[i],Ge.rescaledvU[i]])
expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i],
tau_tilde_fluxU[i],S_tilde_source_termD[i],
rescaledValenciavU[i],rescaledvU[i]])
for j in range(3):
namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)])
exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]])
expr_list.extend([S_tilde_fluxUD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
import sys
if all_passed:
print("ALL TESTS PASSED!")
else:
print("ERROR: AT LEAST ONE TEST DID NOT PASS")
sys.exit(1)ALL TESTS PASSED!
</code>
<a id='latex_pdf_output'></a>
# Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)_____no_output_____
<code>
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian")Created Tutorial-GRHD_Equations-Cartesian.tex, and compiled LaTeX file to
PDF file Tutorial-GRHD_Equations-Cartesian.pdf
</code>
| {
"repository": "rhaas80/nrpytutorial",
"path": "Tutorial-GRHD_Equations-Cartesian.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 46430,
"hexsha": "d01e7aa39a28d592a321f4077c48786e3ccf3e7f",
"max_line_length": 425,
"avg_line_length": 44.3881453155,
"alphanum_fraction": 0.5645703209
} |
# Notebook from DavidLeoni/iep
Path: jupman-tests.ipynb
<code>
import jupman;
jupman.init()_____no_output_____
</code>
# Jupman Tests
Tests and corner cases.
The page Title has one sharp, the Sections always have two sharps.
## Section 1
bla bla
## Section 2
Subsections always have three sharps
### Subsection 1
bla bla
### Subsection 2
bla bla
_____no_output_____## Quotes_____no_output_____> I'm quoted with **greater than** symbol
> on multiple lines
> Am I readable?_____no_output_____ I'm quoted with **spaces**
on multiple lines
Am I readable?_____no_output_____## Download links
Files manually put in `_static` :
* Download [trial.odt](_static/trial.odt)
* Download [trial.pdf](_static/trial.pdf)
Files in arbitrary folder position :
* Download [requirements.txt](requirements.txt)
NOTE: download links are messy, [see issue 8](https://github.com/DavidLeoni/jupman/issues/8)
_____no_output_____## Info/Warning Boxes
Until there is an info/warning extension for Markdown/CommonMark (see this issue), such boxes can be created by using HTML <div> elements like this:_____no_output_____<div class="alert alert-info">
**Note:** This is an info!
</div>_____no_output_____<div class="alert alert-warning">
**Note:** This is a warn!
</div>_____no_output_____For this to work reliably, you should obey the following guidelines:
* The class attribute has to be either "alert alert-info" or "alert alert-warning", other values will not be converted correctly.
* No further attributes are allowed.
* For compatibility with CommonMark, you should add an empty line between the <div> start tag and the beginning of the content.
_____no_output_____
## Math
For math stuff, [see npshpinx docs](https://nbsphinx.readthedocs.io/en/0.2.14/markdown-cells.html#Equations)
Here we put just some equation to show it behaves fine in Jupman
This is infinity: $\infty$ _____no_output_____## Unicode
Unicode characters should display in HTML, but with LaTeX you might have problems, and may need to manually map characters in conf.py
You should see a star in a black circle: ✪
You should see a check: ✓
table characters: │ ├ └ ─_____no_output_____## Image
### SVG Images
SVG images work in notebook, but here it is commented since it breaks Latex, [see issue](https://github.com/DavidLeoni/jupman/issues/1)
```

```
This one also doesn't work (and shows ugly code in the notebook anyway)
```
from IPython.display import SVG
SVG(filename='img/cc-by.svg')
```
_____no_output_____### PNG Images
_____no_output_____### Inline images - pure markdown_____no_output_____ Bla  bli blo_____no_output_____Bla  bli blo_____no_output_____### Inline images - markdown and img_____no_output_____ bla <img style="display:inline" src="_static/img/notebook_icon.png"> bli blo_____no_output_____
bla <img style="display:inline !important" src="_static/img/notebook_icon.png"> bli blo_____no_output_____### Img class
If we pass a class, it will to be present in the website:
<img class="jupman-inline-img" src="_static/img/notebook_icon.png">
This <img class="jupman-inline-img" src="_static/img/notebook_icon.png"> should be inline_____no_output_____## Expressions list
Highlighting **does** work both in Jupyter and Sphinx
Three quotes, multiple lines - Careful: put **exactly 4 spaces** indentation
1. ```python
[2,3,1] != "[2,3,1]"
```
1. ```python
[4,8,12] == [2*2,"4*2",6*2]
```
1. ```python
[][:] == []
```_____no_output_____Three quotes, multiple lines, more compact - works in Jupyter, **doesn't** in Sphinx
1. ```python
[2,3,1] != "[2,3,1]"```
1. ```python
[4,8,12] == [2*2,"4*2",6*2]```
1. ```python
[][:] == []```_____no_output_____Highlighting **doesn't** work either in Jupyter or in Sphinx:
Three quotes, single line
1. ```python [2,3,1] != ["2",3,1]```
1. ```python [4,8,12] == [2*2,"4*2",6*2]```
1. ```python [][:] == "[]"```
Single quote, single line
1. `python [2,3,1] != ["2",3,1]`
1. `python [4,8,12] == [2*2,"4*2",6*2]`
1. `python [][:] == "[]"`
_____no_output_____## Togglable cells
There are various ways to have togglable cells.
### Show/hide exercises (PREFERRED)
If you need clickable show/hide buttons for exercise solutions, see here: [Usage - Exercise types](https://jupman.softpython.org/en/latest/usage.html#Type-of-exercises). It comprehensively manages the use cases for display on the website, student zips, exams, etc.
If you have other needs, we report here some tests we made, but keep in mind this sort of hack tends to change behaviour with different versions of Jupyter.
### Toggling with Javascript
* Works in MarkDown
* Works while in Jupyter
* Works in HTML
* Does not show in LaTeX (which might be a good thing, if you intend to somehow put the solutions at the end of the document)
* NOTE: after creating the text, to see the results you have to run the initial cell with jupman.init (as for the toc)
* NOTE: you can't use Markdown code blocks since, as of Sept 2017, they don't show well in HTML output
<div class="jupman-togglable">
<code>
<pre>
# SOME CODE
color = raw_input("What's your eyes' color?")
if color == "":
sys.exit()
</pre>
</code>
</div>
<div class="jupman-togglable"
data-jupman-show="Customized show msg"
data-jupman-hide="Customized hide msg">
<code>
<pre>
# SOME OTHER CODE
how_old = raw_input("How old are you?")
x = random.randint(1,8)
if question == "":
sys.exit()
</pre>
</code>
</div>_____no_output_____### HTML details in Markdown, code tag
* Works while in Jupyter
* Doesn't work in HTML output
* as of Sept Oct 2017, not yet supported in Microsoft browsers
<details>
<summary>Click here to see the code</summary>
<code>
question = raw_input("What?")
answers = random.randint(1,8)
if question == "":
sys.exit()
</code>
</details>
_____no_output_____### HTML details in Markdown, Markdown mixed code
* Works while in Jupyter
* Doesn't work in HTML output
* as of Sept Oct 2017, not yet supported in Microsoft browsers
<details>
<summary>Click here to see the code</summary>
```python
question = raw_input("What?")
answers = random.randint(1,8)
if question == "":
sys.exit()
```
</details>
_____no_output_____### HTML details in HTML, raw NBConvert Format
* Doesn't work in Jupyter
* Works in HTML output
* NOTE: as of Sept Oct 2017, not yet supported in Microsoft browsers
* Doesn't show at all in PDF output
_____no_output_____
Some other Markdown cell afterwards ...._____no_output_____## Files in templates
Since Dec 2019 they are not accessible [see issue 10](https://github.com/DavidLeoni/jupman/issues/10), but it is not a great problem, you can always put a link to Github, see for example [exam-yyyy-mm-dd.ipynb](https://github.com/DavidLeoni/jupman/tree/master/_templates/exam/exam-yyyy-mm-dd.ipynb)_____no_output_____## Python tutor
There are various ways to embed Python tutor, first we put the recommended one._____no_output_____### jupman.pytut_____no_output_____**RECOMMENDED**: You can put a call to `jupman.pytut()` at the end of a cell, and the cell code will magically appear in python tutor in the output (except the call to `pytut()` of course).
Does not need internet connection._____no_output_____
<code>
x = [5,8,4,10,30,20,40,50,60,70,20,30]
y= {3:9}
z = [x]
jupman.pytut()_____no_output_____
</code>
**jupman.pytut scope**: BEWARE of variables which were initialized in previous cells, they WILL NOT be available in Python Tutor:_____no_output_____
<code>
w = 8_____no_output_____x = w + 5
jupman.pytut()Traceback (most recent call last):
File "/home/da/Da/prj/jupman/prj/jupman.py", line 2305, in _runscript
self.run(script_str, user_globals, user_globals)
File "/usr/lib/python3.5/bdb.py", line 431, in run
exec(cmd, globals, locals)
File "<string>", line 2, in <module>
NameError: name 'w' is not defined
</code>
**jupman.pytut window overflow**: When too much right space is taken, it might be difficult to scroll:
_____no_output_____
<code>
x = [3,2,5,2,42,34,2,4,34,2,3,4,23,4,23,4,2,34,23,4,23,4,23,4,234,34,23,4,23,4,23,4,2]
jupman.pytut()_____no_output_____x = w + 5
jupman.pytut()Traceback (most recent call last):
File "/home/da/Da/prj/jupman/prj/jupman.py", line 2305, in _runscript
self.run(script_str, user_globals, user_globals)
File "/usr/lib/python3.5/bdb.py", line 431, in run
exec(cmd, globals, locals)
File "<string>", line 2, in <module>
NameError: name 'w' is not defined
</code>
**jupman.pytut execution:** Some cells might execute in Jupyter but not so well in Python Tutor, due to [its inherent limitations](https://github.com/pgbovine/OnlinePythonTutor/blob/master/unsupported-features.md):_____no_output_____
<code>
x = 0
for i in range(10000):
x += 1
print(x)
jupman.pytut()10000
</code>
**jupman.pytut infinite loops**: Since execution occurs first in Jupyter and then in Python tutor, if you have an infinite loop no Python Tutor instance will be spawned:
```python
while True:
pass
jupman.pytut()
```_____no_output_____**jupman.pytut() resizability:** long vertical and horizontal expansion should work:_____no_output_____
<code>
x = {0:'a'}
for i in range(1,30):
x[i] = x[i-1]+str(i*10000)
jupman.pytut()_____no_output_____
</code>
**jupman.pytut cross arrows**: With multiple visualizations, arrows shouldn't cross from one to the other even if underlying script is loaded multiple times (relates to visualizerIdOverride)_____no_output_____
<code>
x = [1,2,3]
jupman.pytut()_____no_output_____
</code>
**jupman.pytut print output**: With only one line of print, Print output panel shouldn't be too short:_____no_output_____
<code>
print("hello")
jupman.pytut()hello
y = [1,2,3,4]
jupman.pytut()_____no_output_____
</code>
### HTML magics
Another option is to directly paste the Python Tutor iframe into the cells and use the Jupyter `%%HTML` magic command.
HTML should be available both in the notebook and on the website - of course, it requires an internet connection.
Beware: you need the HTTP**S** !_____no_output_____
<code>
%%HTML
<iframe width="800" height="300" frameborder="0"
src="https://pythontutor.com/iframe-embed.html#code=x+%3D+5%0Ay+%3D+10%0Az+%3D+x+%2B+y&cumulative=false&py=2&curInstr=3">
</iframe>_____no_output_____
</code>
### NBTutor_____no_output_____To show Python Tutor in notebooks, there is already a Jupyter extension called [NBTutor](https://github.com/lgpage/nbtutor); once installed, you can use the `%%nbtutor` magic to show the interpreter.
Unfortunately, it doesn't show in the generated HTML :-/
_____no_output_____
<code>
%reload_ext nbtutor_____no_output_____%%nbtutor
for x in range(1,4):
print("ciao")
x=5
y=7
x +y
ciao
ciao
ciao
</code>
## Stripping answers
For stripping answers examples, see [jupyter-example/jupyter-example-sol](jupyter-example/jupyter-example-sol.ipynb). For explanation, see [usage](usage.ipynb#Tags-to-strip)_____no_output_____## Metadata to HTML classes
_____no_output_____## Formatting problems_____no_output_____### Characters per line
The Python standard for code limits lines to 79 characters; many styles use 80 (see [Wikipedia](https://en.wikipedia.org/wiki/Characters_per_line))
We can keep 80:
```
--------------------------------------------------------------------------------
```
```python
--------------------------------------------------------------------------------
```
Errors hold 75 dashes:
Plain:
```
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-15-9e1622b385b6> in <module>()
----> 1 1/0
ZeroDivisionError: division by zero
```
As Python markup:
```python
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-15-9e1622b385b6> in <module>()
----> 1 1/0
ZeroDivisionError: division by zero
```_____no_output_____
<code>
len('---------------------------------------------------------------------------')_____no_output_____
</code>
On the website this **may** display a scroll bar, because it will actually print the `'` quotes plus the dashes_____no_output_____
<code>
'-'*80_____no_output_____
</code>
This should **not** display a scrollbar:_____no_output_____
<code>
'-'*78_____no_output_____
</code>
This should **not** display a scrollbar:_____no_output_____
<code>
print('-'*80)--------------------------------------------------------------------------------
</code>
### Very large input
In Jupyter: default behaviour, show scrollbar
On the website: it should expand horizontally as much as it wants; the rationale is that input code may be printed to PDF, so you should always manually put line breaks._____no_output_____
<code>
# line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment
# line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an 
out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment
_____no_output_____
</code>
**Very long HTML** (and long code line)
It should expand vertically as much as it wants._____no_output_____
<code>
%%HTML
<iframe width="100%" height="1300px" frameBorder="0" src="https://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055?scaleControl=false&miniMap=false&scrollWheelZoom=false&zoomControl=true&allowEdit=false&moreControl=true&searchControl=null&tilelayersControl=null&embedControl=null&datalayersControl=true&onLoadPanel=undefined&captionBar=false#11/46.0966/11.4024"></iframe><p><a href="http://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055">See full screen</a></p>
_____no_output_____
</code>
### Very long output
In Jupyter: by clicking, you can collapse
On the website: a scrollbar should appear_____no_output_____
<code>
for x in range(150):
print('long output ...', x)long output ... 0
long output ... 1
long output ... 2
long output ... 3
long output ... 4
long output ... 5
long output ... 6
long output ... 7
long output ... 8
long output ... 9
long output ... 10
long output ... 11
long output ... 12
long output ... 13
long output ... 14
long output ... 15
long output ... 16
long output ... 17
long output ... 18
long output ... 19
long output ... 20
long output ... 21
long output ... 22
long output ... 23
long output ... 24
long output ... 25
long output ... 26
long output ... 27
long output ... 28
long output ... 29
long output ... 30
long output ... 31
long output ... 32
long output ... 33
long output ... 34
long output ... 35
long output ... 36
long output ... 37
long output ... 38
long output ... 39
long output ... 40
long output ... 41
long output ... 42
long output ... 43
long output ... 44
long output ... 45
long output ... 46
long output ... 47
long output ... 48
long output ... 49
long output ... 50
long output ... 51
long output ... 52
long output ... 53
long output ... 54
long output ... 55
long output ... 56
long output ... 57
long output ... 58
long output ... 59
long output ... 60
long output ... 61
long output ... 62
long output ... 63
long output ... 64
long output ... 65
long output ... 66
long output ... 67
long output ... 68
long output ... 69
long output ... 70
long output ... 71
long output ... 72
long output ... 73
long output ... 74
long output ... 75
long output ... 76
long output ... 77
long output ... 78
long output ... 79
long output ... 80
long output ... 81
long output ... 82
long output ... 83
long output ... 84
long output ... 85
long output ... 86
long output ... 87
long output ... 88
long output ... 89
long output ... 90
long output ... 91
long output ... 92
long output ... 93
long output ... 94
long output ... 95
long output ... 96
long output ... 97
long output ... 98
long output ... 99
long output ... 100
long output ... 101
long output ... 102
long output ... 103
long output ... 104
long output ... 105
long output ... 106
long output ... 107
long output ... 108
long output ... 109
long output ... 110
long output ... 111
long output ... 112
long output ... 113
long output ... 114
long output ... 115
long output ... 116
long output ... 117
long output ... 118
long output ... 119
long output ... 120
long output ... 121
long output ... 122
long output ... 123
long output ... 124
long output ... 125
long output ... 126
long output ... 127
long output ... 128
long output ... 129
long output ... 130
long output ... 131
long output ... 132
long output ... 133
long output ... 134
long output ... 135
long output ... 136
long output ... 137
long output ... 138
long output ... 139
long output ... 140
long output ... 141
long output ... 142
long output ... 143
long output ... 144
long output ... 145
long output ... 146
long output ... 147
long output ... 148
long output ... 149
</code>
| {
"repository": "DavidLeoni/iep",
"path": "jupman-tests.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 361679,
"hexsha": "d01e9e6e267f0396dc454a608390a983297b0dcc",
"max_line_length": 202812,
"avg_line_length": 157.3885987815,
"alphanum_fraction": 0.5116443034
} |
# Notebook from llondon6/koalas
Path: factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
# Demonstration that GMVRFIT reduces to GMVPFIT (or equivalent) for polynomial cases
<center>Development for a fitting function (greedy+linear based on mvpolyfit and gmvpfit) that handles rational functions</center>_____no_output_____
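For orientation, here is my reading of the model family being exercised below (an assumption based on the symbol lists and the printed fit, not text from the package's documentation): `mvrfit` appears to fit a rational model of roughly the form

$$ f(\vec{x}) \;\approx\; c_0 + c_1 \left( \frac{\sum_k a_k\, p_k(\vec{x})}{1 + \sum_j b_j\, q_j(\vec{x})} \right), $$

where each numerator/denominator symbol selects a monomial (e.g. `'01'` is the cross term $x_0 x_1$, `'00'` is $x_0^2$). With an empty denominator symbol list, as used in this example, the denominator is just 1 and the model collapses to a multivariate polynomial, which is why the greedy rational fit (`gmvrfit`) should reproduce the polynomial fit.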
<code>
# Low-level import
from numpy import *
from numpy.linalg import pinv,lstsq
# Setup ipython environment
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Setup plotting backend
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 0.8
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['axes.titlesize'] = 20
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.pyplot import *
#
from positive import *_____no_output_____
</code>
## Package Development (positive/learning.py)_____no_output_____### Setup test data_____no_output_____
<code>
################################################################################
h = 3
Q = 25
x = h*linspace(-1,1,Q)
y = h*linspace(-1,1,Q)
X,Y = meshgrid(x,y)
# X += np.random.random( X.shape )-0.5
# Y += np.random.random( X.shape )-0.5
zfun = lambda xx,yy: 50 + (1.0 + 0.5*xx*yy + xx**2 + yy**2 )
numerator_symbols, denominator_symbols = ['01','00','11'],[]
np.random.seed(42)
ns = 0.1*(np.random.random( X.shape )-0.5)
Z = zfun(X,Y) + ns
domain,scalar_range = ndflatten( [X,Y], Z )
################################################################################_____no_output_____
</code>
### Initiate class object for fitting_____no_output_____
<code>
foo = mvrfit( domain, scalar_range, numerator_symbols, denominator_symbols, verbose=True )_____no_output_____
</code>
### Plot using class method_____no_output_____
<code>
foo.plot()_____no_output_____
</code>
### Generate python string for fit model_____no_output_____
<code>
print foo.__str_python__(precision=8)f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
</code>
### Use greedy algorithm_____no_output_____
<code>
star = gmvrfit( domain, scalar_range, verbose=True )([0;36mgmvrfit[0m)>> Now working deg = 1
&& The estimator has changed by -inf
&& Degree tempering will continue.
False
&& The current boundary is [('1', True), ('1', False)]
&& The current estimator value is 0.988648
([0;36mgmvrfit[0m)>> Now working deg = 2
&& The estimator has changed by -0.981918
&& Degree tempering will continue.
False
&& The current boundary is [('00', True), ('11', True), ('01', True)]
&& The current estimator value is 0.006730
([0;36mgmvrfit[0m)>> Now working deg = 3
&& The estimator has changed by 0.000000
&& Degree tempering will continue.
False
&& The current boundary is [('00', True), ('11', True), ('01', True)]
&& The current estimator value is 0.006730
([0;36mgmvrfit[0m)>> Now working deg = 4
&& The estimator has changed by -0.000024
&& Degree tempering has completed becuase the estimator has changed by |-0.000024| < 0.010000. The results of the last iteration will be kept.
True
&& The Final boundary is [('00', True), ('11', True), ('01', True)]
&& The Final estimator value is 0.006730
========================================
# Degree Tempered Positive Greedy Solution:
========================================
f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
############################################
# Applying a Negative Greedy Algorithm
############################################
Iteration #1 (Negative Greedy)
------------------------------------
>> min_estimator = 3.6869e-01
>> The current boundary = [('00', True), ('11', True), ('01', True)]
>> Exiting because |min_est-initial_estimator_value| = |0.368686-0.006730| = |0.361955| > 0.358336.
>> NOTE that the result of the previous iteration will be kept.
========================================
# Negative Greedy Solution:
========================================
f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
Fit Information:
----------------------------------------
f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
star.plot()
star.bin['pgreedy_result'].plot()
star.bin['ngreedy_result'].plot()_____no_output_____
</code>
| {
"repository": "llondon6/koalas",
"path": "factory/gmvrfit_reduce_to_gmvpfit_example.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 734101,
"hexsha": "d01f4dd1a9aeb0a9a6800e04ef2b2097a5fda0f8",
"max_line_length": 336950,
"avg_line_length": 2050.561452514,
"alphanum_fraction": 0.9490792139
} |
# Notebook from fernandascovino/pr-educacao
Path: notebooks/2_socioeconomic_data_validation.ipynb
# Socioeconomic data validation
---
The literature indicates that the most important factor for school performance is the students' socioeconomic status. We are assuming that nearby schools have students of similar socioeconomic status, but this needs to be tested. I used the [INSE](http://portal.inep.gov.br/web/guest/indicadores-educacionais) data to measure the socioeconomic level of each school's students in 2015.
### Goals
Examining the geolocated IDEB data from schools and modeling *risk* and *model* schools for the research.
Combining the schools' IDEB (SAEB + approval rate) marks with Rio de Janeiro's municipal shapefile, we hope to discover some local patterns in school performance over the years. The time interval we will analyze is from 2011 until today.
### Data sources
- `ideb_merged.csv`: resulting data from the geolocation, with IDEB by year in columns
- `ideb_merged_kepler.csv`: resulting data from the geolocation, formatted for Kepler input
### Methodology
The goal is to determine the "model" schools within a certain radius. We'll define those "models" as schools that showed strong growth and stand near "high risk" schools, the ones in the lowest strata. For that, we construct the model below with suggestions by Ragazzo:
We are interested in the following groups:
- Group 1: Schools from very low (< 4) to high (> 6)
- Group 2: Schools from low (4 < x < 5) to high (> 6)
- Group 3: Schools that went to high (> 6) with delta > 2
The *attention level* (or risk) of a school is defined by the quartile it belongs to in the IDEB 2017 distribution (the most recent), from the lowest quartile (level 4) to the highest (level 1).
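As a minimal sketch of that quartile rule (assuming the `schools_ideb` dataframe loaded later in this notebook, with its `ano` column already parsed as dates; the actual `nivel_atencao` column was computed upstream of this notebook):

```python
import pandas as pd

# Keep only the most recent IDEB (2017) and split its distribution into quartiles:
# lowest quartile -> attention level 4 (highest risk), highest quartile -> level 1.
ideb_2017 = schools_ideb[schools_ideb["ano"].dt.year == 2017].copy()
ideb_2017["nivel_atencao"] = pd.qcut(ideb_2017["ideb"], q=4, labels=[4, 3, 2, 1])
```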
### Results
1. [Identify the schools with most IDEB variation from 2005 to 2017](#1)
2. [Identify schools that jumped from low / very low IDEB (<5 / <4) and went to high IDEB (> 6), from 2005 to 2017](#2)
3. [Model neighbors: which schools had a large delta and were nearby schools on the highest attention level (4)?](#3)
4. [See if the education census contains information on who was the principal of each school each year.](#4) - actually, we use an indicator of the school's "management complexity" together with the IDEB data. We didn't find any difference between levels of "management complexity" in the IDEB marks of the schools at each level.
#### Outputs
- `model_neighboors_closest_multiple.csv`: database with the risk schools and closest model schools
- `top_15_delta.csv`, `bottom_15_delta.csv`: top and bottom schools evolution from 2005 to 2017
- `kepler_with_filters.csv`: database for plotting in Kepler with the school categories (from the methodology)
### Authors
Original code by Guilherme Almeida here, adapted by Fernanda Scovino - 2019._____no_output_____
<code>
# Import config
import os
import sys
sys.path.insert(0, '../')
from config import RAW_PATH, TREAT_PATH, OUTPUT_PATH
# DATA ANALYSIS & VIZ TOOLS
from copy import deepcopy
import pandas as pd
import numpy as np
pd.options.display.max_columns = 999
import geopandas as gpd
from shapely.wkt import loads
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
pylab.rcParams['figure.figsize'] = (12, 15)
# CONFIGS
%load_ext autoreload
#%autoreload 2
#import warnings
#warnings.filterwarnings('ignore')Populating the interactive namespace from numpy and matplotlib
palette = ['#FEC300', '#F1920E', '#E3611C', '#C70039', '#900C3F', '#5A1846', '#3a414c', '#29323C']
sns.set()_____no_output_____
</code>
## Import data_____no_output_____
<code>
inse = pd.read_excel(RAW_PATH / "INSE_2015.xlsx")_____no_output_____schools_ideb = pd.read_csv(OUTPUT_PATH / "kepler_with_filters.csv")_____no_output_____
</code>
## INSE data analysis_____no_output_____
<code>
inse.rename(columns={"CO_ESCOLA" : "cod_inep"}, inplace=True)
inse.head()_____no_output_____schools_ideb['ano'] = pd.to_datetime(schools_ideb['ano'])
schools_ideb.head()_____no_output_____
</code>
### Filtering model (`reference`) and risk (`attention`) schools_____no_output_____
<code>
reference = schools_ideb[(schools_ideb['ano'].dt.year == 2017) &
((schools_ideb['pessimo_pra_bom_bin'] == 1) | (schools_ideb['ruim_pra_bom_bin'] == 1))]
reference.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 161 entries, 4131 to 4729
Data columns (total 14 columns):
ano 161 non-null datetime64[ns]
cod_inep 161 non-null int64
geometry 161 non-null object
ideb 161 non-null float64
nome_abrev 161 non-null object
nome_escola 161 non-null object
lon 161 non-null float64
lat 161 non-null float64
pessimo_pra_bom_bin 161 non-null int64
ruim_pra_bom_bin 161 non-null int64
melhora_com_final_bom_bin 161 non-null int64
inicial_baixo_bin 161 non-null int64
inicial_baixissimo_bin 161 non-null int64
nivel_atencao 161 non-null float64
dtypes: datetime64[ns](1), float64(4), int64(6), object(3)
memory usage: 18.9+ KB
attention = schools_ideb[(schools_ideb['ano'].dt.year == 2017) & (schools_ideb['nivel_atencao'] == 4)]
attention.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 176 entries, 4127 to 4728
Data columns (total 14 columns):
ano 176 non-null datetime64[ns]
cod_inep 176 non-null int64
geometry 176 non-null object
ideb 176 non-null float64
nome_abrev 176 non-null object
nome_escola 176 non-null object
lon 176 non-null float64
lat 176 non-null float64
pessimo_pra_bom_bin 176 non-null int64
ruim_pra_bom_bin 176 non-null int64
melhora_com_final_bom_bin 176 non-null int64
inicial_baixo_bin 176 non-null int64
inicial_baixissimo_bin 176 non-null int64
nivel_atencao 176 non-null float64
dtypes: datetime64[ns](1), float64(4), int64(6), object(3)
memory usage: 20.6+ KB
</code>
### Join INSE data_____no_output_____
<code>
inse_cols = ["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]
reference = pd.merge(reference, inse[inse_cols], how = "left", on = "cod_inep")
attention = pd.merge(attention, inse[inse_cols], how = "left", on = "cod_inep")_____no_output_____reference['tipo_escola'] = 'Escola referência'
reference.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 161 entries, 0 to 160
Data columns (total 18 columns):
ano 161 non-null datetime64[ns]
cod_inep 161 non-null int64
geometry 161 non-null object
ideb 161 non-null float64
nome_abrev 161 non-null object
nome_escola 161 non-null object
lon 161 non-null float64
lat 161 non-null float64
pessimo_pra_bom_bin 161 non-null int64
ruim_pra_bom_bin 161 non-null int64
melhora_com_final_bom_bin 161 non-null int64
inicial_baixo_bin 161 non-null int64
inicial_baixissimo_bin 161 non-null int64
nivel_atencao 161 non-null float64
NOME_ESCOLA 147 non-null object
INSE_VALOR_ABSOLUTO 147 non-null float64
INSE_CLASSIFICACAO 147 non-null object
tipo_escola 161 non-null object
dtypes: datetime64[ns](1), float64(5), int64(6), object(6)
memory usage: 23.9+ KB
attention['tipo_escola'] = 'Escola de risco'
attention.info()<class 'pandas.core.frame.DataFrame'>
Int64Index: 176 entries, 0 to 175
Data columns (total 18 columns):
ano 176 non-null datetime64[ns]
cod_inep 176 non-null int64
geometry 176 non-null object
ideb 176 non-null float64
nome_abrev 176 non-null object
nome_escola 176 non-null object
lon 176 non-null float64
lat 176 non-null float64
pessimo_pra_bom_bin 176 non-null int64
ruim_pra_bom_bin 176 non-null int64
melhora_com_final_bom_bin 176 non-null int64
inicial_baixo_bin 176 non-null int64
inicial_baixissimo_bin 176 non-null int64
nivel_atencao 176 non-null float64
NOME_ESCOLA 167 non-null object
INSE_VALOR_ABSOLUTO 167 non-null float64
INSE_CLASSIFICACAO 167 non-null object
tipo_escola 176 non-null object
dtypes: datetime64[ns](1), float64(5), int64(6), object(6)
memory usage: 26.1+ KB
df_inse = attention.append(reference)_____no_output_____df_inse['escola_risco'] = df_inse['nivel_atencao'].apply(lambda x : 1 if x == 4 else 0)
df_inse['tipo_especifico'] = df_inse[['pessimo_pra_bom_bin', 'ruim_pra_bom_bin', 'escola_risco']].idxmax(axis=1)
del df_inse['escola_risco']_____no_output_____df_inse.head()_____no_output_____df_inse['tipo_especifico'].value_counts()_____no_output_____df_inse.to_csv(TREAT_PATH / "risk_and_model_schools_inse.csv", index = False)_____no_output_____
</code>
### Comparing INSE data in categories_____no_output_____
<code>
sns.distplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas de risco')
sns.distplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas modelo')
plt.legend()_____no_output_____pylab.rcParams['figure.figsize'] = (10, 8)
title = "Comparação do nível sócio-econômico das escolas selecionadas"
ylabel="INSE (2015) médio da escola"
xlabel="Tipo da escola"
sns.boxplot(y ="INSE_VALOR_ABSOLUTO", x="tipo_escola", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)_____no_output_____pylab.rcParams['figure.figsize'] = (10, 8)
xlabel = "Tipo da escola (específico)"
sns.boxplot(y = "INSE_VALOR_ABSOLUTO", x="tipo_especifico", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)_____no_output_____
</code>
## Statistical INSE analysis_____no_output_____### Normality test
From [this article:](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/)
> According to the available literature, **assessing the normality assumption should be taken into account for using parametric statistical tests.** It seems that the most popular test for normality, that is, the K-S test, should no longer be used owing to its low power. It is preferable that normality be assessed both visually and through normality tests, of which the Shapiro-Wilk test, provided by the SPSS software, is highly recommended. The normality assumption also needs to be considered for validation of data presented in the literature as it shows whether correct statistical tests have been used._____no_output_____
<code>
from scipy.stats import normaltest, shapiro, probplot_____no_output_____
</code>
#### D'Agostino and Pearson's_____no_output_____
<code>
normaltest(attention["INSE_VALOR_ABSOLUTO"].dropna())_____no_output_____normaltest(reference["INSE_VALOR_ABSOLUTO"].dropna())_____no_output_____
</code>
#### Shapiro-Wiki_____no_output_____
<code>
shapiro(attention["INSE_VALOR_ABSOLUTO"].dropna())_____no_output_____qs = probplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)_____no_output_____shapiro(reference["INSE_VALOR_ABSOLUTO"].dropna())_____no_output_____ws = probplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)_____no_output_____
</code>
### *t* test
About parametric tests: [here](https://www.healthknowledge.org.uk/public-health-textbook/research-methods/1b-statistical-methods/parametric-nonparametric-tests)
We can test the hypothesis that INSE is related to the IDEB scores of the risk ($\mu_r$) and model ($\mu_m$) schools as follows:
$H_0: \mu_r = \mu_m$
$H_a: \mu_r \neq \mu_m$
For the *t* test, we need to ensure that:
1. the variances are equal (1.94 is close enough to 2.05)
2. the samples have the same size (?)
3. each sample is approximately normally distributed (checked above)_____no_output_____
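As a quick check of assumption 1, one could run Levene's test (a sketch only; this test is not part of the original analysis, and it reuses the `attention`/`reference` dataframes defined above):

```python
from scipy.stats import levene

# H0 of Levene's test: the two groups have equal variances.
# A large p-value gives no evidence against equal variances,
# supporting the equal_var=True t-test used below.
stat, p = levene(attention["INSE_VALOR_ABSOLUTO"].dropna(),
                 reference["INSE_VALOR_ABSOLUTO"].dropna(),
                 center="median")
print("Levene statistic = {:.3f}, p-value = {:.3f}".format(stat, p))
```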
<code>
from scipy.stats import ttest_ind as ttest, normaltest, kstest_____no_output_____attention["INSE_VALOR_ABSOLUTO"].dropna().describe()_____no_output_____reference["INSE_VALOR_ABSOLUTO"].dropna().describe()_____no_output_____
</code>
#### Model x risk schools_____no_output_____
<code>
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=True)_____no_output_____ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=False)_____no_output_____
</code>
### Cohen's D
My preferred effect-size metric is Cohen's D, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/)._____no_output_____
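For reference, the quantity computed by the `cohend` helper below is the pooled-standard-deviation version of Cohen's d:

$$ d = \frac{|\bar{x}_1 - \bar{x}_2|}{s}, \qquad s = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}} $$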
<code>
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt
# == Code made by Guilherme Almeida, 2019 ==
# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
# calculate the size of samples
n1, n2 = len(d1), len(d2)
# calculate the variance of the samples
s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
# calculate the pooled standard deviation
s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
# calculate the means of the samples
u1, u2 = mean(d1), mean(d2)
# calculate the effect size
result = abs(u1 - u2) / s
return result_____no_output_____
</code>
#### Model x risk schools_____no_output_____
<code>
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit")_____no_output_____cohend(reference["INSE_VALOR_ABSOLUTO"], attention["INSE_VALOR_ABSOLUTO"])_____no_output_____
</code>
#### Best evolution model x risk schools_____no_output_____
<code>
best_evolution = df_inse[df_inse['tipo_especifico'] == "pessimo_pra_bom_bin"]
ttest(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")_____no_output_____cohend(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"])_____no_output_____
</code>
#### Other model x risk schools_____no_output_____
<code>
medium_evolution = df_inse[df_inse['tipo_especifico'] == "ruim_pra_bom_bin"]
ttest(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")_____no_output_____cohend(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"])_____no_output_____
</code>
<code>
referencias.head()_____no_output_____referencias = pd.merge(referencias, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how = "left", on = "cod_inep")
risco = pd.merge(risco, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how="left", on="cod_inep")_____no_output_____referencias.INSE_VALOR_ABSOLUTO.describe()_____no_output_____risco.INSE_VALOR_ABSOLUTO.describe()_____no_output_____risco["tipo"] = "Escolas com desempenho abaixo do esperado"
referencias["tipo"] = "Escolas-referência"_____no_output_____df = risco.append(referencias)_____no_output_____df.to_csv("risco_referencia_inse.csv", index = False)_____no_output_____df = pd.read_csv("risco_referencia_inse.csv")
sen.sen_boxplot(x = "tipo", y = "INSE_VALOR_ABSOLUTO", y_label = "INSE (2015) médio da escola", x_label = " ",
plot_title = "Comparação do nível sócio-econômico das escolas selecionadas",
palette = {"Escolas com desempenho abaixo do esperado" : "indianred",
"Escolas-referência" : "skyblue"},
data = df, output_path = "inse_op1.png")_____no_output_____df = pd.read_csv("risco_referencia_inse.csv")
sen.sen_boxplot(x = "tipo_especifico", y = "INSE_VALOR_ABSOLUTO", y_label = "INSE (2015) médio da escola", x_label = " ",
plot_title = "Comparação do nível sócio-econômico das escolas selecionadas",
palette = {"Desempenho abaixo\ndo esperado" : "indianred",
"Ruim para bom" : "skyblue",
"Muito ruim para bom" : "lightblue"},
data = df, output_path = "inse_op2.png")_____no_output_____
</code>
# Statistical tests_____no_output_____## Cohen's D
My preferred effect-size metric is Cohen's D, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/)._____no_output_____
<code>
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt
# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
# calculate the size of samples
n1, n2 = len(d1), len(d2)
# calculate the variance of the samples
s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
# calculate the pooled standard deviation
s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
# calculate the means of the samples
u1, u2 = mean(d1), mean(d2)
# calculate the effect size
return (u1 - u2) / s_____no_output_____
</code>
All reference schools vs. risk schools_____no_output_____
<code>
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias["INSE_VALOR_ABSOLUTO"], nan_policy="omit")_____no_output_____cohend(referencias["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])_____no_output_____
</code>
Only the "very bad to good" schools vs. risk schools_____no_output_____
<code>
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"], nan_policy="omit")_____no_output_____cohend(referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])_____no_output_____
</code>
# Trying to infer causality
We know there is a significant difference between the socioeconomic levels of the 2 groups. But to what extent can this difference in INSE explain the difference in IDEB? Is there any remaining effect that can be attributed to management practices? These tests seek to answer that question._____no_output_____## Linear regressions_____no_output_____
<code>
#pega a nota do IDEB pra servir de DV
ideb = pd.read_csv("./pr-educacao/data/output/ideb_merged_kepler.csv")
ideb["ano_true"] = ideb["ano"].apply(lambda x: int(x[0:4]))
ideb = ideb.query("ano_true == 2017").copy()
nota_ideb = ideb[["cod_inep", "ideb"]]_____no_output_____df = pd.merge(df, nota_ideb, how = "left", on = "cod_inep")_____no_output_____df.dropna(subset=["INSE_VALOR_ABSOLUTO"], inplace = True)_____no_output_____df["tipo_bin"] = np.where(df["tipo"] == "Escolas-referência", 1, 0)_____no_output_____from statsmodels.regression.linear_model import OLS as ols_py
from statsmodels.tools.tools import add_constant
ivs_multi = add_constant(df[["tipo_bin", "INSE_VALOR_ABSOLUTO"]])
modelo_multi = ols_py(df[["ideb"]], ivs_multi).fit()
print(modelo_multi.summary()) OLS Regression Results
==============================================================================
Dep. Variable: ideb R-squared: 0.843
Model: OLS Adj. R-squared: 0.841
Method: Least Squares F-statistic: 391.6
Date: qua, 22 mai 2019 Prob (F-statistic): 2.13e-59
Time: 12:22:10 Log-Likelihood: -23.834
No. Observations: 149 AIC: 53.67
Df Residuals: 146 BIC: 62.68
Df Model: 2
Covariance Type: nonrobust
=======================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------------
const 4.1078 0.652 6.297 0.000 2.819 5.397
tipo_bin 1.3748 0.056 24.678 0.000 1.265 1.485
INSE_VALOR_ABSOLUTO 0.0169 0.013 1.293 0.198 -0.009 0.043
==============================================================================
Omnibus: 7.292 Durbin-Watson: 1.867
Prob(Omnibus): 0.026 Jarque-Bera (JB): 11.543
Skew: -0.180 Prob(JB): 0.00312
Kurtosis: 4.315 Cond. No. 1.40e+03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.4e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
</code>
The problem with running the regression the way I set it up above is that tipo_bin was partially created as a function of IDEB (see the histograms below), so it is not a truly independent variable. One strategy may be to compare simple models with only INSE and with only tipo_bin._____no_output_____
<code>
df.ideb.hist()_____no_output_____df.query("tipo_bin == 0").ideb.hist()_____no_output_____df.query("tipo_bin == 1").ideb.hist()_____no_output_____#correlação simples
from scipy.stats import pearsonr
pearsonr(df[["ideb"]], df[["INSE_VALOR_ABSOLUTO"]])_____no_output_____iv_inse = add_constant(df[["INSE_VALOR_ABSOLUTO"]])
iv_ideb = add_constant(df[["tipo_bin"]])
modelo_inse = ols_py(df[["ideb"]], iv_inse).fit()
modelo_tipo = ols_py(df[["ideb"]], iv_ideb).fit()
print(modelo_inse.summary())
print("-----------------------------------------------------------")
print(modelo_tipo.summary()) OLS Regression Results
==============================================================================
Dep. Variable: ideb R-squared: 0.187
Model: OLS Adj. R-squared: 0.182
Method: Least Squares F-statistic: 33.90
Date: qua, 22 mai 2019 Prob (F-statistic): 3.51e-08
Time: 12:22:15 Log-Likelihood: -146.25
No. Observations: 149 AIC: 296.5
Df Residuals: 147 BIC: 302.5
Df Model: 1
Covariance Type: nonrobust
=======================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------------
const -2.4509 1.350 -1.815 0.072 -5.119 0.217
INSE_VALOR_ABSOLUTO 0.1561 0.027 5.822 0.000 0.103 0.209
==============================================================================
Omnibus: 3.939 Durbin-Watson: 0.621
Prob(Omnibus): 0.140 Jarque-Bera (JB): 3.892
Skew: 0.353 Prob(JB): 0.143
Kurtosis: 2.642 Cond. No. 1.28e+03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.28e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
-----------------------------------------------------------
OLS Regression Results
==============================================================================
Dep. Variable: ideb R-squared: 0.841
Model: OLS Adj. R-squared: 0.840
Method: Least Squares F-statistic: 777.9
Date: qua, 22 mai 2019 Prob (F-statistic): 1.40e-60
Time: 12:22:15 Log-Likelihood: -24.683
No. Observations: 149 AIC: 53.37
Df Residuals: 147 BIC: 59.37
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 4.9505 0.029 173.049 0.000 4.894 5.007
tipo_bin 1.4058 0.050 27.891 0.000 1.306 1.505
==============================================================================
Omnibus: 6.509 Durbin-Watson: 1.870
Prob(Omnibus): 0.039 Jarque-Bera (JB): 9.934
Skew: -0.147 Prob(JB): 0.00696
Kurtosis: 4.230 Cond. No. 2.42
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
</code>
## Paired tests
Our unit of observation should actually be not a single school, but a pair of schools. Below, I run the analyses taking into account the INSE delta and the IDEB delta for each pair of schools. This is important: we know that INSE makes a difference in overall IDEB, but the question is whether it can explain the performance differences within each pair._____no_output_____
<code>
pairs = pd.read_csv("sponsors_mais_proximos.csv")_____no_output_____pairs.head()_____no_output_____pairs.shape_____no_output_____inse_risco = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_risco.columns = ["cod_inep_risco","inse_risco"]
inse_ref = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_ref.columns = ["cod_inep_referencia","inse_referencia"]_____no_output_____pairs = pd.merge(pairs, inse_risco, how = "left", on = "cod_inep_risco")
pairs = pd.merge(pairs, inse_ref, how = "left", on = "cod_inep_referencia")_____no_output_____#calcula os deltas
pairs["delta_inse"] = pairs["inse_referencia"] - pairs["inse_risco"]
pairs["delta_ideb"] = pairs["ideb_referencia"] - pairs["ideb_risco"]_____no_output_____pairs["delta_inse"].describe()_____no_output_____pairs["delta_inse"].hist()_____no_output_____pairs["delta_ideb"].describe()_____no_output_____pairs["delta_ideb"].hist()_____no_output_____pairs[pairs["delta_inse"].isnull()]_____no_output_____clean_pairs = pairs.dropna(subset = ["delta_inse"])_____no_output_____import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize = sen.aspect_ratio_locker([16, 9], 0.6))
inse_plot = sns.regplot("delta_inse", "delta_ideb", data = clean_pairs)
plt.title("Correlação entre as diferenças do IDEB (2017) e do INSE (2015)\npara cada par de escolas mais próximas")
plt.xlabel("$INSE_{referência} - INSE_{desempenho\,abaixo\,do\,esperado}$", fontsize = 12)
plt.ylabel("$IDEB_{referência} - IDEB_{desempenh\,abaixo\,do\,esperado}$", fontsize = 12)
inse_plot.get_figure().savefig("delta_inse.png", dpi = 600)_____no_output_____pearsonr(clean_pairs[["delta_ideb"]], clean_pairs[["delta_inse"]])_____no_output_____X = add_constant(clean_pairs[["delta_inse"]])
modelo_pairs = ols_py(clean_pairs[["delta_ideb"]], X).fit()
print(modelo_pairs.summary()) OLS Regression Results
==============================================================================
Dep. Variable: delta_ideb R-squared: 0.000
Model: OLS Adj. R-squared: -0.010
Method: Least Squares F-statistic: 0.0004740
Date: qua, 22 mai 2019 Prob (F-statistic): 0.983
Time: 11:12:12 Log-Likelihood: -47.659
No. Observations: 100 AIC: 99.32
Df Residuals: 98 BIC: 104.5
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.4143 0.051 27.838 0.000 1.313 1.515
delta_inse 0.0004 0.017 0.022 0.983 -0.034 0.035
==============================================================================
Omnibus: 8.509 Durbin-Watson: 1.977
Prob(Omnibus): 0.014 Jarque-Bera (JB): 8.171
Skew: 0.654 Prob(JB): 0.0168
Kurtosis: 3.498 Cond. No. 3.97
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
</code>
Testing the assumption that physical distance correlates with INSE distance_____no_output_____
<code>
pairs.head()_____no_output_____sns.regplot("distancia", "delta_inse", data = clean_pairs.query("distancia < 4000"))_____no_output_____multi_iv = add_constant(clean_pairs[["distancia", "delta_inse"]])
modelo_ze = ols_py(clean_pairs[["delta_ideb"]], multi_iv).fit()
print(modelo_ze.summary()) OLS Regression Results
==============================================================================
Dep. Variable: delta_ideb R-squared: 0.000
Model: OLS Adj. R-squared: -0.021
Method: Least Squares F-statistic: 0.004600
Date: qua, 22 mai 2019 Prob (F-statistic): 0.995
Time: 11:40:22 Log-Likelihood: -47.654
No. Observations: 100 AIC: 101.3
Df Residuals: 97 BIC: 109.1
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.4200 0.080 17.851 0.000 1.262 1.578
distancia -3.958e-06 4.24e-05 -0.093 0.926 -8.8e-05 8.01e-05
delta_inse 0.0006 0.018 0.033 0.974 -0.034 0.035
==============================================================================
Omnibus: 8.544 Durbin-Watson: 1.973
Prob(Omnibus): 0.014 Jarque-Bera (JB): 8.212
Skew: 0.656 Prob(JB): 0.0165
Kurtosis: 3.500 Cond. No. 3.63e+03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 3.63e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
</code>
| {
"repository": "fernandascovino/pr-educacao",
"path": "notebooks/2_socioeconomic_data_validation.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 939574,
"hexsha": "d0202e7d55aed5ff965378766b7f24c84fd14dac",
"max_line_length": 308712,
"avg_line_length": 297.5218492717,
"alphanum_fraction": 0.9069929564
} |
# Notebook from ale-telefonica/market
Path: Scr/trainning/.ipynb_checkpoints/Untitled-checkpoint.ipynb
<code>
import MySQLdb
from sklearn.svm import LinearSVC
from tensorflow import keras
from keras.models import load_model
import tensorflow as tf
from random import seed
import pandas as pd
import numpy as np
import re
from re import sub
import os
import string
import tempfile
import pickle
import tarfile
from unidecode import unidecode
import nltk
from nltk.corpus import stopwords
from keras.callbacks import EarlyStopping, ReduceLROnPlateau_____no_output_____path = "."_____no_output_____username = "remote_root"
password = "Faltan_4Ks"
host = "ci-oand-apps-02.hi.inet"
db = "alejandro_test_db"
scheme = "MARKETv2"_____no_output_____query = f"Select * from {scheme}"
conn = MySQLdb.connect(host=host, user=username, passwd=password, db=db)
try:
cursor = conn.cursor()
cursor.execute(f"describe {scheme}")
columns_tuple = cursor.fetchall()
columns = [i[0] for i in columns_tuple]
cursor.execute(query)
results = cursor.fetchall()
except Exception as e:
print("Exception occur:", e)
finally:
conn.close()_____no_output_____data = pd.DataFrame(columns=columns, data=results)
data.head()_____no_output_____def text_to_word_list(text, stem=False, stopw=True):
from nltk.stem import SnowballStemmer
'''
Data Preprocess handler version 1.1
Pre process and convert texts to a list of words
'''
text = unidecode(text)
text = str(text)
text = text.lower()
# Clean the text
text = re.sub(r"<u.+>", "", text) # Remove emojis
text = re.sub(r"[^A-Za-z0-9^,!?.\/'+]", " ", text)
text = re.sub(r",", " ", text)
text = re.sub(r"\.", " ", text)
text = re.sub(r"!", " ! ", text)
text = re.sub(r"\?", " ? ", text)
text = re.sub(r"'", " ", text)
text = re.sub(r":", " : ", text)
text = re.sub(r"\s{2,}", " ", text)
text = text.split()
if stopw:
# Remove stopw
stopw = stopwords.words("spanish")
stopw.remove("no")
text = [word for word in text if word not in stopw and len(word) > 1]
if stem:
stemmer = SnowballStemmer("spanish")
text = [stemmer.stem(word) for word in text]
text = " ".join(text)
return text _____no_output_____def clean_dataset(df):
# df["Review.Last.Update.Date.and.Time"] = df["Review.Last.Update.Date.and.Time"].astype('datetime64')
df["State"] = True
df.loc[df.Pais == "Brasil", "State"] = False
df.loc[df.Comentario.isna(), "State"] = False
df["Clean_text"] = " "
df.loc[df.State, "Clean_text"] = df[df.State == True].Comentario.apply(lambda x: text_to_word_list(x))
df.loc[df.Clean_text.str.len() == 0, "State"] = False
df.loc[df.Clean_text.isna(), "State"] = False
df.Clean_text = df.Clean_text.str.replace("a{2,}", "a")
df.Clean_text = df.Clean_text.str.replace("e{3,}", "e")
df.Clean_text = df.Clean_text.str.replace("i{3,}", "i")
df.Clean_text = df.Clean_text.str.replace("o{3,}", "o")
df.Clean_text = df.Clean_text.str.replace("u{3,}", "u")
df.Clean_text = df.Clean_text.str.replace("y{2,}", "y")
df.Clean_text= df.Clean_text.str.replace(r"\bapp[s]?\b", " aplicacion ")
df.Clean_text= df.Clean_text.str.replace("^ns$", "no sirve")
df.Clean_text= df.Clean_text.str.replace("^ns .+", "no se")
df.Clean_text= df.Clean_text.str.replace("tlf", "telefono")
df.Clean_text= df.Clean_text.str.replace(" si no ", " sino ")
df.Clean_text= df.Clean_text.str.replace(" nose ", " no se ")
df.Clean_text= df.Clean_text.str.replace("extreno", "estreno")
df.Clean_text= df.Clean_text.str.replace("atravez", "a traves")
df.Clean_text= df.Clean_text.str.replace("root(\w+)?", "root")
df.Clean_text= df.Clean_text.str.replace("(masomenos)|(mas menos)]", "mas_menos")
df.Clean_text= df.Clean_text.str.replace("tbn", "tambien")
df.Clean_text= df.Clean_text.str.replace("deverian", "deberian")
df.Clean_text= df.Clean_text.str.replace("malicima", "mala")
return df_____no_output_____seed(20)
df2 = clean_dataset(data)
df2.sample(20)_____no_output_____def load_ml_model(path, model_name, data_type="stopw"):
# Open tarfile
tar = tarfile.open(mode="r:gz", fileobj=open(os.path.join(path, f"{model_name}_{data_type}.tar.gz"), "rb"))
for filename in tar.getnames():
if filename == f"{model_name}.pickle":
clf = pickle.loads(tar.extractfile(filename).read())
if filename == "vectorizer.pickle":
vectorizer = pickle.loads(tar.extractfile(filename).read())
if filename == "encoder.pickle":
encoder = pickle.loads(tar.extractfile(filename).read())
return clf, vectorizer, encoder_____no_output_____clf, vectorizer, encoder = load_ml_model(path, "linearSVM")C:\Users\b.amh\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\sklearn\base.py:310: UserWarning: Trying to unpickle estimator LinearSVC from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.
warnings.warn(
C:\Users\b.amh\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\sklearn\base.py:310: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.
warnings.warn(
C:\Users\b.amh\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\sklearn\base.py:310: UserWarning: Trying to unpickle estimator TfidfVectorizer from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.
warnings.warn(
C:\Users\b.amh\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\sklearn\base.py:310: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.
warnings.warn(
sample = df2.loc[df2.State, "Clean_text"].values
# df2.head()
sample_vect = vectorizer.transform(sample)
categorias = encoder.classes_[clf.predict(sample_vect)]_____no_output_____df2.loc[df2.State, "Categorias"] = categorias
df2.sample(20)_____no_output_____df2["Star.Rating"] = df2["Star.Rating"].astype("int")_____no_output_____df2.loc[(df2.State==False)&(df2["Star.Rating"]<3), "Categorias"] = "Valoración negativa"
df2.loc[(df2.State==False)&(df2["Star.Rating"]>=3), "Categorias"] = "Valoración positiva"
df2.head()_____no_output_____df2["Review.Last.Update.Date.and.Time"] = pd.to_datetime(df2["Review.Last.Update.Date.and.Time"])_____no_output_____df2 = df2.drop("tipo", axis=1)_____no_output_____df2.tipo_equivalencias.value_counts()
df2.tipo_equivalencias = df2.Categorias
df2.loc[df2.Categorias.str.contains("Actualización"), "tipo_equivalencias"] = "Actualizaciones"
df2.loc[df2.Categorias.str.contains("Error de Reproducción"), "tipo_equivalencias"] = "Error de Reproducción"_____no_output_____df2 = df2.rename(columns={"Categorias":"tipo"})
df_good = df2.loc[:,columns]_____no_output_____df_good.head()_____no_output_____scheme = "MARKETv2"
fields = ["%s" for i in range(len(df_good.columns))]
query = f"INSERT INTO {scheme} VALUES ({', '.join(fields)})"
values = df_good.values.tolist()
values = [tuple(x) for x in values]_____no_output_____
# query = f"Select * from {scheme}"
conn = MySQLdb.connect(host=host, user=username, passwd=password, db=db)
try:
cursor = conn.cursor()
cursor.executemany(query, values)
conn.commit()
except Exception as e:
print("Exception occur:", e)
finally:
conn.close()_____no_output_____data.sort_values(by="Review.Last.Update.Date.and.Time", ascending=False).head(10)_____no_output_____texto = "Mediocre"
# texto = text_to_word_list(texto)
print(texto)
# v = vectorizer.transform(np.array([texto]))
# encoder.inverse_transform(clf.predict(v))Mediocre
df2.loc[(df2.State)&(df2.Comentario.str.contains("Muy mala aplicacion, ")), "Comentario"]_____no_output_____data.to_csv("data_db.csv", index=False)_____no_output_____model = Word2VecKeras()
model.load(path, filename="word2vec_kerasv2_base.tar.gz")_____no_output_____sample = df2.loc[df2.State, "Comentario"].values[:200]_____no_output_____df2.loc[df2.State, ["Comentarios", "Categoría"]]_____no_output_____# preds2.Predictions = "Valoración negativa"
preds2 = preds2.rename(columns={"Comments":"Comentarios", "Predictions":"Categorias"})_____no_output_____model.retrain(data=preds.iloc[:200,:])_____no_output_____class Word2VecKeras(object):
"""
Wrapper class that combines Word2Vec with a Keras model in order to build a strong text classifier.
This class is adapted from the Word2VecKeras module available on PyPI, to fit the requirements of our use case.
"""
def __init__(self, w2v_model=None):
"""
Initialize empty classifier
"""
self.w2v_size = None
self.w2v_window = None
self.w2v_min_count = None
self.w2v_epochs = None
self.label_encoder = None
self.num_classes = None
self.tokenizer = None
self.k_max_sequence_len = None
self.k_batch_size = None
self.k_epochs = None
self.k_lstm_neurons = None
self.k_hidden_layer_neurons = None
self.w2v_model = w2v_model
self.k_model = None
def train(self, x_train, y_train, corpus, x_test, y_test, w2v_size=300, w2v_window=5, w2v_min_count=1,
w2v_epochs=100, k_max_sequence_len=350, k_batch_size=128, k_epochs=20, k_lstm_neurons=128,
k_hidden_layer_neurons=(128, 64, 32), verbose=1):
"""
Train new Word2Vec & Keras model
:param x_train: list of sentences for training
:param y_train: list of categories for training
:param x_test: list of sentences for testing
:param y_test: list of categories for testing
:param corpus: text corpus to create vocabulary
:param w2v_size: Word2Vec vector size (embeddings dimensions)
:param w2v_window: Word2Vec windows size
:param w2v_min_count: Word2Vec min word count
:param w2v_epochs: Word2Vec epochs number
:param k_max_sequence_len: Max sequence length
:param k_batch_size: Keras training batch size
:param k_epochs: Keras epochs number
:param k_lstm_neurons: neurons number for Keras LSTM layer
:param k_hidden_layer_neurons: array of keras hidden layers
:param verbose: Verbosity
"""
# Set variables
self.w2v_size = w2v_size
self.w2v_window = w2v_window
self.w2v_min_count = w2v_min_count
self.w2v_epochs = w2v_epochs
self.k_max_sequence_len = k_max_sequence_len
self.k_batch_size = k_batch_size
self.k_epochs = k_epochs
self.k_lstm_neurons = k_lstm_neurons
self.k_hidden_layer_neurons = k_hidden_layer_neurons
# split text in tokens
# x_train = [gensim.utils.simple_preprocess(text) for text in x_train]
# x_test = [gensim.utils.simple_preprocess(text) for text in x_test]
corpus = [gensim.utils.simple_preprocess(corpus_text) for corpus_text in corpus]
logging.info("Build & train Word2Vec model")
self.w2v_model = gensim.models.Word2Vec(min_count=self.w2v_min_count, window=self.w2v_window,
size=self.w2v_size,
workers=multiprocessing.cpu_count())
self.w2v_model.build_vocab(corpus)
self.w2v_model.train(corpus, total_examples=self.w2v_model.corpus_count, epochs=self.w2v_epochs)
w2v_words = list(self.w2v_model.wv.vocab)
logging.info("Vocabulary size: %i" % len(w2v_words))
logging.info("Word2Vec trained")
logging.info("Fit LabelEncoder")
self.label_encoder = LabelEncoder()
y_train = self.label_encoder.fit_transform(y_train)
self.num_classes = len(self.label_encoder.classes_)
y_train = tf.keras.utils.to_categorical(y_train, self.num_classes)
y_test = self.label_encoder.transform(y_test)
y_test = tf.keras.utils.to_categorical(y_test, self.num_classes)
logging.info("Fit Tokenizer")
self.tokenizer = Tokenizer()
self.tokenizer.fit_on_texts(corpus)
x_train = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_train),
maxlen=self.k_max_sequence_len, padding="post", truncating="post")
x_test = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_test),
maxlen=self.k_max_sequence_len, padding="post", truncating="post")
num_words = len(self.tokenizer.word_index) + 1
logging.info("Number of unique words: %i" % num_words)
logging.info("Create Embedding matrix")
word_index = self.tokenizer.word_index
vocab_size = len(word_index) + 1
embedding_matrix = np.zeros((vocab_size, self.w2v_size))
for word, idx in word_index.items():
if word in w2v_words:
embedding_vector = self.w2v_model.wv.get_vector(word)
if embedding_vector is not None:
embedding_matrix[idx] = self.w2v_model.wv[word]
logging.info("Embedding matrix: %s" % str(embedding_matrix.shape))
logging.info("Build Keras model")
logging.info('x_train shape: %s' % str(x_train.shape))
logging.info('y_train shape: %s' % str(y_train.shape))
self.k_model = Sequential()
self.k_model.add(Embedding(vocab_size,
self.w2v_size,
weights=[embedding_matrix],
input_length=self.k_max_sequence_len,
trainable=False, name="w2v_embeddings"))
self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5, return_sequences=True), name="Bidirectional_LSTM_1"))
self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5), name="Bidirectional_LSTM_2"))
for hidden_layer in self.k_hidden_layer_neurons:
self.k_model.add(Dense(hidden_layer, activation='relu', name="dense_%s"%hidden_layer))
self.k_model.add(Dropout(0.2))
if self.num_classes > 1:
self.k_model.add(Dense(self.num_classes, activation='softmax', name="output_layer"))
else:
self.k_model.add(Dense(self.num_classes, activation='sigmoid'))
self.k_model.compile(loss='categorical_crossentropy' if self.num_classes > 1 else 'binary_crossentropy',
optimizer="adam",
metrics=['accuracy'])
logging.info(self.k_model.summary())
print(tf.keras.utils.plot_model(self.k_model, show_shapes=True, rankdir="LR"))
# Callbacks
early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True)
rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max')
callbacks = [early_stopping, rop]
logging.info("Fit Keras model")
self.history = self.k_model.fit(x_train, y_train,
batch_size=self.k_batch_size,
epochs=self.k_epochs,
callbacks=callbacks,
verbose=verbose,
validation_data=(x_test, y_test))
logging.info("Done")
return self.history
def preprocess(self, text):
"""Not implemented"""
pass
def retrain(self, data=None, filename="new_data.csv"):
"""
Method to train incrementally
:param data: DataFrame with the new data; when provided it is used instead of the CSV file
:param filename: CSV file that contains the new data to feed the algorithm. This CSV must contain the columns ("Comentarios", "Categorías")
"""
if data is None or data.empty:
df = pd.read_csv(filename)
else:
df = data
comments = df.Comentarios
tokens = [self.text_to_word_list(text) for text in comments]
labels = df.Categorias
labels = self.label_encoder.fit_transform(labels)
labels = tf.keras.utils.to_categorical(labels, self.num_classes)
sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(tokens),
maxlen=self.k_max_sequence_len, padding="post", truncating="post")
early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True)
rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max')
callbacks = [early_stopping, rop]
# logging.info("Fit Keras model")
history = self.k_model.fit(sequences, labels,
batch_size=self.k_batch_size,
epochs=10,
callbacks=callbacks,
verbose=1)
def predict(self, texts: np.array, return_df=False):
"""
Predict an array of comments
:param texts: numpy array of shape (n_samples,)
:param return_df: Whether to return only prediction labels or a dataframe containing
sentences and predicted labels
"""
comments = [self.text_to_word_list(text) for text in texts]
sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(comments),
maxlen=self.k_max_sequence_len, padding="post", truncating="post")
confidences = self.k_model.predict(sequences, verbose=1)
preds = [self.label_encoder.classes_[np.argmax(c)] for c in confidences]
if return_df:
results = pd.DataFrame(data={"Comments": texts, "Predictions": preds})
else:
results = np.array(preds)
return results
def evaluate(self, x_test, y_test):
"""
Evaluate Model with several KPI
:param x_test: Text to test
:param y_test: labels for text
:return: dictionary with KPIs
"""
result = {}
results = []
# Prepare test
x_test = [self.text_to_word_list(text) for text in x_test]
x_test = keras.preprocessing.sequence.pad_sequences(
self.tokenizer.texts_to_sequences(x_test),
maxlen=self.k_max_sequence_len, padding="post", truncating="post")
# Predict
confidences = self.k_model.predict(x_test, verbose=1)
y_pred_1d = []
for confidence in confidences:
idx = np.argmax(confidence)
y_pred_1d.append(self.label_encoder.classes_[idx])
y_pred_bin = []
for i in range(0, len(y_pred_1d)):
y_pred_bin.append(1 if y_pred_1d[i] == y_test[i] else 0)
# Classification report
result["CLASSIFICATION_REPORT"] = classification_report(y_test, y_pred_1d, output_dict=True)
result["CLASSIFICATION_REPORT_STR"] = classification_report(y_test, y_pred_1d)
# Confusion matrix
result["CONFUSION_MATRIX"] = confusion_matrix(y_test, y_pred_1d)
# Accuracy
result["ACCURACY"] = accuracy_score(y_test, y_pred_1d)
return result
def save(self, path="word2vec_keras.tar.gz"):
"""
Save all models in pickles file
:param path: path to save
"""
tokenizer_path = os.path.join(tempfile.gettempdir(), "tokenizer.pkl")
label_encoder_path = os.path.join(tempfile.gettempdir(), "label_encoder.pkl")
params_path = os.path.join(tempfile.gettempdir(), "params.pkl")
keras_path = os.path.join(tempfile.gettempdir(), "model.h5")
w2v_path = os.path.join(tempfile.gettempdir(), "model.w2v")
# Dump pickle
pickle.dump(self.tokenizer, open(tokenizer_path, "wb"))
pickle.dump(self.label_encoder, open(label_encoder_path, "wb"))
pickle.dump(self.__attributes__(), open(params_path, "wb"))
pickle.dump(self.w2v_model, open(w2v_path, "wb"))
self.k_model.save(keras_path)
# self.w2v_model.save(w2v_path)
# Create Tar file
tar = tarfile.open(path, "w:gz")
for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]:
tar.add(name, arcname=os.path.basename(name))
tar.close()
# Remove temp file
for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]:
os.remove(name)
def load(self, path, filename="word2vec_keras.tar.gz"):
"""
Load all attributes from path
:param path: tar.gz dump
"""
# Open tarfile
tar = tarfile.open(mode="r:gz", fileobj=open(os.path.join(path, filename), "rb"))
# Extract keras model
temp_dir = tempfile.gettempdir()
tar.extract("model.h5", temp_dir)
self.k_model = load_model(os.path.join(temp_dir, "model.h5"))
os.remove(os.path.join(temp_dir, "model.h5"))
# Iterate over every member
for filename in tar.getnames():
if filename == "model.w2v":
self.w2v_model = pickle.loads(tar.extractfile(filename).read())
if filename == "tokenizer.pkl":
self.tokenizer = pickle.loads(tar.extractfile(filename).read())
if filename == "label_encoder.pkl":
self.label_encoder = pickle.loads(tar.extractfile(filename).read())
if filename == "params.pkl":
params = pickle.loads(tar.extractfile(filename).read())
for k, v in params.items():
self.__setattr__(k, v)
def text_to_word_list(self, text, stem=False, stopw=False):
''' Pre process and convert texts to a list of words
'''
text = unidecode(text)
text = str(text)
text = text.lower()
# Clean the text
text = re.sub(r"<u.+>", "", text) # Remove emojis
text = re.sub(r"[^A-Za-z0-9^,!?.\/'+]", " ", text)
text = re.sub(r",", " ", text)
text = re.sub(r"\.", " ", text)
text = re.sub(r"!", " ! ", text)
text = re.sub(r"\?", " ? ", text)
text = re.sub(r"'", " ", text)
text = re.sub(r":", " : ", text)
text = re.sub(r"\s{2,}", " ", text)
text = text.split()
if stopw:
# Remove stopw
stopw = stopwords.words("spanish")
stopw.remove("no")
text = [word for word in text if word not in stopw and len(word) > 1]
# if stem:
# stemmer = SnowballStemmer("spanish")
# text = [stemmer.stem(word) for word in text]
# text = " ".join(text)
return text
def __attributes__(self):
"""
Attributes to dump
:return: dictionary
"""
return {
"w2v_size": self.w2v_size,
"w2v_window": self.w2v_window,
"w2v_min_count": self.w2v_min_count,
"w2v_epochs": self.w2v_epochs,
"num_classes": self.num_classes,
"k_max_sequence_len": self.k_max_sequence_len,
"k_batch_size": self.k_batch_size,
"k_epochs": self.k_epochs,
"k_lstm_neurons": self.k_lstm_neurons,
"k_hidden_layer_neurons": self.k_hidden_layer_neurons,
"history": self.history.history
}
_____no_output_____
</code>
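The checkpoint defines the class but never exercises it end to end; a minimal usage sketch, assuming the cleaned comments and labels from df2 above (the split, column choices, and small epoch counts are illustrative, not from the notebook):
<code>
# Hypothetical smoke test of the Word2VecKeras wrapper defined above
from sklearn.model_selection import train_test_split

texts = df2.loc[df2.State, "Clean_text"].tolist()
labels = df2.loc[df2.State, "tipo"].tolist()
x_train, x_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=20)

w2v_clf = Word2VecKeras()
w2v_clf.train(x_train, y_train, corpus=texts, x_test=x_test, y_test=y_test,
              w2v_epochs=10, k_epochs=5)   # small epoch counts, only to check the pipeline runs
print(w2v_clf.evaluate(x_test, y_test)["CLASSIFICATION_REPORT_STR"])
w2v_clf.save("word2vec_keras_demo.tar.gz")
</code>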
| {
"repository": "ale-telefonica/market",
"path": "Scr/trainning/.ipynb_checkpoints/Untitled-checkpoint.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 108720,
"hexsha": "d02054182c741f31b3c1fb1e9f7c693fbc9a0294",
"max_line_length": 1616,
"avg_line_length": 43.7329042639,
"alphanum_fraction": 0.4382082414
} |
# Notebook from SandyGuru/TeamFunFinalProject
Path: Run Project Models - Census Data.ipynb
<code>
from sklearn import *
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import cross_validation
from sklearn import tree
from sklearn import neighbors
from sklearn import svm
from sklearn import ensemble
from sklearn import cluster
from sklearn import model_selectionC:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
DeprecationWarning)
C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\learning_curve.py:22: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the functions are moved. This module will be removed in 0.20
DeprecationWarning)
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns #for graphics and figure styling
import pandas as pd_____no_output_____data = pd.read_csv('adult.data.txt', sep=", ", encoding='latin1', header=None)C:\Users\Victoria\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
"""Entry point for launching an IPython kernel.
data.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']_____no_output_____data.head()_____no_output_____data.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 32561 entries, 0 to 32560
Data columns (total 15 columns):
Age 32561 non-null int64
Status 32561 non-null object
Weight 32561 non-null int64
Degree 32561 non-null object
Education 32561 non-null int64
Married 32561 non-null object
Occupation 32561 non-null object
Relationship 32561 non-null object
Race 32561 non-null object
Sex 32561 non-null object
Gain 32561 non-null int64
Loss 32561 non-null int64
Hours 32561 non-null int64
Country 32561 non-null object
Income 32561 non-null object
dtypes: int64(6), object(9)
memory usage: 3.7+ MB
from sklearn.preprocessing import LabelEncoder_____no_output_____data = data.apply(LabelEncoder().fit_transform)_____no_output_____dataIncomeColumn = data.Income_____no_output_____dataIncomeColumn.head()_____no_output_____data= data.drop('Income', axis=1)_____no_output_____from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data)_____no_output_____data_____no_output_____standardized_data = scaler.transform(data)_____no_output_____data_Test = pd.read_csv('adult.test.txt', sep=", ", encoding='latin1', header=None)
data_Test.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']
enc = LabelEncoder()
data_Test = data_Test.apply(LabelEncoder().fit_transform)
data_TestIncomeColumn = data_Test.Income
data_Test=data_Test.drop('Income', axis=1)C:\Users\Victoria\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
"""Entry point for launching an IPython kernel.
data_Test_____no_output_____data_TestIncomeColumn.head()_____no_output_____standardized_test_data = scaler.transform(data_Test)_____no_output_____standardized_test_data_____no_output_____a=0
b=0
for col in data:
for i in data[col].isnull():
if i:
a+=1
b+=1
print('Missing data in',col,'is',a/b*100,'%')
a=0
b=0
##check for missing data
##so now, we have standardized_data and standardized_test_data that we can run our models onMissing data in Age is 0.0 %
Missing data in Status is 0.0 %
Missing data in Weight is 0.0 %
Missing data in Degree is 0.0 %
Missing data in Education is 0.0 %
Missing data in Married is 0.0 %
Missing data in Occupation is 0.0 %
Missing data in Relationship is 0.0 %
Missing data in Race is 0.0 %
Missing data in Sex is 0.0 %
Missing data in Gain is 0.0 %
Missing data in Loss is 0.0 %
Missing data in Hours is 0.0 %
Missing data in Country is 0.0 %
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
censusIDM = RandomForestClassifier(max_depth=3, random_state=0)
from sklearn.feature_selection import RFE
rfe = RFE(censusIDM, n_features_to_select=6)
rfe.fit(standardized_data, dataIncomeColumn)_____no_output_____rfe.ranking______no_output_____predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#standardized_data for the training_____no_output_____goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
#good=(predictOutput==dataIncomeColumn).sum();good - for training error#13606
27347
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)2675
5214
good/(good+bad)*100_____no_output_____goodTest/(goodTest+badTest)*100_____no_output_____#Using the Random Forest Classifier on our Data, with depth 3.
censusIDM = RandomForestClassifier(max_depth=3, random_state=0)
frfe = RFE(censusIDM, n_features_to_select=3)
frfe.fit(standardized_data, dataIncomeColumn)
print(frfe.ranking_)
predict_TestOutput=frfe.predict(standardized_test_data)
predictOutput=frfe.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)[ 3 9 12 5 1 2 8 1 11 6 1 7 4 10]
13623
27354
2658
5207
#Using the Random Forest Classifier on our Data, with depth 7.
censusIDM = RandomForestClassifier(max_depth=7, random_state=0)
frfe = RFE(censusIDM, n_features_to_select=3)
frfe.fit(standardized_data, dataIncomeColumn)
print(frfe.ranking_)
predict_TestOutput=frfe.predict(standardized_test_data)
predictOutput=frfe.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)[ 4 10 9 5 1 2 7 1 12 8 1 3 6 11]
13670
27656
2611
4905
#Testing the Logistic Regression model with a different number of features to select, to see if the accuracy changes significantly or not
from sklearn.linear_model import LinearRegression
beerIDM = linear_model.LogisticRegression()
rfe2 = RFE(beerIDM, n_features_to_select=4)
rfe2.fit(standardized_data, dataIncomeColumn)
print(rfe2.ranking_)
predict_TestOutput=rfe2.predict(standardized_test_data)
predictOutput=rfe2.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
good/(good+bad)[ 1 10 8 7 1 3 9 5 6 1 1 4 2 11]
13258
26575
3023
5986
n=50
precision=[0]*n
for i in range(1,n+1):
censusIDM = RandomForestClassifier(max_depth=i, random_state=0)
rfe = RFE(censusIDM, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();
good=(predictOutput==dataIncomeColumn).sum();
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();
bad=(predictOutput!=dataIncomeColumn).sum();
precision[i-1]=good/(good+bad);
_____no_output_____fig=plt.figure(figsize=[20,10])
plt.plot(range(1,n+1),precision)
plt.xlabel('Depth', fontsize=20)
plt.ylabel('Precision', fontsize=20)
plt.title('RandomForestClassifier', fontsize=20)
fig.savefig('RandomForest2.pdf',dpi=200)_____no_output_____#Linear Model Lasso currently not working: Lasso is a regressor, so its continuous predictions never equal the integer class labels, which is why every comparison below fails.
from sklearn import linear_model
clf = linear_model.Lasso(alpha=0.1)
rfe = RFE(clf, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
print(rfe.ranking_)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
[11 10 9 8 7 6 5 4 3 2 1 1 1 1]
0
0
16281
32561
#Running the Perceptron Model on our data
from sklearn.linear_model import Perceptron
clf = linear_model.Perceptron()
rfe = RFE(clf, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
print(rfe.ranking_)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\linear_model\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
"and default tol will be 1e-3." % type(self), FutureWarning)
standardized_data2 = pd.DataFrame(standardized_data)
standardized_test_data2 = pd.DataFrame(standardized_test_data)
standardizedFrames = [standardized_data2, standardized_test_data2]
standardizedResult = pd.concat(standardizedFrames)
dataIncomeColumn2 = pd.DataFrame(dataIncomeColumn)
data_TestIncomeColumn2 = pd.DataFrame(data_TestIncomeColumn)
combinedIncomeColumn = [dataIncomeColumn2, data_TestIncomeColumn2]
combinedResult = pd.concat(combinedIncomeColumn) _____no_output_____from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
#variables with the L suffix are re-created for each train/test split inside the loop below
rng = np.random.RandomState(42)
yy = []
heldout = [0.95, 0.90, .85, .8, 0.75, .7, .65, 0.6, .55, 0.5, 0.45, 0.4, 0.35, .3, .25, .2, .15, .1, .05, 0.01]
xx = 1. - np.array(heldout)
rounds = 20
for i in heldout:
yy_ = []
for r in range(rounds):
#clf = SGDClassifier()
clf = SVR(kernel="linear")
standardized_dataL, standardized_test_dataL, dataIncomeColumnL, data_TestIncomeColumnL = \
train_test_split(standardizedResult, combinedResult, test_size=i, random_state=rng)
clf.fit(standardized_dataL, dataIncomeColumnL)
y_pred = clf.predict(standardized_test_dataL)
yy_.append(1 - sum(y_pred == data_TestIncomeColumnL.Income)/len(y_pred))
yy.append(np.mean(yy_))
plt.plot(xx, yy, label='Linear Regression')
plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\utils\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
fig=plt.figure(figsize=[20,10])
plt.plot(xx, yy, label='Linear Regression')
plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()
fig.savefig('test2png.pdf', dpi=100)_____no_output_____xx,yy_____no_output_____#k-nearest Neighbors
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=2, algorithm='ball_tree').fit(standardized_data, dataIncomeColumn)  # NearestNeighbors has no predict(); the classifier variant is needed here
predict_TestOutput=clf.predict(standardized_test_data)
predictOutput=clf.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)_____no_output_____from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(verbose=0, random_state=0)
mlp.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=mlp.predict(standardized_test_data)
predictOutput=mlp.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)13867
28048
2414
4513
#SVM
from sklearn.svm import SVR
clf = SVR(kernel="linear")
rfe4 = RFE(clf, n_features_to_select=5)
rfe4.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=rfe4.predict(standardized_test_data)  # use the SVR-based selector fitted above, not the earlier rfe
predictOutput=rfe4.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)13606
27347
2675
5214
#Running The Random Forest OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.
warn("Some inputs do not have OOB scores. "
C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:458: RuntimeWarning: invalid value encountered in true_divide
predictions[k].sum(axis=1)[:, np.newaxis])
#Running The Random Forest OOB Error Rate Chart over a smaller n_estimators range
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 1
max_estimators = 25
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.
warn("Some inputs do not have OOB scores. "
C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:458: RuntimeWarning: invalid value encountered in true_divide
predictions[k].sum(axis=1)[:, np.newaxis])
#Running The Extra Trees OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.
  warn("Some inputs do not have OOB scores. "
C:\Users\Victoria\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:458: RuntimeWarning: invalid value encountered in true_divide
  predictions[k].sum(axis=1)[:, np.newaxis])
[this warning pair repeats many more times in the original output; truncated]
plt.plot(xss[0],yss[0],'v');
plt.plot(xss[2],yss[2],'o');
plt.plot(xss[1],yss[1],'-')
_____no_output_____yss=np.asarray(yss)
xss=np.asarray(xss)_____no_output_____help(plt.plot
)Help on function plot in module matplotlib.pyplot:
plot(*args, **kwargs)
Plot lines and/or markers to the
:class:`~matplotlib.axes.Axes`. *args* is a variable length
argument, allowing for multiple *x*, *y* pairs with an
optional format string. For example, each of the following is
legal::
plot(x, y) # plot x and y using default line style and color
plot(x, y, 'bo') # plot x and y using blue circle markers
plot(y) # plot y using x as index array 0..N-1
plot(y, 'r+') # ditto, but with red plusses
If *x* and/or *y* is 2-dimensional, then the corresponding columns
will be plotted.
If used with labeled data, make sure that the color spec is not
included as an element in data, as otherwise the last case
``plot("v","r", data={"v":..., "r":...)``
can be interpreted as the first case which would do ``plot(v, r)``
using the default line style and color.
If not used with labeled data (i.e., without a data argument),
an arbitrary number of *x*, *y*, *fmt* groups can be specified, as in::
a.plot(x1, y1, 'g^', x2, y2, 'g-')
Return value is a list of lines that were added.
By default, each line is assigned a different style specified by a
'style cycle'. To change this behavior, you can edit the
axes.prop_cycle rcParam.
The following format string characters are accepted to control
the line style or marker:
================ ===============================
character description
================ ===============================
``'-'`` solid line style
``'--'`` dashed line style
``'-.'`` dash-dot line style
``':'`` dotted line style
``'.'`` point marker
``','`` pixel marker
``'o'`` circle marker
``'v'`` triangle_down marker
``'^'`` triangle_up marker
``'<'`` triangle_left marker
``'>'`` triangle_right marker
``'1'`` tri_down marker
``'2'`` tri_up marker
``'3'`` tri_left marker
``'4'`` tri_right marker
``'s'`` square marker
``'p'`` pentagon marker
``'*'`` star marker
``'h'`` hexagon1 marker
``'H'`` hexagon2 marker
``'+'`` plus marker
``'x'`` x marker
``'D'`` diamond marker
``'d'`` thin_diamond marker
``'|'`` vline marker
``'_'`` hline marker
================ ===============================
The following color abbreviations are supported:
========== ========
character color
========== ========
'b' blue
'g' green
'r' red
'c' cyan
'm' magenta
'y' yellow
'k' black
'w' white
========== ========
In addition, you can specify colors in many weird and
wonderful ways, including full names (``'green'``), hex
strings (``'#008000'``), RGB or RGBA tuples (``(0,1,0,1)``) or
grayscale intensities as a string (``'0.8'``). Of these, the
string specifications can be used in place of a ``fmt`` group,
but the tuple forms can be used only as ``kwargs``.
Line styles and colors are combined in a single format string, as in
``'bo'`` for blue circles.
The *kwargs* can be used to set line properties (any property that has
a ``set_*`` method). You can use this to set a line label (for auto
legends), linewidth, anitialising, marker face color, etc. Here is an
example::
plot([1,2,3], [1,2,3], 'go-', label='line 1', linewidth=2)
plot([1,2,3], [1,4,9], 'rs', label='line 2')
axis([0, 4, 0, 10])
legend()
If you make multiple lines with one plot command, the kwargs
apply to all those lines, e.g.::
plot(x1, y1, x2, y2, antialiased=False)
Neither line will be antialiased.
You do not need to use format strings, which are just
abbreviations. All of the line properties can be controlled
by keyword arguments. For example, you can set the color,
marker, linestyle, and markercolor with::
plot(x, y, color='green', linestyle='dashed', marker='o',
markerfacecolor='blue', markersize=12).
See :class:`~matplotlib.lines.Line2D` for details.
The kwargs are :class:`~matplotlib.lines.Line2D` properties:
agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha: float (0.0 transparent through 1.0 opaque)
animated: bool
antialiased or aa: [True | False]
clip_box: a `~.Bbox` instance
clip_on: bool
clip_path: [(`~matplotlib.path.Path`, `~.Transform`) | `~.Patch` | None]
color or c: any matplotlib color
contains: a callable function
dash_capstyle: ['butt' | 'round' | 'projecting']
dash_joinstyle: ['miter' | 'round' | 'bevel']
dashes: sequence of on/off ink in points
drawstyle: ['default' | 'steps' | 'steps-pre' | 'steps-mid' | 'steps-post']
figure: a `~.Figure` instance
fillstyle: ['full' | 'left' | 'right' | 'bottom' | 'top' | 'none']
gid: an id string
label: object
linestyle or ls: ['solid' | 'dashed', 'dashdot', 'dotted' | (offset, on-off-dash-seq) | ``'-'`` | ``'--'`` | ``'-.'`` | ``':'`` | ``'None'`` | ``' '`` | ``''``]
linewidth or lw: float value in points
marker: :mod:`A valid marker style <matplotlib.markers>`
markeredgecolor or mec: any matplotlib color
markeredgewidth or mew: float value in points
markerfacecolor or mfc: any matplotlib color
markerfacecoloralt or mfcalt: any matplotlib color
markersize or ms: float
markevery: [None | int | length-2 tuple of int | slice | list/array of int | float | length-2 tuple of float]
path_effects: `~.AbstractPathEffect`
picker: float distance in points or callable pick function ``fn(artist, event)``
pickradius: float distance in points
rasterized: bool or None
sketch_params: (scale: float, length: float, randomness: float)
snap: bool or None
solid_capstyle: ['butt' | 'round' | 'projecting']
solid_joinstyle: ['miter' | 'round' | 'bevel']
transform: a :class:`matplotlib.transforms.Transform` instance
url: a url string
visible: bool
xdata: 1D array
ydata: 1D array
zorder: float
kwargs *scalex* and *scaley*, if defined, are passed on to
:meth:`~matplotlib.axes.Axes.autoscale_view` to determine
whether the *x* and *y* axes are autoscaled; the default is
*True*.
.. note::
In addition to the above described arguments, this function can take a
**data** keyword argument. If such a **data** argument is given, the
following arguments are replaced by **data[<arg>]**:
* All arguments with the following names: 'x', 'y'.
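# [Added sketch, not part of the original notebook] Quick illustration of the format strings
# documented in the help text above: color, marker and line style combined in one fmt string,
# next to the equivalent explicit keyword form.
demo_x = np.arange(5)
plt.plot(demo_x, demo_x, 'go-', label="fmt 'go-': green circles, solid line")
plt.plot(demo_x, demo_x ** 2, color='red', linestyle='--', marker='s', label='explicit kwargs')
plt.legend(loc='upper left')
plt.show()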
#Running The Random Forest Test Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the test error on the held-out set for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()[same sklearn UserWarning/RuntimeWarning pair as above, repeated many times; truncated]
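# [Added note, not part of the original notebook] Two different error estimates are used in
# this notebook and are easy to conflate:
#   * OOB error  = 1 - clf.oob_score_, estimated from the bootstrap samples each tree did NOT
#     see during fitting (needs oob_score=True, and bootstrap=True for ExtraTrees);
#   * test error = fraction of wrong predictions on the held-out set, e.g. (with the variables
#     used above) 1 - np.mean(clf.predict(standardized_test_data) == data_TestIncomeColumn).
# The two charts below report the held-out test error.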
#Running The Extra Trees Test Error Plot
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTrees, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the test error on the held-out set for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("Test Error Rate")
plt.legend(loc="upper right")
plt.show()[same sklearn UserWarning/RuntimeWarning pair as above, repeated many times; truncated]
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 20
max_estimators = 30
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the test error on the held-out set for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("Test error rate")
plt.legend(loc="upper right")
plt.show()[same sklearn UserWarning/RuntimeWarning pair as above, repeated many times; truncated]
</code>
| {
"repository": "SandyGuru/TeamFunFinalProject",
"path": "Run Project Models - Census Data.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 565492,
"hexsha": "d0211edcd40b7d9750abbdfb8861341442f657cc",
"max_line_length": 44012,
"avg_line_length": 101.798739874,
"alphanum_fraction": 0.7373473011
} |
# Notebook from drammock/mne-tools.github.io
Path: 0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
<code>
%matplotlib inline_____no_output_____
</code>
# Brainstorm CTF phantom tutorial dataset
Here we compute the evoked from raw for the Brainstorm CTF phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf
References
----------
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
_____no_output_____
<code>
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import fit_dipole
from mne.datasets.brainstorm import bst_phantom_ctf
from mne.io import read_raw_ctf
print(__doc__)_____no_output_____
</code>
The data were collected with a CTF system at 2400 Hz.
_____no_output_____
<code>
data_path = bst_phantom_ctf.data_path()
# Switch to these to use the higher-SNR data:
# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')
# dip_freq = 7.
raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')
dip_freq = 23.
erm_path = op.join(data_path, 'emptyroom_20150709_01.ds')
raw = read_raw_ctf(raw_path, preload=True)_____no_output_____
</code>
The sinusoidal signal is generated on channel HDAC006, so we can use
that to obtain precise timing.
_____no_output_____
<code>
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]
plt.figure()
plt.plot(times[times < 1.], sinusoid.T[times < 1.])_____no_output_____
</code>
Let's create some events using this signal by thresholding the sinusoid.
_____no_output_____
<code>
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp
events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T_____no_output_____
</code>
The CTF software compensation works reasonably well:
_____no_output_____
<code>
raw.plot()_____no_output_____
</code>
But here we can get slightly better noise suppression, lower localization
bias, and a better dipole goodness of fit with spatio-temporal (tSSS)
Maxwell filtering:
_____no_output_____
<code>
raw.apply_gradient_compensation(0) # must un-do software compensation first
mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)
raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)
raw.plot()_____no_output_____
</code>
Our choice of tmin and tmax should capture exactly one cycle, so
we can make the unusual choice of baselining using the entire epoch
when creating our evoked data. We also then crop to a single time point
(@t=0) because this is a peak in our signal.
_____no_output_____
<code>
tmin = -0.5 / dip_freq
tmax = -tmin
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,
baseline=(None, None))
evoked = epochs.average()
evoked.plot()
evoked.crop(0., 0.)_____no_output_____
</code>
Let's use a sphere head geometry model and check the coordinate
alignment and the sphere location.
_____no_output_____
<code>
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
mne.viz.plot_alignment(raw.info, subject='sample',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain'])
del raw, epochs_____no_output_____
</code>
To do a dipole fit, let's use the covariance provided by the empty room
recording.
_____no_output_____
<code>
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)
raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',
**mf_kwargs)
cov = mne.compute_raw_covariance(raw_erm)
del raw_erm
dip, residual = fit_dipole(evoked, cov, sphere)_____no_output_____
</code>
Compare the actual position with the estimated one.
_____no_output_____
<code>
expected_pos = np.array([18., 0., 49.])
diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))
print('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))
print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))
print('Difference: %0.1f mm' % diff)
print('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))
print('GOF: %0.1f %%' % dip.gof[0])_____no_output_____
</code>
| {
"repository": "drammock/mne-tools.github.io",
"path": "0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": null,
"size": 7455,
"hexsha": "d021c58f5bce572203c05790d8e2e616371510a2",
"max_line_length": 531,
"avg_line_length": 34.5138888889,
"alphanum_fraction": 0.5441985245
} |
# Notebook from freshskates/machine-learning
Path: Robert_Cacho_Proj2_stats_notebook.ipynb
## Instructions
Please make a copy and rename it with your name (ex: Proj6_Ilmi_Yoon). All grading points should be explored in the notebook but some can be done in a separate pdf file.
*Graded questions will be listed with "Q:" followed by the corresponding points.*
You will be submitting **a pdf** file containing **the url of your own proj6.**
---_____no_output_____**Hypothesis testing**
===
**Outline**
At the end of this week, you will be a pro at:
- **hypothesis testing**
* is there something interesting/meaningful going on in my data?
- one-sample t-test
- two-sample t-test
- **correcting for multiple testing**
* doing thousands of hypothesis tests at a time will increase your likelihood of incorrect conclusions
* you'll learn how to account for that
- **false discovery rates**
* you could be a perfectionist ("even one wrong conclusion is the worst"), aka family-wise error rate (FWER)
 * or become a pragmatist ("of my significant discoveries, I expect x% of them to be false positives"), aka false discovery rate (FDR)
- **permutation tests**
* if your assumptions about your data are wrong, you may over/underestimate your confidence in your conclusions
 * assume as little as possible about the data with a permutation test (a minimal sketch follows below)
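A minimal permutation-test sketch, added here for illustration (not part of the original notebook; the toy group sizes and effect size are assumptions): shuffle the group labels many times and ask how often the shuffled difference in means is at least as extreme as the observed one.
<code>
import numpy as np
rng_perm = np.random.RandomState(0)
group_a = rng_perm.normal(3.2, 1, 50)  # toy data, e.g. hours of Netflix in group A
group_b = rng_perm.normal(2.8, 1, 50)  # toy data for group B
observed_diff = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])
null_diffs = []
for _ in range(10000):
    shuffled = rng_perm.permutation(pooled)  # break any real group structure
    null_diffs.append(shuffled[:50].mean() - shuffled[50:].mean())
# two-sided p-value: fraction of shuffles at least as extreme as the observed difference
p_perm = np.mean(np.abs(null_diffs) >= np.abs(observed_diff))
print('observed difference', observed_diff.round(2), 'permutation p-value', p_perm)
</code>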
_____no_output_____**Examples**
In class, we will talk about 3 examples:
- confidence intervals
- how much time do Americans spend on average per day on Netflix?
- one-sample t-test
- do Americans spend more time on average per day on Netflix compared to before the pandemic?
- two-sample t-test
- does exercise affect baseline blood pressure?
**Your project**
- RNA sequencing: which genes differentiate the different immune cells in your blood?
- two-sample t-test
 - multiple testing correction (a minimal sketch of both steps follows right below)
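As a preview of these two steps, here is a minimal sketch added for illustration (not part of the original notebook): a two-sample t-test per simulated "gene" followed by Benjamini-Hochberg FDR correction. The toy data and the `statsmodels` dependency are assumptions.
<code>
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests
rng_demo = np.random.RandomState(0)
pvals = []
for gene in range(1000):                  # 1000 toy "genes"; the first 50 have a real shift
    cells_a = rng_demo.normal(0, 1, 30)
    shift = 1.0 if gene < 50 else 0.0
    cells_b = rng_demo.normal(shift, 1, 30)
    t, p = ttest_ind(cells_a, cells_b)    # two-sample t-test for this gene
    pvals.append(p)
reject, pvals_bh, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
print('significant before correction:', int(np.sum(np.array(pvals) < 0.05)))
print('significant after BH correction:', int(reject.sum()))
</code>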
_____no_output_____**How do you make the best of this week?**
- start seeing all statistics reported around you, and think of how they relate to what we have learned.
- do rigorous statistics in your work from now on_____no_output_____**LET'S BEGIN!**
===============================================================_____no_output_____
<code>
#import python packages
import numpy as np
import scipy as sp
import scipy.stats as st
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt_____no_output_____rng=np.random.RandomState(1234) #this will ensure the reproducibility of the notebook_____no_output_____
</code>
**EXAMPLE I:**
===
How much time do subscribers spend on average each day on Netflix?
--
Example discussed in class (Lecture 1). The data we are working with are simulated, but the mean time spent on Netflix is inspired by https://www.pcmag.com/news/us-netflix-subscribers-watch-32-hours-and-use-96-gb-of-data-per-day (average of 3.2 hours for subscribers).
_____no_output_____
<code>
#Summarizing data
#================
population=np.array([1,1.8,2,3.2,3.3,4,4,4.2])
our_sample=np.array([2,3.2,4])
#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))
#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))
plt.hist(population,range(0,6),color='black')
plt.yticks([0,1,2])
plt.xlabel('Number of hours spent\nper day on Netflix')
plt.ylabel('Number of observations')
plt.show()Population mean 2.94
- Sample mean 3.07
Population standard deviation 1.12
- Biased sample standard deviation 0.82
- Unbiased sample standard deviation 1.01
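# [Added sketch, not part of the original notebook] numpy gives the same two estimates
# directly through the ddof argument: ddof=0 divides by n (biased), ddof=1 by n-1 (unbiased).
print('- np.std with ddof=0 (biased)  ', np.std(our_sample, ddof=0).round(2))
print('- np.std with ddof=1 (unbiased)', np.std(our_sample, ddof=1).round(2))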
#larger example
MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=1000)
population[population<0]=0
our_sample=population[0:100]
#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))
#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))Population mean 3.22
- Sample mean 3.24
Population standard deviation 0.97
- Biased sample standard deviation 0.98
- Unbiased sample standard deviation 0.99
#representing sets of datapoints
#===============================
#histograms
plt.hist(population,[x*0.6 for x in range(10)],color='lightgray',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()
plt.hist(our_sample,[x*0.6 for x in range(10)],color='lightblue',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()
#densities
sns.distplot(population, hist=True, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()
sns.distplot(our_sample, hist=True, kde=True,
bins=[x*0.6 for x in range(10)], color = 'blue',
hist_kws={'edgecolor':'black','color':'lightblue'},
kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()
#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
sns.distplot(our_sample, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'blue',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=x < 2, color='lightblue', alpha=0.3)
plt.xlim(0,6)
plt.show()
_____no_output_____
#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=(x < 4) & (x>2), color='gray', alpha=0.3)
plt.xlim(0,6)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
plt.show()
np.multiply((population<=4),(population>=2)).sum()/population.shape[0]C:\Users\freshskates\.conda\envs\ml\lib\site-packages\seaborn\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).
warnings.warn(msg, FutureWarning)
#brute force confidence interval
N_POPULATION=10000
N_SAMPLE=1000
population=np.random.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,10)
mean_i=np.mean(sample_i)
sample_means.append(mean_i)
sample_means=np.array(sample_means)
#sd of the mean
means_mean=np.mean(sample_means)
means_sd=np.std(sample_means)
print('Mean of the means',means_mean)
print('SEM (SD of the means)',means_sd)
plt.hist(sample_means,100,color='red')
plt.xlabel('Number of hours spent on Netflix\nper day\nMEANS OF SAMPLES')
plt.xlim(0,6)
plt.axvline(x=means_mean,color='black')
plt.axvline(x=means_mean-means_sd,color='black',linestyle='--')
plt.axvline(x=means_mean+means_sd,color='black',linestyle='--')
plt.show()
#compute what fraction of points are within 1 means_sd from means_mean
within_1sd=0
within_2sd=0
for i in range(sample_means.shape[0]):
m=sample_means[i]
if m>=(means_mean-means_sd) and m<=(means_mean+means_sd):
within_1sd+=1
if m>=(means_mean-2*means_sd) and m<=(means_mean+2*means_sd):
within_2sd+=1
print('within 1 means SD:',within_1sd/sample_means.shape[0])
print('within 2 means SD:',within_2sd/sample_means.shape[0])Mean of the means 3.1934702093436456
SEM (SD of the means) 0.3191115562654143
from scipy import stats
print('SEM (SD of the means), empirically calculated',means_sd.round(2))
print('SEM computed in python',stats.sem(sample_i).round(2))
SEM (SD of the means), empirically calculated 0.32
SEM computed in python 0.34
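#added sketch: the standard error of the mean is just the sample standard deviation divided by sqrt(n);
#stats.sem computes exactly this (with ddof=1 by default)
print('SEM by hand, sd/sqrt(n)',(np.std(sample_i,ddof=1)/np.sqrt(sample_i.shape[0])).round(2))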
#one sample t test in python
from scipy.stats import ttest_1samp
MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=1000)
population[population<0]=0
our_sample=population[0:10]
print(our_sample.round(2))
print(our_sample.mean())
print(our_sample.std())
TEST_VALUE=1.5
t, pvalue = ttest_1samp(our_sample, popmean=TEST_VALUE)
print('t', t.round(2))
print('p-value', pvalue.round(6))[1.62 1.58 3.25 1.52 4.6 2.36 4.01 3.15 3.73 2.39]
2.8206757978474783
1.0381608736065147
t 3.82
p-value 0.004113
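#added sketch: the one-sample t statistic can be reproduced by hand as (sample mean - test value) / SEM
t_by_hand=(our_sample.mean()-TEST_VALUE)/(our_sample.std(ddof=1)/np.sqrt(our_sample.shape[0]))
print('t computed by hand', t_by_hand.round(2))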
#confidence intervals
#=====================
#take 200 samples (N_SAMPLE)
#compute their confidence intervals
#plot them
import scipy.stats as st
N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
ci_lows=[]
ci_highs=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,10)
mean_i=np.mean(sample_i)
ci=st.t.interval(alpha=CONFIDENCE,
df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))
ci_lows.append(ci[0])
ci_highs.append(ci[1])
sample_means.append(mean_i)
data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
data=data.sort_values(by='mean')
data.index=range(N_SAMPLE)
print(data)
for i in range(N_SAMPLE):
color='gray'
if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
color='red'
plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
#plt.scatter(data['mean'],range(N_SAMPLE),color='black')
plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
plt.xlabel('Mean time spent on Netflix')
plt.ylabel('Sampling iteration')
plt.xlim(0,10)
plt.show() mean ci_low ci_high
0 2.328428 1.865753 2.791103
1 2.380752 1.843621 2.917883
2 2.447427 1.780282 3.114572
3 2.563095 2.087869 3.038322
4 2.631692 2.047800 3.215584
.. ... ... ...
195 3.897081 3.015483 4.778678
196 3.908490 3.559905 4.257074
197 3.961320 3.316092 4.606547
198 3.977324 3.158234 4.796413
199 4.137394 3.541674 4.733114
[200 rows x 3 columns]
#confidence intervals
#=====================
#take 200 samples (N_SAMPLE)
#compute their confidence intervals
#plot them
import scipy.stats as st
N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
ci_lows=[]
ci_highs=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,100)
mean_i=np.mean(sample_i)
ci=st.t.interval(alpha=CONFIDENCE,
df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))
ci_lows.append(ci[0])
ci_highs.append(ci[1])
sample_means.append(mean_i)
data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
data=data.sort_values(by='mean')
data.index=range(N_SAMPLE)
print(data)
for i in range(N_SAMPLE):
color='gray'
if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
color='red'
plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
#plt.scatter(data['mean'],range(N_SAMPLE),color='black')
plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
plt.xlabel('Mean time spent on Netflix')
plt.ylabel('Sampling iteration')
plt.xlim(0,10)
plt.show() mean ci_low ci_high
0 2.892338 2.733312 3.051364
1 2.895408 2.729131 3.061684
2 2.946665 2.799507 3.093822
3 2.957376 2.781855 3.132897
4 2.968571 2.784393 3.152748
.. ... ... ...
195 3.427391 3.250704 3.604077
196 3.431677 3.258160 3.605194
197 3.434345 3.274334 3.594356
198 3.450142 3.279478 3.620806
199 3.476758 3.310008 3.643508
[200 rows x 3 columns]
</code>
**EXAMPLE II:**
===
Is exercise associated with lower baseline blood pressure?
--
We will simulate data with control mean 120 mmHg, treatment mean 116 mmHg and population sd 5 for both conditions._____no_output_____
<code>
#simulate dataset
#=====================
def sample_condition_values(condition_mean,
condition_var,
condition_N,
condition=''):
condition_values=np.random.normal(loc = condition_mean,
scale=condition_var,
size = condition_N)
data_condition_here=pd.DataFrame({'BP':condition_values,
'condition':condition})
return(data_condition_here)
#=========================================================================
N_per_condition=10
ctrl_mean=120
test_mean=116
v=5
np.random.seed(1)
data_ctrl=sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='couch')
data_test=sample_condition_values(condition_mean=test_mean,
condition_N=N_per_condition,
condition_var=v,
condition='exercise')
data=pd.concat([data_ctrl,data_test],axis=0)
print(data) BP condition
0 128.121727 couch
1 116.941218 couch
2 117.359141 couch
3 114.635157 couch
4 124.327038 couch
5 108.492307 couch
6 128.724059 couch
7 116.193965 couch
8 121.595195 couch
9 118.753148 couch
0 123.310540 exercise
1 105.699296 exercise
2 114.387914 exercise
3 114.079728 exercise
4 121.668847 exercise
5 110.500544 exercise
6 115.137859 exercise
7 111.610708 exercise
8 116.211069 exercise
9 118.914076 exercise
#visualize data
#=====================
sns.catplot(x='condition',y='BP',data=data,height=2,aspect=1.5)
plt.ylabel('BP')
plt.show()_____no_output_____sns.catplot(data=data,x='condition',y='BP',
jitter=1,
)
plt.show()
sns.catplot(data=data,x='condition',y='BP',kind='box',)
plt.show()
sns.catplot(data=data,x='condition',y='BP',kind='violin',)
plt.show()
fig,plots=plt.subplots(1)
sns.boxplot(data=data,x='condition',y='BP',
ax=plots,
)
sns.stripplot(data=data,x='condition',y='BP',
jitter=1,
ax=plots,alpha=0.25,
)
plt.show()_____no_output_____
</code>
In our hypothesis test, we ask if these two groups differ significantly from each other. It's a bit hard to say just from looking at the plot.
This is where statistics comes in. It's time to:
*3. Think about how much the data surprise you, given your null model*
We'll convert this step to some math, as follows:
**Step 1. summarize the difference between the groups with a number.**
This is called a **test statistic**
"How to define the test statistic?" you say?
The world is your oyster. You are free to choose anything you wish.
(Later, we'll see that some choices come with nice math, which is why they are typically used. But a test statistic could be anything)
To demonstrate this intuition, let's come up with a very basic test statistic. For example, let's compute the difference between the BP in the 2 groups.
_____no_output_____
<code>
mean_ctrl=np.mean(data[data['condition']=='couch']['BP'])
mean_test=np.mean(data[data['condition']=='exercise']['BP'])
test_stat=mean_test-mean_ctrl
print('test statistic =',test_stat)test statistic = -4.362237456546268
</code>
What is this number telling us? Is the BP significantly different between the 2 conditions? It's impossible to say looking at only this number.
We have to ask ourselves, well, what did you expect?
This takes us to the next step.
_____no_output_____
**Step 2. Think about what the test statistic would be if in reality there were no difference between the 2 groups. It will be a distribution, not just a single number, because you would expect to see some variation in the test statistic whenever you do an experiment, due to sampling noise, and due to variation in the population.**
Here is where the wasteful part comes in. You go and repeat the measurement on 1000 different couch groups. Then, for each of these, you compute the same test statistic = the difference between the mean in that sample and your original couch group.
_____no_output_____
<code>
np.random.seed(1)
data_exp2=sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='control_0')
for i in range(1,1001):
data_exp2=pd.concat([data_exp2,sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='control_'+str(i))])
print(data_exp2) BP condition
0 128.121727 control_0
1 116.941218 control_0
2 117.359141 control_0
3 114.635157 control_0
4 124.327038 control_0
.. ... ...
5 120.846771 control_1000
6 123.368115 control_1000
7 118.363992 control_1000
8 118.473504 control_1000
9 122.624327 control_1000
[10010 rows x 2 columns]
#now, let's plot the distribution of the test statistic under the null hypothesis
#get mean of each control
exp2_means=data_exp2.groupby('condition').mean()
print(exp2_means.head())
null_test_stats=exp2_means-ctrl_mean
plt.hist(np.array(null_test_stats).flatten(),20,color='black')
plt.xlabel('Test statistic')
plt.axvline(x=test_stat,color='red')
BP
condition
control_0 119.514296
control_1 119.152058
control_10 120.201086
control_100 118.518008
control_1000 119.698545
null_test_stats_____no_output_____for i in range(null_test_stats.shape[0]):
if null_test_stats['BP'][i] > 4:
print(null_test_stats.index[i], null_test_stats['BP'][i])control_179 5.185586379422304
control_43 4.129129723892561
control_665 4.129526751544148
control_775 4.117971581535585
control_838 4.442499228276958
control_952 4.185169220649826
control_970 4.141431724614719
for i in range(null_test_stats.shape[0]):
if null_test_stats['BP'][i]<-4:
print(null_test_stats.index[i],null_test_stats['BP'][i])control_161 -4.148404587530905
control_202 -4.336675796802723
control_234 -4.137633016906577
control_854 -4.612546175530198
control_955 -4.0685297239182034
sns.catplot(data=data_exp2,x='condition',y='BP',order=['control_0',
'control_1','control_2','control_3',
'control_4','control_5',#'control_6',
#'control_7','control_8','control_9','control_10',
'control_179','control_161',],
color='black',#kind='box',
aspect=2,height=2)_____no_output_____x=5
plt.hist(np.array(null_test_stats[1:2]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:3]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:4]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:5]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:6]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats).flatten(),20,color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.show()
plt.hist(np.array(null_test_stats).flatten(),20,color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.axvline(x=test_stat,color='red')
plt.show()_____no_output_____
</code>
In black we have the distribution of test statistics we obtained from the 1000 experiments measuring couch participants. In other words, this is the distribution of the test statistic under the null hypothesis.
The red line shows the test statistic from our comparison of the exercise group vs the couch group._____no_output_____**Is our difference in blood pressure significant?**
If the null were true, in other words, if in reality there were no difference between couch and exercise, what is the probability of seeing such an extreme difference between their means (in other words, such an extreme test statistic)?
We can compute this from the plot above. We go to our null distribution, and count how many times we got a more extreme test statistic in our null experiment than the one we got for the couch vs exercise comparison.
_____no_output_____
<code>
count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(test_stat)))
print(count_more_extreme,'times we got a more extreme test statistic under the null')
print(count_more_extreme / 1000,'fraction of the time we got a more extreme test statistic under the null')3 times we got a more extreme test statistic under the null
0.003 fraction of the time we got a more extreme test statistic under the null
</code>
What we computed above is called a **p-value**. Now, this is a very often misunderstood term, so let's think about it deeply.
Deeply.
Deeply.
About what it is, what it is not.
**P-values**
--
To remember what a p-value is, you decide to make a promise to me and more importantly yourself, that from now on, any sentence in which you mention a p-value will start with "if the null were true, ...".
**A p-value IS:**
- if the null were true, the probability of observing something as extreme or more extreme than your test statistic.
- it's the quantification of your "whoa!", given your null hypothesis. More "whoa!" = smaller p-value.
**A p-value is NOT:**
- the probability that the null hypothesis is wrong. We don't know the probability of that. That's sort of up to the universe.
- the probability that the null hypothesis is wrong. This is so important, that it's worth putting it on the list twice.
Why is this distinction so important?
First, because we can be very good at estimating what happens under the null. It's much more challenging to think about other scenarios. For instance, if you needed to make a model for the BP being different between 2 conditions, how different do you expect them to be? Is the average couch group at 120 and the exercise at 110? Or the couch at 125 and exercise at 130? Do you make a model for each option and grow old estimating all possible models?
Second, it's also a matter of being conservative. It's common courtesy to assume the 2 conditions are the same. I expect you to come to me and convince me that it would be REALLY unlikely to observe what we have just seen given the null, to make it worthwhile my time. It would be weird to just assume the BP is different between the 2 conditions and have to prove that they are the same. We'd be swimming in false positives.
**Statistical significance**
Now that we have a p-value, you need to ask yourself where you set a cutoff for something being unlikely enough to be "significant", or worth your attention. Usually, that's 0.05, or 0.01, or 0.1. Yes, essentially it's a somewhat arbitrary small number.
I reiterate: this does not mean that the exercise group is different from the couch group for sure. If you were to do the experiment 1000 times with groups of participants assigned to "couch", in a small subset of your experiments you'll get a test statistic as or more extreme than the one we found in our experiment comparing exercise vs couch. But given that it's unlikely to get this result under the null hypothesis, you call it a significant difference, one that makes you think.
In summary:
- you look at your p-value - and you think about the probability of getting your result under the null, as you need to include these words in any sentence with p-values -
- compare it with your significance threshold
- if it is less than that threshold, you call the difference between the two groups (here, exercise vs couch) significant, as sketched in the quick check below.
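A minimal sketch of this decision rule (an added illustration), reusing the empirical p-value computed above; the 0.05 threshold is just the conventional choice:

<code>
#significance decision: compare the empirical p-value with the chosen threshold
ALPHA=0.05
empirical_pvalue=count_more_extreme/1000
print('p-value',empirical_pvalue)
print('significant at alpha = 0.05?',empirical_pvalue<ALPHA)
</code>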
**Technical note: one-tailed vs two-tailed tests**
*Depending on what you believe would be the possible alternative to your null hypothesis (conveniently called the alternative hypothesis), you may compute the p-value differently.*
*Specifically, in our example above, we computed the p-value by asking:*
- *if the null were true, what is the probability of obtaining a test statistic as extreme or more extreme than the one we've seen. That means we asked whether there were test statistics larger than our test statistic, or lower than minus our test statistic. This is called a two-tailed test, because we looked at both sides (both tails) of the distribution under the null.*
*If your alternative hypothesis were that the treatment specifically decreases baseline blood pressure, you'd compute the p-value differently, as you'd look under the null at only what fraction of the time you've seen a test statistic lower than the one we've seen. This is a one-tailed test.*
*Of course, this is not an invitation to use one-tailed tests to try to get more significant p-values, since by definition the p-values from a one-tailed test will be smaller than those for a two-tailed test. You should define your alternative hypothesis based on deep thought. I personally like to be as conservative as possible, and as such strongly prefer two-tailed tests.*
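*As a concrete illustration (an added sketch reusing the empirical null distribution from above; choosing the lower tail assumes the alternative hypothesis that exercise lowers blood pressure):*

<code>
#one-tailed empirical p-value: count only null test statistics as low as or lower than ours
one_tailed=int(np.sum(np.array(null_test_stats)<=test_stat))/1000
#two-tailed empirical p-value, for comparison: count both tails
two_tailed=int(np.sum(np.abs(null_test_stats)>=np.abs(test_stat)))/1000
print('one-tailed empirical p-value',one_tailed)
print('two-tailed empirical p-value',two_tailed)
</code>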
_____no_output_____**Hypothesis testing in a nutshell**
- come up with a **null hypothesis**.
* In our case: the gene does not change in expression.
- collect some data
* yay, we love data!
- define a **test statistic** to measure your quantity of interest.
* here we looked at the difference between means, but as we'll see below, there are more sophisticated ways to go about it.
- figure out the **distribution of the test statistic under the null** hypothesis
* here, we did this by repeating the measurement on the same type of cells 1000 times. Next, we'll learn that under certain conditions we can compute this distribution analytically, rather than having to do thousands of experiments.
- compute a **p-value**
* that tells you if the null were true, the probability of getting your test statistic or something even more outrageous
- decide if **significant**
* is p-value below a pre-defined threshold
If you deeply understand this, you're on a very good path to understand a LARGE fraction of all statistics you'll find in genomics._____no_output_____**PART II. EXAMPLE HYPOTHESIS TESTING USING THE T-TEST**
---
Now, let's do a t-test.
_____no_output_____
<code>
from scipy.stats import ttest_ind
t_stat,pvalue=ttest_ind(data[data['condition']=='exercise']['BP'],
data[data['condition']=='couch']['BP'],
)
print(t_stat,pvalue)
-1.6837025738594624 0.10950131551739636
#as before, compare to the distribution
null_test_stats=[]
for i in range(1000):
current_t,current_pvalue=ttest_ind(data_exp2[data_exp2['condition']=='control_'+str(i)]['BP'],
data_exp2[data_exp2['condition']=='control_0']['BP'],
)
null_test_stats.append(current_t)
plt.hist(np.array(null_test_stats).flatten(),color='black')
plt.xlabel('Test statistic (t)')
plt.axvline(x=t_stat,color='red')
count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(t_stat)))
print(count_more_extreme,'times we got a more extreme test statistic under the null')
print(count_more_extreme/1000,'fraction of the time we got a more extreme test statistic under the null = p-value')10 times we got a more extreme test statistic under the null
0.01 fraction of the time we got a more extreme test statistic under the null = p-value
</code>
Now, the exciting thing is that we didn't have to perform the second experiment to get an empirical distribution of the test statistic under the null. Rather, we were able to estimate it analytically. And indeed, the p-value we obtained from the t-test is similar to the one we got from our big experiment!
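To see the analytical estimate explicitly, here is a small added sketch overlaying the t distribution with 10 + 10 - 2 = 18 degrees of freedom on the empirical null t-statistics computed above:

<code>
from scipy.stats import t as t_distribution
plt.hist(np.array(null_test_stats).flatten(),30,density=True,color='black')
x_grid=np.linspace(-5,5,200)
plt.plot(x_grid,t_distribution.pdf(x_grid,df=18),color='orange',linewidth=3)
plt.xlabel('Test statistic (t)')
plt.ylabel('Density')
plt.show()
</code>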
_____no_output_____Ok, so by now, you should be pros at hypothesis tests.
Remember: decide on the null, compute test statistic, get the distribution of the test statistic under the null, compute a p-value, decide if significant._____no_output_____There are of course many other types of hypothesis tests that don't look at the difference between groups as we did here. For instace, in GWAS, you want to see if a mutation is enriched in a disease cohort compared to healthy samples, and you do a chi-square test.
Or maybe you have more than 2 conditions. Then you do ANOVA, rather than a t-test.
_____no_output_____**PROJECT: EXAMPLE III:**
===
RNA sequencing: which genes are characteristic for different types of immune cells in your body?
--_____no_output_____Motivation
--
Although all cells in our body have the same DNA, they can have wildly different functions. That is because they activate different genes, for example your brain cells turn on genes that lead to production of neurotransmitters while liver cells activate genes encoding enzymes.
Here, you will compare different types of immune cells (e.g. B-cells that make your antibodies, and T-cells which fight infections), and identify which genes are specifically active in each type of cell._____no_output_____
<code>
#install scanpy
!pip install scanpyRequirement already satisfied: scanpy in c:\users\freshskates\.conda\envs\ml\lib\site-packages (1.8.1)
Requirement already satisfied: numpy>=1.17.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.20.0)
Requirement already satisfied: h5py>=2.10.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.4.0)
Requirement already satisfied: scikit-learn>=0.22 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.0)
Requirement already satisfied: numba>=0.41.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.54.1)
Requirement already satisfied: natsort in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (7.1.1)
Requirement already satisfied: umap-learn>=0.3.10 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.5.1)
Requirement already satisfied: joblib in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.1.0)
Requirement already satisfied: pandas>=0.21 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.3.3)
Requirement already satisfied: anndata>=0.7.4 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.7.6)
Requirement already satisfied: patsy in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.5.2)
Requirement already satisfied: sinfo in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.3.4)
Requirement already satisfied: tqdm in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (4.62.3)
Requirement already satisfied: seaborn in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.11.2)
Requirement already satisfied: packaging in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (21.0)
Requirement already satisfied: matplotlib>=3.1.2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.4.3)
Requirement already satisfied: statsmodels>=0.10.0rc2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (0.13.0)
Requirement already satisfied: scipy>=1.4 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (1.7.1)
Requirement already satisfied: tables in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (3.6.1)
Requirement already satisfied: networkx>=2.3 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scanpy) (2.6.3)
Requirement already satisfied: xlrd<2.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from anndata>=0.7.4->scanpy) (1.2.0)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (2.8.2)
Requirement already satisfied: cycler>=0.10 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (0.10.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (1.3.2)
Requirement already satisfied: pillow>=6.2.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from matplotlib>=3.1.2->scanpy) (8.3.2)
Requirement already satisfied: six in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from cycler>=0.10->matplotlib>=3.1.2->scanpy) (1.16.0)
Requirement already satisfied: setuptools in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from numba>=0.41.0->scanpy) (58.0.4)
Requirement already satisfied: llvmlite<0.38,>=0.37.0rc1 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from numba>=0.41.0->scanpy) (0.37.0)
Requirement already satisfied: pytz>=2017.3 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from pandas>=0.21->scanpy) (2021.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from scikit-learn>=0.22->scanpy) (3.0.0)
Requirement already satisfied: pynndescent>=0.5 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from umap-learn>=0.3.10->scanpy) (0.5.5)
Requirement already satisfied: stdlib-list in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from sinfo->scanpy) (0.8.0)
Requirement already satisfied: numexpr>=2.6.2 in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from tables->scanpy) (2.7.3)
Requirement already satisfied: colorama in c:\users\freshskates\.conda\envs\ml\lib\site-packages (from tqdm->scanpy) (0.4.4)
</code>
RNA sequencing
--
RNA sequencing allows us to quantify the extent to which each gene is active in a sample. When a gene is active, its DNA is transcribed into mRNA and then translated into protein. With RNA sequencing, we are counting how frequent mRNAs for each gene occur in a sample. Genes that are more active will have higher counts, while genes that are not made into mRNA will have 0 counts.
Data
--
The code below will download the data for you, and organize it into a data frame, where:
- every row is a different gene
- every column is a different sample.
- We have 6 samples: 3 of T cells (called "CD4 T cells") and 3 of B cells ("B cells").
- every value is the number of reads from each gene in each sample.
- Note: the values have been normalized to be comparable between samples._____no_output_____
<code>
import scanpy as sc
def prep_data():
adata=sc.datasets.pbmc3k_processed()
counts=pd.DataFrame(np.expm1(adata.raw.X.toarray()),
index=adata.raw.obs_names,
columns=adata.raw.var_names)
#make 3 reps T-cells and 3 reps B-cells
cells_per_bulk=100
celltype='CD4 T cells'
cells=adata.obs_names[adata.obs['louvain']==celltype]
bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],
index=adata.raw.var_names)
for i in range(3):
cells_here=cells[(i*100):((i+1)*100)]
bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))
bulk_t=bulks
celltype='B cells'
cells=adata.obs_names[adata.obs['louvain']==celltype]
bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],
index=adata.raw.var_names)
for i in range(3):
cells_here=cells[(i*100):((i+1)*100)]
bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))
bulks=pd.concat([bulk_t,bulks],axis=1)
bulks=bulks.sort_values(by=bulks.columns[0],ascending=False)
return(bulks)
data=prep_data()
print(data.head())
print("min: ", data.min())
print("max: ", data.max())
CD4 T cells.rep1 CD4 T cells.rep2 CD4 T cells.rep3 B cells.rep1 \
index
MALAT1 8303.0 7334.0 7697.0 5246.0
B2M 4493.0 4675.0 4546.0 2861.0
TMSB4X 4198.0 4297.0 3932.0 2551.0
RPL10 3615.0 3565.0 3965.0 3163.0
RPL13 3501.0 3556.0 3679.0 2997.0
B cells.rep2 B cells.rep3
index
MALAT1 5336.0 4950.0
B2M 2844.0 2796.0
TMSB4X 2066.0 2276.0
RPL10 2830.0 2753.0
RPL13 2636.0 2506.0
min: CD4 T cells.rep1 0.0
CD4 T cells.rep2 0.0
CD4 T cells.rep3 0.0
B cells.rep1 0.0
B cells.rep2 0.0
B cells.rep3 0.0
dtype: float64
max: CD4 T cells.rep1 8303.0
CD4 T cells.rep2 7334.0
CD4 T cells.rep3 7697.0
B cells.rep1 5246.0
B cells.rep2 5336.0
B cells.rep3 4950.0
dtype: float64
</code>
**Let's explore the dataset**
**(1 pt)** What are the names of the samples?
**(2 pts)** What is the highest recorded value? What is the lowest?
_____no_output_____#write code to answer the questions here
1)
Sample names are
- CD4 T cells.rep1, CD4 T cells.rep2, CD4 T cells.rep3,
- B cells.rep1, B cells.rep2, B cells.rep3
2)
- The highest recorded value:
**max: CD4 T cells.rep1 8303.0**
- The lowest recorded value:
**min: CD4 T cells.rep1 0.0**_____no_output_____**Exploring the data**
One gene that is different between our 2 cell types is IL7R.
**(1 pt)** Plot the distribution of the IL7R gene in the 2 conditions. Which cell type (CD4 T cells or B cells) has the higher level of this gene?
**(1 pt)** How many samples do we have for each condition?
4) _____no_output_____# Answers
3)
- CD4 T has a higher level of this gene, it can be seen in the graph plotted
4)
- Three samples for each condition
For CD4 T Cells:
- CD4 T cells rep1
- CD4 T cells rep2
- CD4 T cells rep3
For B Cells:
- B cells rep1
- B cells rep2
- B cells rep3
_____no_output_____
<code>
#inspect the data
GENE='IL7R'
long_data=pd.DataFrame({GENE:data.loc[GENE,:],
'condition':[x.split('.')[0] for x in data.columns]})
print(long_data)
sns.catplot(data=long_data,x='condition', y=GENE) IL7R condition
CD4 T cells.rep1 175.0 CD4 T cells
CD4 T cells.rep2 128.0 CD4 T cells
CD4 T cells.rep3 146.0 CD4 T cells
B cells.rep1 13.0 B cells
B cells.rep2 10.0 B cells
B cells.rep3 20.0 B cells
</code>
**Two-sample t-test for one gene across 2 conditions**
We are now going to check whether the gene IL7R is differentially active in CD4 T cells vs B cells.
**(1 pt)** What is the null hypothesis?
**(1 pt)** Based on your plot of the gene in the two conditions, and the fact that there looks like there might be a difference, what do you expect the sign of the t-statistic to be (CD4 T cells vs B cells)?
_____no_output_____We are going to use the function ttest_ind to perform our t-test. You can read about it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html._____no_output_____
**(1 pt)** What is the t-statistic?
**(1 pt)** What is the p-value?
**(1 pt)** Describe in your own words what the p-value means.
**(1 pt)** Is the p-value significant at alpha = 0.05?
_____no_output_____# Answers
5)
- The graph is interesting, at first glance it seems like IL7R is not differentially active in CD4 T cells vs B cells
- they are similar
6)
- If the sign is positive, then reject the null hypothesis
---
7)
- t statistic: 9.66
<br>
test statistic tells us how much our sample mean deviates from the null hypothesis mean
<br>
T Statistic is the value calculated when you replace the population std with the sd(sample standard)
8)
- p-value: 0.00064<br>
9)
- the p valueis the porbability / likelihood of the null hypothesis, since it was rejected it should be low. the smaller the p value, the more "wrong" the null hypothesis it is
10)
- p value != alpha
- 0.00064 != 0.05
- null hypothesis rejected because of that, therefore making the P value significant
_____no_output_____
<code>
#pick 1 gene, do 1 t-test
GENE='IL7R'
COND1=['CD4 T cells.rep' + str(x+1) for x in range(3)]
COND2=['B cells.rep' + str(x+1) for x in range(3)]
#plot gene across samples
#t-test
from scipy.stats import ttest_ind
t_stat,pvalue=ttest_ind(data.loc[GENE,COND1],data.loc[GENE,COND2])
print('t statistic',t_stat.round(2))
print('p-value',pvalue.round(5))t statistic 9.66
p-value 0.00064
</code>
**Two-sample t-tests for each gene across 2 conditions**
We are now going to repeat our analysis from before for all genes in our dataset.
**(1 pt)** How many genes are present in our dataset?
_____no_output_____#Answers
11)
- 13714 genes present in the dataset, displayed with display(results)_____no_output_____
<code>
from IPython.display import display
#all genes t-tests
PSEUDOCOUNT=1
results=pd.DataFrame(index=data.index,
columns=['t','p','lfc'])
for gene in data.index:
t_stat,pvalue=ttest_ind(data.loc[gene,COND1],data.loc[gene,COND2])
lfc=np.log2((data.loc[gene,COND1].mean()+PSEUDOCOUNT)/(data.loc[gene,COND2].mean()+PSEUDOCOUNT))
results.loc[gene,'t']=t_stat
results.loc[gene,'p']=pvalue
results.loc[gene,'lfc']=lfc
_____no_output_____
</code>
**Ranking discoveries by either significance or fold change**
For each gene, we have obtained:
- a t-statistic
- a p-value for the difference between the 2 conditions
- a log2 fold change between CD4 T cells and B cells
We can inspect how fold changes relate to the significance of the differences.
**(1 pt)** What do you expect the relationship to be between significance/p-values and fold changes?
_____no_output_____#Answers
12) We expect them to be related: genes with larger absolute log fold changes tend to have smaller p-values (larger -log10(p-value) in the volcano plot), although lowly expressed or noisy genes can deviate from this trend_____no_output_____
<code>
#volcano plot
######
results['p']=results['p'].fillna(1)
PS2=1e-7
plt.scatter(results['lfc'],-np.log10(results['p']+PS2),s=5,alpha=0.5,color='black')
plt.xlabel('Log2 fold change (CD4 T cells/B cells)')
plt.ylabel('-log10(p-value)')
plt.show()
display(results)_____no_output_____
</code>
**Multiple testing correction**
Now, we will explore how the number of differentially active genes differs depending on how we correct for multiple tests.
**(1 pt)** How many genes pass the significance level of 0.05, without performing any correction for multiple testing?
_____no_output_____#Answers
13)
- there are 1607 genes that pass the significance level of 0.05
_____no_output_____
<code>
ALPHA=0.05
print((results['p']<=ALPHA).sum())1607
</code>
We will use a function that adjusts our p-values using different methods, called "multipletests". You can read about it here: https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.html
We will use the following settings:
- for Bonferroni correction, we set method='bonferroni'. This will multiply our p-values by the number of tests we did. If the resulting values are greater than 1 they will be clipped to 1.
- for Benjamini-Hochberg correction, we set method='fdr_bh'_____no_output_____**(2 pts)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Bonferroni method? What is the revised p-value threshold?
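To make the Bonferroni rule described above concrete, here is a minimal manual sketch (an added illustration; the multipletests call in the code below applies it, and Benjamini-Hochberg, for us):

<code>
#manual Bonferroni adjustment: multiply each unadjusted p-value by the number of tests, clip at 1
bonferroni_by_hand=np.minimum(results['p'].astype(float)*results.shape[0],1)
print('genes with manually adjusted p-value <= 0.05:',(bonferroni_by_hand<=0.05).sum())
</code>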
**(1 pt)** Would the gene we tested before, IL7R, pass this threshold?
_____no_output_____#Answers
14)
- 63 genes pass the significance level of 0.05 (after correcting for multiple testing using the Bonferroni method)
- revised p-value threshold: 0.05/13714 = 3.6 * 10^-6
- i.e., with k = 13714 tests the threshold on unadjusted p-values becomes alpha / k
15)
- No: its unadjusted p-value (0.00064) is far above the Bonferroni threshold of ~3.6 * 10^-6; equivalently, its Bonferroni-corrected p-value is clipped to 1, well above 0.05
_____no_output_____
<code>
#multiple testing correction
#bonferroni
from statsmodels.stats.multitest import multipletests
results['p.adj.bonferroni']=multipletests(results['p'], method='bonferroni')[1]
FDR=ALPHA
plt.hist(results['p'],100)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('Unadjusted p-values')
plt.ylabel('Number of genes')
plt.show()
plt.hist(results['p.adj.bonferroni'],100)
#plt.ylim(0,200)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('P-values (Bonferroni corrected)')
plt.ylabel('Number of genes')
plt.show()
plt.show()
print('DE Bonferroni',(results['p.adj.bonferroni']<=FDR).sum())_____no_output_____
</code>
**(1 pt)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Benjamini-Hochberg method?
_____no_output_____#Answers
16)
- 220_____no_output_____
<code>
results['p.adj.bh']=multipletests(results['p'], method='fdr_bh')[1]
FDR=0.05
plt.hist(results['p'],100)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('Unadjusted p-values')
plt.ylabel('Number of genes')
plt.show()
plt.hist(results['p.adj.bh'],100)
plt.ylim(0,2000)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('P-values (Benjamini-Hochberg corrected)')
plt.ylabel('Number of genes')
plt.show()
print('DE BH',(results['p.adj.bh']<=FDR).sum())_____no_output_____
</code>
**(1 pt)** Which multiple testing correction is the most stringent?
Finally, let's look at our results. Print the significant differential genes and look up a few on the internet._____no_output_____#Answers
17)
- Bonferroni: it multiplies every p-value by the number of tests (clipping at 1), which is why only 63 genes remain significant, compared with 220 after Benjamini-Hochberg correction_____no_output_____
<code>
results.loc[results['p.adj.bonferroni']<=FDR,:].sort_values(by='lfc')_____no_output_____
</code>
For example, CD7 is a gene found on T cells, whereas HLA genes are found on B cells._____no_output_____
| {
"repository": "freshskates/machine-learning",
"path": "Robert_Cacho_Proj2_stats_notebook.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": null,
"size": 541370,
"hexsha": "d023904228dba223d06ed99b44cc7ab6210b7b33",
"max_line_length": 43124,
"avg_line_length": 173.7945425361,
"alphanum_fraction": 0.8619317657
} |
# Notebook from gerhajdu/rrl_binaries_1
Path: 06498_oc.ipynb
# Example usage of the O-C tools
## This example shows how to construct and fit with MCMC the O-C diagram of the RR Lyrae star OGLE-BLG-RRLYR-02950_____no_output_____### We start with importing some libraries_____no_output_____
<code>
import numpy as np
import oc_tools as octs_____no_output_____
</code>
### We read in the data, set the period used to construct the O-C diagram (and to fold the light curve to construct the template curves, etc.), and the orders of the Fourier series we will fit to the light curve in the first and second iterations in the process_____no_output_____
<code>
who = "06498"
period = 0.589490
order1 = 10
order2 = 15
jd3, mag3 = np.loadtxt('data/{:s}.o3'.format(who), usecols=[0,1], unpack=True)
jd4, mag4 = np.loadtxt('data/{:s}.o4'.format(who), usecols=[0,1], unpack=True)_____no_output_____
</code>
### We correct for possible average magnitude and amplitude differences between the OGLE-III and IV photometries by moving the intensity average of the former to the intensity average measured for the latter
### The variables "jd" and "mag" contain the merged timings and magnitudes of the OGLE-III + IV photometry, which are used from here on to calculate the O-C values_____no_output_____
<code>
mag3_shift=octs.shift_int(jd3, mag3, jd4, mag4, order1, period, plot=True)
jd = np.hstack((jd3,jd4))
mag = np.hstack((mag3_shift, mag4))_____no_output_____
</code>
### Calling the split_lc_seasons() function provides us with an array containing masks splitting the combined light curve into short sections, depending on the number of points
### Optionally, the default splitting can be overridden by using the optional parameters "limits" and "into". For example, calling the function as:
octs.split_lc_seasons(jd, plot=True, mag = mag, limits = np.array((0, 8, np.inf)), into = np.array((0, 2)))
### will always split seasons with at least nine points into two separate segments_____no_output_____
<code>
splits = octs.split_lc_seasons(jd, plot=True, mag = mag)_____no_output_____
</code>
### The function calc_oc_points() fits the light curve of the variable to produce a template, and uses it to determine the O-C points of the individual segments_____no_output_____
<code>
oc_jd, oc_oc = octs.calc_oc_points(jd, mag, period, order1, splits, figure=True)_____no_output_____
</code>
### We make a guess at the binary parameters _____no_output_____
<code>
e = 0.37
P_orb = 2800.
T_peri = 6040
a_sini = 0.011
omega = -0.7
a= -8e-03
b= 3e-06
c= -3.5e-10
params = np.asarray((e, P_orb, T_peri, a_sini, omega, a, b, c))
lower_bounds = np.array((0., 100., -np.inf, 0.0, -np.inf, -np.inf, -np.inf, -np.inf))
upper_bounds = np.array((0.99, 6000., np.inf, 1.0, np.inf, np.inf, np.inf, np.inf))_____no_output_____
</code>
### We use the above guesses as the starting point (dashed grey line on the plot below) to find the O-C LTTE solution of the first iteration of our procedure. The yellow line on the plot shows the fit. The vertical blue bar shows the timing of the periastron passage
### Note that this function also provides the timings of the individual observations corrected for this initial O-C solution_____no_output_____
<code>
params2, jd2 = octs.fit_oc1(oc_jd, oc_oc, jd, params, lower_bounds, upper_bounds)_____no_output_____
</code>
### We use the initial solution as the starting point for the MCMC fit, therefore we prepare it first by transforming $e$ and $\omega$ to $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$
### For each parameter, we also have a lower and higher limit in its prior, but the values given for $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$ are ignored, as these are handled separately within the function checking the priors_____no_output_____
<code>
start = np.zeros_like(params2)
start[0:3] = params2[1:4]
start[3] = np.sqrt(params2[0]) * np.sin(params2[4])
start[4] = np.sqrt(params2[0]) * np.cos(params2[4])
start[5:] = params2[5:]
prior_ranges = np.asanyarray([[start[0]*0.9, start[0]*1.1],
[start[1]-start[0]/4., start[1]+start[0]/4.],
[0., 0.057754266],
[0., 0.],
[0., 0.],
[-1., 1.],
[-1e-4, 1e-4],
[-1e-8, 1e-8]])_____no_output_____
</code>
### We set a random seed to get reproducible results, then prepare the initial positions of the 200 walkers we are using during the fitting. During this, we check explicitly that these correspond to a position with a finite prior (i.e., they are not outside of the prior ranges defined above)_____no_output_____
<code>
np.random.seed(0)
walkers = 200
random_scales = np.array((1e+1, 1e+1, 1e-4, 1e-2, 1e-2, 1e-3, 2e-7, 5e-11))
pos = np.zeros((walkers, start.size))
for i in range(walkers):
pos[i,:] = start + random_scales * np.random.normal(size=8)
while np.isinf(octs.log_prior(pos[i,:], prior_ranges)):
pos[i,:] = start + random_scales * np.random.normal(size=8)_____no_output_____
</code>
### We recalculate the O-C points, but this time we use a higher-order Fourier series to fit the light curve with the modified timings, and we also calculate errors using bootstrapping_____no_output_____
<code>
oc_jd, oc_oc, oc_sd = octs.calc_oc_points(jd, mag, period, order2, splits,
bootstrap_times = 500, jd_mod = jd2,
figure=True)_____no_output_____
</code>
### We fit the O-C points measured above using MCMC by calling the run_mcmc() function
### We plot both the fit, as well as the triangle plot showing the two- (and one-)dimensional posterior distributions (these can be suppressed by setting the optional parameters "plot_oc" and "plot_triangle" to False)_____no_output_____
<code>
sampler, fit_mcmc, oc_sigmas, param_means, param_sigmas, fit_at_points, K =\
octs.run_mcmc(oc_jd, oc_oc, oc_sd,
prior_ranges, pos,
nsteps = 31000, discard = 1000,
thin = 300, processes=1)100%|██████████| 31000/31000 [03:08<00:00, 164.32it/s]
100%|███████████████████████████████████| 20000/20000 [00:02<00:00, 8267.13it/s]
</code>
## The estimated LTTE parameters are:_____no_output_____
<code>
print("Orbital period: {:d} +- {:d} [d]".format(int(param_means[0]),
int(param_sigmas[0])))
print("Projected semi-major axis: {:.3f} +- {:.3f} [AU]".format(param_means[2]*173.144633,
param_sigmas[2]*173.144633))
print("Eccentricity: {:.3f} +- {:.3f}".format(param_means[3],
param_sigmas[3]))
print("Argumen of periastron: {:+4d} +- {:d} [deg]".format(int(param_means[4]*180/np.pi),
int(param_sigmas[4]*180/np.pi)))
print("Periastron passage time: {:d} +- {:d} [HJD-2450000]".format(int(param_means[1]),
int(param_sigmas[1])))
print("Period-change rate: {:+.3f} +- {:.3f} [d/Myr] ".format(param_means[7]*365.2422*2e6*period,
param_sigmas[7]*365.2422*2e6*period))
print("RV semi-amplitude: {:5.2f} +- {:.2f} [km/s]".format(K[0], K[1]))
print("Mass function: {:.5f} +- {:.5f} [M_Sun]".format(K[2], K[3]))Orbital period: 2803 +- 3 [d]
Projected semi-major axis: 2.492 +- 0.010 [AU]
Eccentricity: 0.136 +- 0.008
Argument of periastron:   -76 +- 3 [deg]
Periastron passage time: 6538 +- 24 [HJD-2450000]
Period-change rate: -0.002 +- 0.005 [d/Myr]
RV semi-amplitude: 9.76 +- 0.04 [km/s]
Mass function: 0.26290 +- 0.00334 [M_Sun]
</code>
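For reference (an added note): the mass function printed above follows from Kepler's third law. With the projected semi-major axis $a_1 \sin i$ in AU and the orbital period $P_{\mathrm{orb}}$ in years,
$$f(M) = \frac{(a_1 \sin i)^3}{P_{\mathrm{orb}}^2} \approx \frac{2.492^3}{(2803/365.25)^2} \approx 0.263 \; M_\odot,$$
consistent with the value reported above.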
| {
"repository": "gerhajdu/rrl_binaries_1",
"path": "06498_oc.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 913969,
"hexsha": "d023cceb721a718610c9c02561150760ce5291e9",
"max_line_length": 278904,
"avg_line_length": 2135.441588785,
"alphanum_fraction": 0.9599866079
} |
# Notebook from hossainlab/dsnotes
Path: book/_build/jupyter_execute/pandas/23-Kaggle Submission.ipynb
<code>
import pandas as pd _____no_output_____train = pd.read_csv("http://bit.ly/kaggletrain")_____no_output_____train.head() _____no_output_____feature_cols = ['Pclass', 'Parch']
X = train.loc[:, feature_cols] _____no_output_____X.shape_____no_output_____y = train.Survived_____no_output_____y.shape_____no_output_____from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X, y)/home/jubayer/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
test = pd.read_csv("http://bit.ly/kaggletest")_____no_output_____test.head() _____no_output_____X_new = test.loc[:, feature_cols] _____no_output_____X_new.shape_____no_output_____new_pred_class = logreg.predict(X_new)_____no_output_____test.PassengerId_____no_output_____new_pred_class_____no_output_____pd.DataFrame({'PassengerID' : test.PassengerId, 'Survived': new_pred_class}).to_csv("sub.csv", index=False)_____no_output_____subdf = pd.read_csv("sub.csv")_____no_output_____subdf.head() _____no_output_____
</code>
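As an optional sanity check before uploading the submission (an added sketch, not part of the original notebook), cross-validation on the training set gives a rough idea of how well the two-feature model generalizes:

<code>
from sklearn.model_selection import cross_val_score
# 5-fold cross-validated accuracy of the same logistic regression on Pclass and Parch
scores = cross_val_score(logreg, X, y, cv=5)
print(scores.mean().round(3))
</code>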
<h3>About the Author</h3>
This repo was created by <a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> <br>
<a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> is a student of Microbiology at Jagannath University and the founder of <a href="https://github.com/hdro" target="_blank">Health Data Research Organization</a>. He is also a team member of a bioinformatics research group known as Bio-Bio-1.
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.m_____no_output_____
| {
"repository": "hossainlab/dsnotes",
"path": "book/_build/jupyter_execute/pandas/23-Kaggle Submission.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": null,
"size": 19757,
"hexsha": "d0276af4d8d9914f47cdd3e1f7ea0e8c3c423f3c",
"max_line_length": 415,
"avg_line_length": 29.0117474302,
"alphanum_fraction": 0.3739434125
} |
# Notebook from hashmat3525/Titanic
Path: Titanic.ipynb
# Import Necessary Libraries_____no_output_____
<code>
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import precision_score, recall_score
# display images
from IPython.display import Image
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
import seaborn as sns
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import style
# Algorithms
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.naive_bayes import GaussianNB_____no_output_____
</code>
# Titanic
Titanic was a British passenger liner that sank in the North Atlantic Ocean in the early morning hours of 15 April 1912, after it collided with an iceberg during its maiden voyage from Southampton to New York City. There were an estimated 2,224 passengers and crew aboard the ship, and more than 1,500 died, making it one of the deadliest commercial peacetime maritime disasters in modern history. The RMS Titanic was the largest ship afloat at the time it entered service and was the second of three Olympic-class ocean liners operated by the White Star Line. The Titanic was built by the Harland and Wolff shipyard in Belfast. Thomas Andrews, her architect, died in the disaster._____no_output_____
<code>
# Image of Titanic ship
Image(filename='C:/Users/Nemgeree Armanonah/Documents/GitHub/Titanic/images/ship.jpeg')_____no_output_____
</code>
# Getting the Data_____no_output_____
<code>
#reading train.csv
data = pd.read_csv('./titanic datasets/train.csv')
data_____no_output_____
</code>
## Exploring Data_____no_output_____
<code>
data.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
</code>
### Describe Statistics
The describe method is used to view some basic statistical details (count, mean, std, min, quartiles, max) of numeric columns like PassengerId, Survived, Age, etc._____no_output_____
<code>
data.describe()_____no_output_____
</code>
### View All Features _____no_output_____
<code>
data.columns.values_____no_output_____
</code>
### What features could contribute to a high survival rate ?_____no_output_____To Us it would make sense if everything except ‘PassengerId’, ‘Ticket’ and ‘Name’ would be correlated with a high survival rate._____no_output_____
<code>
# defining variables
survived = 'survived'
not_survived = 'not survived'
# data to be plotted
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10, 4))
women = data[data['Sex']=='female']
men = data[data['Sex']=='male']
# plot the data
ax = sns.distplot(women[women['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[0], kde =False)
ax = sns.distplot(women[women['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[0], kde =False)
ax.legend()
ax.set_title('Female')
ax = sns.distplot(men[men['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[1], kde = False)
ax = sns.distplot(men[men['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[1], kde = False)
ax.legend()
_ = ax.set_title('Male')_____no_output_____# count the null values
null_values = data.isnull().sum()
null_values_____no_output_____plt.plot(null_values)
plt.grid()
plt.show()_____no_output_____
</code>
## Data Processing_____no_output_____
<code>
def handle_non_numerical_data(df):
columns = df.columns.values
for column in columns:
text_digit_vals = {}
        def convert_to_int(val):
return text_digit_vals[val]
#print(column,df[column].dtype)
if df[column].dtype != np.int64 and df[column].dtype != np.float64:
column_contents = df[column].values.tolist()
#finding just the uniques
unique_elements = set(column_contents)
# great, found them.
x = 0
for unique in unique_elements:
if unique not in text_digit_vals:
text_digit_vals[unique] = x
x+=1
df[column] = list(map(convert_to_int,df[column]))
return df_____no_output_____y_target = data['Survived']
# Y_target.reshape(len(Y_target),1)
x_train = data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare','Embarked', 'Ticket']]
x_train = handle_non_numerical_data(x_train)
x_train.head()_____no_output_____fare = pd.DataFrame(x_train['Fare'])
# Normalizing
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
x_train['Fare'] = newfare
x_trainc:\users\nemgeree armanonah\appdata\local\programs\python\python36\lib\site-packages\ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""
null_values = x_train.isnull().sum()
null_values_____no_output_____plt.plot(null_values)
plt.show()_____no_output_____# Fill the NAN values with the median values in the datasets
x_train['Age'] = x_train['Age'].fillna(x_train['Age'].median())
print("Number of NULL values" , x_train['Age'].isnull().sum())
x_train.head()Number of NULL values 0
x_train['Sex'] = x_train['Sex'].replace('male', 0)
x_train['Sex'] = x_train['Sex'].replace('female', 1)
# print(type(x_train))
corr = x_train.corr()
corr.style.background_gradient()c:\users\nemgeree armanonah\appdata\local\programs\python\python36\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
c:\users\nemgeree armanonah\appdata\local\programs\python\python36\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
def plot_corr(df,size=10):
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
# plot_corr(x_train)
x_train.corr()
corr.style.background_gradient()_____no_output_____# Dividing the data into train and test data set
X_train, X_test, Y_train, Y_test = train_test_split(x_train, y_target, test_size = 0.4, random_state = 40)_____no_output_____clf = RandomForestClassifier()
clf.fit(X_train, Y_train)c:\users\nemgeree armanonah\appdata\local\programs\python\python36\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
print(clf.predict(X_test))
print("Accuracy: ",clf.score(X_test, Y_test))_____no_output_____## Testing the model.
test_data = pd.read_csv('./titanic datasets/test.csv')
test_data.head(3)
# test_data.isnull().sum()_____no_output_____### Preprocessing on the test data
test_data = test_data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Ticket', 'Embarked']]
test_data = handle_non_numerical_data(test_data)
fare = pd.DataFrame(test_data['Fare'])
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
test_data['Fare'] = newfare
test_data['Fare'] = test_data['Fare'].fillna(test_data['Fare'].median())
test_data['Age'] = test_data['Age'].fillna(test_data['Age'].median())
test_data['Sex'] = test_data['Sex'].replace('male', 0)
test_data['Sex'] = test_data['Sex'].replace('female', 1)
print(test_data.head())
_____no_output_____print(clf.predict(test_data))_____no_output_____from sklearn.model_selection import cross_val_predict
predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
print("Precision:", precision_score(Y_train, predictions))
print("Recall:",recall_score(Y_train, predictions))_____no_output_____from sklearn.metrics import precision_recall_curve
# getting the probabilities of our predictions
y_scores = clf.predict_proba(X_train)
y_scores = y_scores[:,1]
precision, recall, threshold = precision_recall_curve(Y_train, y_scores)
def plot_precision_and_recall(precision, recall, threshold):
plt.plot(threshold, precision[:-1], "r-", label="precision", linewidth=5)
plt.plot(threshold, recall[:-1], "b", label="recall", linewidth=5)
plt.xlabel("threshold", fontsize=19)
plt.legend(loc="upper right", fontsize=19)
plt.ylim([0, 1])
plt.figure(figsize=(14, 7))
plot_precision_and_recall(precision, recall, threshold)
plt.axis([0.3,0.8,0.8,1])
plt.show()_____no_output_____def plot_precision_vs_recall(precision, recall):
plt.plot(recall, precision, "g--", linewidth=2.5)
plt.ylabel("recall", fontsize=19)
plt.xlabel("precision", fontsize=19)
plt.axis([0, 1.5, 0, 1.5])
plt.figure(figsize=(14, 7))
plot_precision_vs_recall(precision, recall)
plt.show()_____no_output_____from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
confusion_matrix(Y_train, predictions)_____no_output_____
</code>
True positive: 293 (We predicted a positive result and it was positive)
True negative: 143 (We predicted a negative result and it was negative)
False positive: 34 (We predicted a positive result and it was negative)
False negative: 64 (We predicted a negative result and it was positive)_____no_output_____
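From these counts we can double-check precision and recall by hand (a small sketch using only the numbers above; which class is treated as "positive" depends on how the matrix is read, so treat the exact values as illustrative):

<code>
# Counts taken from the confusion matrix interpretation above
tp, tn, fp, fn = 293, 143, 34, 64

precision = tp / (tp + fp)  # fraction of predicted positives that were correct
recall = tp / (tp + fn)     # fraction of actual positives that were found
print(round(precision, 3), round(recall, 3))  # roughly 0.896 and 0.821
</code>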
| {
"repository": "hashmat3525/Titanic",
"path": "Titanic.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 3,
"size": 226774,
"hexsha": "d0286a664c8681937df2ace0b975d61bde05c4bf",
"max_line_length": 60121,
"avg_line_length": 113.2171742386,
"alphanum_fraction": 0.7393175585
} |
# Notebook from 1966hs/MujeresDigitales
Path: Repaso_algebra_LinealHeidy.ipynb
<a href="https://colab.research.google.com/github/1966hs/MujeresDigitales/blob/main/Repaso_algebra_LinealHeidy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output__________no_output_____$$\vec{v}+\vec{w}=(x_1,x_2)+(y_1,y_2)=(x_1 +y_1, x_2 +y_2)$$
$$\vec{v}-\vec{w}=(x_1,x_2)-(y_1,y_2)=(x_1 -y_1, x_2 -y_2)$$
_____no_output__________no_output_____
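A quick numerical illustration of these component-wise rules (a small sketch with two example vectors; numpy is assumed, as in the cells below):

<code>
import numpy as np

v = np.array([1, 2])  # (x1, x2)
w = np.array([4, 6])  # (y1, y2)

print(v + w)  # [5 8]   -> (x1+y1, x2+y2)
print(v - w)  # [-3 -4] -> (x1-y1, x2-y2)
</code>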
<code>
# Linear algebra focuses on matrices and vectors in Python
import numpy as np # import numpy
M = np.array([[1, 2, 3],[4, 5, 6],[7, 8, 9]]) # matrix
v = np.array([[1],[2],[3]]) # column vector (a single column)
v1=np.array([1,2,3]) # row vector
print(M)
print(v)
print(v1)[[1 2 3]
[4 5 6]
[7 8 9]]
[[1]
[2]
[3]]
[1 2 3]
print (M.shape)
print (v.shape) # tells us it has 3 elements
v_single_dim = np.array([1, 2, 3])
print (v_single_dim.shape)(3, 3)
(3, 1)
(3,)
print(v+v) # sum of two vectors
# the vector is added to itself element-wise
print(3*v) # multiplication by a scalar value
# a scalar is a single value
# it multiplies each of the elements by 3
[[2]
[4]
[6]]
[[3]
[6]
[9]]
# Another way to create matrices
v1 = np.array([1, 2, 3]) # I can create arrays and join them
v2 = np.array([4, 5, 6])
v3 = np.array([7, 8, 9])
M = np.vstack([v1, v2, v3]) # stacking them together forms a matrix
print(M)[[1 2 3]
[4 5 6]
[7 8 9]]
M_____no_output_____# Indexing matrices
print (M[:2, 1:3]) # I can slice the matrix to pull out elements
[[2 3]
[5 6]]
v_____no_output_____# Indexing vectors
print(v[1,0]) # element at row 1, column 0
print(v[1:,0]) # from index 1 onwards
# similar to lists, but I can extract rows and columns 2
[2 3]
lista=[[1,2],[3,4],[4,6]]_____no_output_____# DIFFERENCES FROM LISTS
# vectors add element-wise, whereas adding lists just concatenates another copy of the list
print(v+v)
print(lista+lista)
# ARRAYS LET ME DO MATRIX OPERATIONS[[2]
[4]
[6]]
[[1, 2], [3, 4], [4, 6], [1, 2], [3, 4], [4, 6]]
v*3_____no_output_____lista*3_____no_output_____
</code>
_____no_output__________no_output__________no_output__________no_output_____
<code>
v.T # TRANSPOSE _____no_output__________no_output_____print (M.dot(v))
print (v.T.dot(v))
v1=np.array([3,-3,1])
v2=np.array([4,9,2])
print (np.cross(v1, v2, axisa=0, axisb=0).T)
print (np.multiply(M, v))
print (np.multiply(v, v))
[[14]
[32]
[50]]
[[14]]
[-15 -2 39]
[[ 1 2 3]
[ 8 10 12]
[21 24 27]]
[[1]
[4]
[9]]
</code>
# Transpose
$$C_{m \times n}=A_{n \times m}^T$$
$$c_{ij}=a_{ji}$$
$$(A+B)^T = A^T + B^T$$
$$(AB)^T = B^T A^T$$
If $A=A^T$ then $A$ is **symmetric**
_____no_output_____
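A quick numerical check of the product rule $(AB)^T = B^T A^T$ and of symmetry (a small sketch with arbitrary example matrices):

<code>
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [2, 5]])

# (AB)^T should equal B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))  # True

# A + A^T is always symmetric: it equals its own transpose
S = A + A.T
print(np.array_equal(S, S.T))  # True
</code>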
<code>
M_____no_output_____print(M.T) # transposes
print(v.T)[[1 4 7]
[2 5 8]
[3 6 9]]
[[1 2 3]]
# the determinant gives us the value of the matrix _____no_output_____
</code>
_____no_output__________no_output_____
<code>
np.identity(3)_____no_output_____# make a matrix that, multiplied by the original, gives the identity matrix
# the identity has 1s on the diagonal and 0s everywhere else
_____no_output_____v1 = np.array([3, 0, 2])
v2 = np.array([2, 0, -2])
v3 = np.array([0, 1, 1])
M = np.vstack([v1, v2, v3]) # we build the matrix from those vectors by stacking them
print (np.linalg.inv(M)) # to invert the matrix
print (np.linalg.det(M)) # to see the determinant of my matrix [[ 0.2 0.2 0. ]
[-0.2 0.3 1. ]
[ 0.2 -0.3 -0. ]]
10.000000000000002
print (np.linalg.inv(M)) # invert [[ 0.2 0.2 0. ]
[-0.2 0.3 1. ]
[ 0.2 -0.3 -0. ]]
print (np.linalg.det(M)) # determinant10.000000000000002
</code>
**Defining variables**_____no_output_____
<code>
a= np.array([1,1,1])
b= np.array([2,2,2])_____no_output_____# Multiplication of the elements
# (done element by element)
a*b_____no_output_____# Element-wise multiplication method:
np.multiply(a,b)
_____no_output_____# Matrix multiplication method
# 2*1+2*1+2*1
np.matmul(a,b)_____no_output_____# Dot product method
# similar to matrix multiplication here
np.dot(a,b)_____no_output_____# Cross product method
# since the vectors are parallel, there is no meaningful perpendicular (the result is zero)
np.cross(a,b)_____no_output_____# Cross product method with orthogonal vectors
# the orthogonal result points along the z axis
np.cross(np.array([1,0,0]), np.array([0,1,0]))_____no_output_____
</code>
**Defining matrices**_____no_output_____
<code>
a = np.array([[1,2], [2,3]])
b = np.array([[3,4],[5,6]])_____no_output_____print(a)
print(b)[[1 2]
[2 3]]
[[3 4]
[5 6]]
# Element-by-element multiplication
a*b
# element-wise product _____no_output_____# Element-wise multiplication method
np.multiply(a,b)_____no_output_____# Matrix multiplication method
#1*3+2*5=13
#1*4+2*6=16
#2*3+3*5=21
#2*4+3*6=26
np.matmul(a,b)_____no_output_____# Dot product method _____no_output_____
</code>
**Matrix inversion**_____no_output_____
<code>
a= np.array([[1,1,1],[0,2,5],[2,5,-1]]) # this is my matrix
b= np.linalg.inv(a) # here I invert it
b_____no_output_____np.matmul(a,b) # multiplying them as matrices gives me an identity matrix
# when I invert a matrix and then multiply by the original, I get the identity _____no_output_____v1= np.array([3,0,2])
v2=np.array([2,0,-2])
v3=np.array([0,1,1])
M=np.vstack([v1,v2,v3])
M
_____no_output_____M_inv = np.linalg.inv(M) # WE INVERT IT
M_inv_____no_output_____
</code>
# Eigenvalues and eigenvectors
An eigenvalue $\lambda$ and an eigenvector $\vec{u}$ satisfy
$$Au = \lambda u$$
where $A$ is a square matrix.
Rearranging the equation above gives the system:
$$Au -\lambda u = (A- \lambda I)u =0$$
which has a non-trivial solution if and only if $\det(A-\lambda I)=0$
1. The eigenvalues are the roots of the characteristic polynomial given by this determinant
2. Substituting each eigenvalue back into $$Au = \lambda u$$ and solving gives the associated eigenvector
_____no_output_____
<code>
# I HAVE A TWO-DIMENSIONAL SPACE AND WHAT I DO
# IS DISTORT THAT SPACE_____no_output_____v1 = np.array([0, 1])
v2 = np.array([-2, -3])
M = np.vstack([v1, v2])
eigvals, eigvecs= np.linalg.eig(M)
print(eigvals) # characteristic values of the matrix
print(eigvecs)[-1. -2.]
[[ 0.70710678 -0.4472136 ]
[-0.70710678 0.89442719]]
# an eigenvalue is a value we can use to work out the solution of these operations _____no_output_____A=np.array([[-81,16],[-420,83]])
A_____no_output_____eigvals,eigvecs=np.linalg.eig(A)_____no_output_____eigvals_____no_output_____
</code>
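A quick way to verify that the pairs returned by np.linalg.eig satisfy the defining relation $Au = \lambda u$ (a small sketch reusing the matrix from the cell above):

<code>
import numpy as np

A = np.array([[-81,16],[-420,83]])
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector; check A u = lambda u for every pair
for lam, u in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ u, lam * u))  # expect True for both pairs
</code>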
| {
"repository": "1966hs/MujeresDigitales",
"path": "Repaso_algebra_LinealHeidy.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 809309,
"hexsha": "d029ba35490782c025f348d562f60290eb42b0c6",
"max_line_length": 128101,
"avg_line_length": 620.1601532567,
"alphanum_fraction": 0.9452916006
} |
# Notebook from etattershall/trend-lifecycles
Path: Modelling trend life cycles in scientific research.ipynb
# Modelling trend life cycles in scientific research
**Authors:** E. Tattershall, G. Nenadic, and R.D. Stevens
**Abstract:** Scientific topics vary in popularity over time. In this paper, we model the life-cycles of 200 topics by fitting the Logistic and Gompertz models to their frequency over time in published abstracts. Unlike other work, the topics we use are algorithmically extracted from large datasets of abstracts covering computer science, particle physics, cancer research, and mental health. We find that the Gompertz model produces lower median error, leading us to conclude that it is the more appropriate model. Since the Gompertz model is asymmetric, with a steep rise followed by a long tail, this implies that scientific topics follow a similar trajectory. We also explore the case of double-peaking curves and find that in some cases, topics will peak multiple times as interest resurges. Finally, when looking at the different scientific disciplines, we find that the lifespan of topics is longer in some disciplines (e.g. cancer research and mental health) than it is in others, which may indicate differences in research process and culture between these disciplines.
**Requirements**
- Data. Data ingress is excluded from this notebook, but we already have four large datasets of abstracts. The documents in these datasets have been cleaned (described in sections below) and separated by year. Anecdotally, this method works best when there are >100,000 documents in the dataset (and more is even better).
- The other utility files in this directory, including burst_detection.py, my_stopwords.py, etc...
**In this notebook**
- Vectorisation
- Burst detection
- Clustering
- Model fitting
- Comparing the error of the two models
- Calculating trend duration
- Double peaked curves
- Trends and fitted models in full_____no_output_____
<code>
import os
import csv
import pandas as pd
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import numpy as np
import scipy
from scipy.spatial.distance import squareform
from scipy.cluster import hierarchy
import pickle
import burst_detection
import my_stopwords
import cleaning
import tools
import logletlab
import scipy.optimize as opt
from sklearn.metrics import mean_squared_error
_____no_output_____stop = my_stopwords.get_stopwords()
burstiness_threshold = 0.004
cluster_distance_threshold = 7
# Burst detection internal parameters
# These are the same as in our earlier paper [Tattershall 2020]
parameters = {
"min_yearly_df": 5,
"significance_threshold": 0.0015,
"years_above_significance": 3,
"long_ma_length": 8,
"short_ma_length": 4,
"signal_line_ma": 3,
"significance_ma_length": 3
}
# Number of bursty terms to extract for each dataset. This will later be filtered down to 50 for each dataset after clustering.
max_bursts = 300
dataset_names = ['pubmed_mh', 'arxiv_hep', 'pubmed_cancer', 'dblp_cs']
dataset_titles = ['Computer science (dblp)', 'Particle physics (arXiv)', 'Mental health (PubMed)', 'Cancer (PubMed)']
datasets = {}
def reverse_cumsum(ls):
reverse = np.zeros_like(ls)
for i in range(len(ls)):
if i == 0:
reverse[i] = ls[i]
else:
reverse[i] = ls[i]-ls[i-1]
if reverse[0]>reverse[1]:
reverse[0]=reverse[1]
return reverse
def detransform_fit(ypc, F, dataset_name):
'''
The Gompertz and Logistic curves actually model *cumulative* frequency over time, not raw frequency.
However, raw frequency is more intuitive for graphs, so we use this function to change a cumulative
time series into a non-cumulative one. Additionally, the models were originally fitted to scaled curves
(such that the minumum frequency was zero and the maximum was one). This was done to make it possible to
directly compare the error between different time series without a much more frequent term dwarfing the calculation.
We now transform back.
'''
s = document_count_per_year[dataset_name]
yf = reverse_cumsum(F*(max(ypc)-min(ypc)) + min(ypc))
return yf
_____no_output_____# Location where the cleaned data is stored
data_root = 'cleaned_data/'
# Location where we will store the results of this notebook
root = 'results/'
os.mkdir(root+'clusters')
os.mkdir(root+'images')
os.mkdir(root+'fitted_curves')
os.mkdir(root+'vectors')
for dataset_name in dataset_names:
os.mkdir(root+'vectors/'+dataset_name)
os.mkdir(root+'fitted_curves/'+dataset_name)_____no_output_____
</code>
## The data
We have four datasets:
- **Computer Science (dblp_cs):** This dataset contains 2.6 million abstracts downloaded from Semantic Scholar. We select all abstracts with the dblp tag.
- **Particle Physics (arxiv_hep):** This dataset of 0.2 million abstracts was downloaded from arXiv's public API. We extracted particle physics-related documents by selecting everything under the categories hep-ex, hep-lat, hep-ph and hep-th.
- **Mental Health (pubmed_mh):** 0.7 million abstracts downloaded from PubMed. This dataset was created by filtering on the MeSH keyword "Mental Health" and all its subterms.
- **Cancer (pubmed_cancer):** 1.9 million abstracts downloaded from PubMed. This dataset was created by filtering on the MeSH keyword "Neoplasms" and all its subterms.
The data in each dataset has already been cleaned. We removed punctuation, set all characters to lowercase and lemmatised each word using WordNetLemmatizer. The cleaned data is stored in pickled pandas dataframes in files named 1988.p, 1989.p, 1990.p. Each dataframe has a column "cleaned" which contains the cleaned and lemmatized text for each document in that dataset in the given year.
### How many documents are in each dataset in each year?_____no_output_____
<code>
document_count_per_year = {}
for dataset_name in dataset_names:
# For each dataset, we want a list of document counts for each year
document_count_per_year[dataset_name] = []
# The files in the directory are named 1988.p, 1989.p, 1990.p....
files = os.listdir(data_root+dataset_name)
min_year = np.min([int(file[0:4]) for file in files])
max_year = np.max([int(file[0:4]) for file in files])
for year in range(min_year, max_year+1):
df = pickle.load(open(data_root+dataset_name+'/'+str(year)+'.p', "rb"))
document_count_per_year[dataset_name].append(len(df))
pickle.dump(document_count_per_year, open(root + 'document_count_per_year.p', "wb"))_____no_output_____plt.figure(figsize=(6,3.7))
ax1=plt.subplot(111)
plt.subplots_adjust(left=0.2, right=0.9)
ax1.set_title('Documents per year in each dataset', fontsize=11)
ax1.plot(np.arange(1988, 2018), document_count_per_year['dblp_cs'], 'k', label='dblp')
ax1.plot(np.arange(1994, 2018), document_count_per_year['arxiv_hep'], 'k', linestyle= '-.', label='arXiv')
ax1.plot(np.arange(1975, 2018), document_count_per_year['pubmed_mh'], 'k', linestyle= '--', label='PubMed (Mental Health)')
ax1.plot(np.arange(1975, 2018), document_count_per_year['pubmed_cancer'], 'k', linestyle= ':', label='PubMed (Cancer)')
ax1.grid()
ax1.set_xlim([1975, 2018])
ax1.set_ylabel('Documents', fontsize=10)
ax1.set_xlabel('Year', fontsize=10)
ax1.set_ylim([0,200000])
ax1.legend(fontsize=10)
plt.savefig(root+'images/documents_per_year.eps', format='eps', dpi=1200)_____no_output_____
</code>
### Create a vocabulary for each dataset
- For each dataset, we find all **1-5 word terms** (after stopwords are removed). This allows us to use relatively complex phrases.
- Since the set of all 1-5 word terms is very large and contains much noise, we filter out terms that fail to meet a **minimum threshold of "significance"**. For significance we require that they occur at least six times in at least one year. We find that this also gets rid of spelling errors and cuts down the size of the data.
_____no_output_____
<code>
for dataset_name in dataset_names:
vocabulary = set()
files = os.listdir(data_root+dataset_name)
min_year = np.min([int(file[0:4]) for file in files])
max_year = np.max([int(file[0:4]) for file in files])
for year in range(min_year, max_year+1):
df = pickle.load(open(data_root+dataset_name+"/"+str(year)+".p", "rb"))
# Create an initial vocabulary based on the list of text files
vectorizer = CountVectorizer(strip_accents='ascii',
ngram_range=(1,5),
stop_words=stop,
min_df=6
)
# Vectorise the data in order to get the vocabulary
vector = vectorizer.fit_transform(df['cleaned'])
# Add the harvested vocabulary to the set. This removes duplicates of terms that occur in multiple years
vocabulary = vocabulary.union(set(vectorizer.vocabulary_))
# To conserve memory, delete the vector here
del vector
print('Overall vocabulary created for ', dataset_name)
# We now vectorise the dataset again based on the unifying vocabulary
vocabulary = list(vocabulary)
vectors = []
vectorizer = CountVectorizer(strip_accents='ascii',
ngram_range=(1,5),
stop_words=stop,
vocabulary=vocabulary)
for year in range(min_year, max_year+1):
df = pickle.load(open(data_root+dataset_name+"/"+str(year)+".p", "rb"))
vector = vectorizer.fit_transform(df['cleaned'])
# Set all elements of the vector that are greater than 1 to 1. This is because we only care about
# the overall document frequency of each term. If a word is used multiple times in a single document
# it only contributes 1 to the document frequency.
vector[vector>1] = 1
# Sum the vector along its columns in order to get the total document frequency of each term in a year
summed = np.squeeze(np.asarray(np.sum(vector, axis=0)))
vectors.append(summed)
# Turn the vector into a pandas dataframe
df = pd.DataFrame(vectors, columns=vocabulary)
# THE PART BELOW IS OPTIONAL
# We found that the process works better if very similar terms are removed from the vocabulary
# Therefore, for each 2-5 ngram, we identify all possible subterms, then attempt to calculate whether
# the subterms are legitimate terms in their own right (i.e. they appear in documents without their
# superterm parent). For example, the term "long short-term memory" is made up of the subterms
# ["long short", "short term", "term memory", "long short term", "short term memory"]
# However, when we calculate the document frequency of each subterm divided by the document frequency of
# "long short term memory", we find:
#
# long short 1.4
# short term 6.1
# term memory 2.2
# long short term 1.1
# short term memory 1.4
#
# Since the term "long short term" occurs only very rarely outside the phrase "long short term memory", we
# omit this term by setting an arbitrary threshold of 1.1. This preserves most of the subterms while removing the rarest.
removed = []
# for each term in the vocabulary
for i, term in enumerate(list(df.columns)):
# If the term is a 2-5 ngram (i.e. not a single word)
if ' ' in term:
# Find the overall term document frequency over the entire dataset
term_total_document_frequency = df[term].sum()
# Find all possible subterms of the term.
subterms = tools.all_subterms(term)
for subterm in subterms:
try:
# If the subterm is in the vocabulary, check whether it often occurs on its own
# without the superterm being present
subterm_total_document_frequency = df[subterm].sum()
if subterm_total_document_frequency < term_total_document_frequency*1.1:
removed.append([subterm, term])
except:
pass
# Remove the removed terms from the dataframe
df = df.drop(list(set([r[0] for r in removed])), axis=1)
# END OPTIONAL PART
# Store the stacked vectors for later use
pickle.dump(df, open(root+'vectors/'+dataset_name+"/stacked_vector.p", "wb"))
pickle.dump(list(df.columns), open(root+'vectors/'+dataset_name+"/vocabulary.p", "wb"))Overall vocabulary created for arxiv_hep
</code>
### Detect bursty terms
Now that we have vectors representing the document frequency of each term over time, we can use our MACD-based burst detection, as described in our earlier paper [Tattershall 2020]. _____no_output_____
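The implementation lives in burst_detection.py and is not reproduced here. As a rough sketch of the MACD idea only (difference of a short and a long moving average of a term's normalised frequency, followed by a signal-line smoothing; the window lengths mirror the parameters dict above, but the real module also applies the significance moving average and other details):

<code>
import pandas as pd

def macd_burstiness_sketch(series, short=4, long=8, signal=3):
    # series: normalised yearly document frequency for a single term
    macd = series.rolling(short, min_periods=1).mean() - series.rolling(long, min_periods=1).mean()
    signal_line = macd.rolling(signal, min_periods=1).mean()
    histogram = macd - signal_line  # positive values indicate accelerating interest
    return histogram.max()          # one simple burstiness score

# hypothetical toy series
freq = pd.Series([0.001, 0.001, 0.002, 0.004, 0.009, 0.015, 0.014, 0.013])
print(macd_burstiness_sketch(freq))
</code>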
<code>
bursts = dict()
for dataset_name in dataset_names:
files = os.listdir(data_root+dataset_name)
min_year = np.min([int(file[0:4]) for file in files])
max_year = np.max([int(file[0:4]) for file in files])
# Create a dataset object for the burst detection algorithm
bd_dataset = burst_detection.Dataset(
name = dataset_name,
years = (min_year, max_year),
# We divide the term-document frequency for each year by the number of documents in that year
stacked_vectors = pickle.load(open(root+dataset_name+"/stacked_vector.p", "rb")).divide(document_count_per_year[dataset_name],axis=0)
)
# We apply the significance threshold from the burst detection methodology. This cuts the size of the dataset by
# removing terms that occur only in one year
bd_dataset.get_sig_stacked_vectors(parameters["significance_threshold"], parameters["years_above_significance"])
bd_dataset.get_burstiness(parameters["short_ma_length"], parameters["long_ma_length"], parameters["significance_ma_length"], parameters["signal_line_ma"])
datasets[dataset_name] = bd_dataset
bursts[dataset_name] = tools.get_top_n_bursts(datasets[dataset_name].burstiness, max_bursts)
pickle.dump(bursts, open(root+'vectors/'+'bursts.p', "wb"))_____no_output_____
</code>
### Calculate burst co-occurrence
We now have 300 bursts per dataset. Some of these describe very similar concepts, such as "internet of things" and "iot". The purpose of this section is to merge similar terms into clusters to prevent redundancy within the dataset. We calculate the relatedness of terms using term co-occurrence within the same document (terms that appear together are grouped together)._____no_output_____
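The trick in the cell below is that, for a binary document-term matrix, multiplying its transpose by itself gives pairwise co-document counts. A toy dense illustration (the notebook itself works on the sparse yearly vectors):

<code>
import numpy as np

# Toy binary document-term matrix: 4 documents x 3 terms
# columns (hypothetical): "iot", "internet of things", "xml"
X = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]])

C = X.T @ X              # C[i, j] = number of documents containing both term i and term j
np.fill_diagonal(C, 0)   # drop self co-occurrence, as setdiag(0) does below
print(C)
</code>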
<code>
for dataset_name in dataset_names:
vectors = []
vectorizer = CountVectorizer(strip_accents='ascii',
ngram_range=(1,5),
stop_words=stop,
vocabulary=bursts[dataset_name])
for year in range(min_year, max_year+1):
df = pickle.load(open(data_root+dataset_name+"/"+str(year)+".p", "rb"))
vector = vectorizer.fit_transform(df['cleaned'])
# Set all elements of the vector that are greater than 1 to 1. This is because we only care about
# the overall document frequency of each term. If a word is used multiple times in a single document
# it only contributes 1 to the document frequency.
vector[vector>1] = 1
vectors.append(vector)
# Calculate the cooccurrence matrix
v = vectors[0]
c = v.T*v
c.setdiag(0)
c = c.todense()
cooccurrence = c
for v in vectors[1:]:
c = v.T*v
c.setdiag(0)
c = c.toarray()
cooccurrence += c
pickle.dump(cooccurrence, open(root+'vectors/'+dataset_name+"/cooccurrence_matrix.p", "wb"))C:\Users\emmat\Anaconda3\lib\site-packages\scipy\sparse\_index.py:126: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient.
self._set_arrayXarray(i, j, x)
C:\Users\emmat\Anaconda3\lib\site-packages\scipy\sparse\_index.py:126: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient.
self._set_arrayXarray(i, j, x)
</code>
### Use burst co-occurrence to cluster terms
We use a hierarchical clustering method to group terms together. This is highly customisable due to threshold setting, allowing us to group more or less conservatively if required._____no_output_____
<code>
# Reload bursts if required by uncommenting this line
#bursts = pickle.load(open(root+'bursts.p', "rb"))
dataset_clusters = dict()
for dataset_name in dataset_names:
#cooccurrence = pickle.load(open('Data/stacked_vectors/'+dataset_name+"/cooccurrence_matrix.p", "rb"))
# Translate co-occurence into a distance
dists = np.log(cooccurrence+1).max()- np.log(cooccurrence+1)
# Remove the diagonal (squareform requires diagonals be zero)
dists -= np.diag(np.diagonal(dists))
# Put the distance matrix into the format required by hierachy.linkage
flat_dists = squareform(dists)
# Get the linkage matrix
linkage_matrix = hierarchy.linkage(flat_dists, "ward")
assignments = hierarchy.fcluster(linkage_matrix, t=cluster_distance_threshold, criterion='distance')
clusters = defaultdict(list)
for term, assign, co in zip(bursts[dataset_name], assignments, cooccurrence):
clusters[assign].append(term)
dataset_clusters[dataset_name] = list(clusters.values())_____no_output_____dataset_clusters['arxiv_hep'] _____no_output_____
</code>
### Manual choice of clusters
We now sort the clusters in order of burstiness (using the burstiness of the most bursty term in the cluster) and manually exclude clusters that include publishing artefacts such as "elsevier science bv right reserved". From the remainder, we select the top fifty. We do this for all four datasets, giving 200 clusters. The selected clusters are stored in the file "200clusters.csv"._____no_output_____### For each cluster, create a time series of mentions in abstracts over time
We now need to search for the clusters to pull out the frequency of appearance in abstracts over time. For the cluster ["Internet of things", "IoT"], all abstracts that mention **either** term are included (i.e. an abstract that uses "Internet of things" without the abbreviation "IoT" still counts towards the total for that year). We take document frequency, not term frequency, so the number of times the terms are mentioned in each document do not matter, so long as they are mentioned once._____no_output_____
<code>
raw_clusters = pd.read_csv('200clusters.csv')
cluster_dict = defaultdict(list)
for dataset_name in dataset_names:
for raw_cluster in raw_clusters[dataset_name]:
cluster_dict[dataset_name].append(raw_cluster.split(','))
for dataset_name in dataset_names:
# List all the cluster terms. This will be more than the total number of clusters.
all_cluster_terms = sum(cluster_dict[dataset_name], [])
# Get the cluster titles. This is the list of terms in each cluster
all_cluster_titles = [','.join(cluster) for cluster in cluster_dict[dataset_name]]
# Get a list of files from the directory
files = os.listdir(data_root + dataset_name)
# This is where we will store the data. The columns correspond to clusters, the rows to years
prevalence_array = np.zeros([len(files),len(cluster_dict[dataset_name])])
# Open each year file in turn
for i, file in enumerate(files):
print(file)
year_data = pickle.load(open(data_root + dataset_name + '/' + file, 'rb'))
# Vectorise the data for that year
vectorizer = CountVectorizer(strip_accents='ascii',
ngram_range=(1,5),
stop_words=stop,
vocabulary=all_cluster_terms
)
vector = vectorizer.fit_transform(year_data['cleaned'])
# Get the index of each cluster term. This will allows us to map the full vocabulary
# e.g. (60 items) back onto the original clusters (e.g. 50 items)
for j, cluster in enumerate(cluster_dict[dataset_name]):
indices = []
for term in cluster:
indices.append(all_cluster_terms.index(term))
# If there are multiple terms in a cluster, sum the cluster columns together
summed_column = np.squeeze(np.asarray(vector[:,indices].sum(axis=1).flatten()))
# Set any element greater than one to one--we're only counting documents here, not
# total occurrences
summed_column[summed_column!=0] = 1
# This is the total number of occurrences of the cluster per year
prevalence_array[i, j] = np.sum(summed_column)
# Save the data
df = pd.DataFrame(data=prevalence_array, index=[f[0:4] for f in files], columns=all_cluster_titles)
pickle.dump(df, open(root+'clusters/'+dataset_name+'.p', 'wb'))1994.p
1995.p
1996.p
1997.p
1998.p
1999.p
2000.p
2001.p
2002.p
2003.p
2004.p
2005.p
2006.p
2007.p
2008.p
2009.p
2010.p
2011.p
2012.p
2013.p
2014.p
2015.p
2016.p
2017.p
</code>
### Curve fitting
The below is a pythonic version of the Loglet Lab 4 code found at https://github.com/pheguest/logletlab4. Loglet Lab also has a web interface at https://logletlab.com/ which allows you to create amazing graphs. However, the issue with the web interface is that it is not designed for processing hundreds of time series, and in order to do this, each time series must be laboriously copy-pasted into the input box, the parameters set, and then the results saved individually. With 200 time series and multiple parameter sets, this process is quite slow! Therefore, we have adapted the code from the github repository, but the original should be seen at https://github.com/pheguest/logletlab4/blob/master/javascript/src/psmlogfunc3.js.
_____no_output_____
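For reference, the two growth models being fitted have the following standard three-parameter cumulative forms (a sketch only; logletlab's internal parameterisation, e.g. in terms of characteristic duration and midpoint, may differ):

<code>
import numpy as np

def logistic_cumulative(t, k, r, tm):
    # Symmetric S-curve: saturation level k, growth rate r, midpoint tm
    return k / (1 + np.exp(-r * (t - tm)))

def gompertz_cumulative(t, k, r, tm):
    # Asymmetric S-curve: steep early rise followed by a long tail towards k
    return k * np.exp(-np.exp(-r * (t - tm)))

t = np.arange(1988, 2018)
print(logistic_cumulative(t, 1.0, 0.4, 2002.0)[:3])
print(gompertz_cumulative(t, 1.0, 0.4, 2002.0)[:3])
</code>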
<code>
curve_header_1 = ['', 'd', 'k', 'a', 'b', 'RMS']
curve_header_2 = ['', 'd', 'k1', 'a1', 'b1', 'k2', 'a2', 'b2', 'RMS']
dataset_names = ['arxiv_hep', 'pubmed_mh', 'pubmed_cancer', 'dblp_cs']
for dataset_name in dataset_names:
print('-'*50)
print(dataset_name.upper())
for curve_type in ['logistic', 'gompertz']:
for number_of_peaks in [1, 2]:
with open('our_loglet_lab/'+dataset_name+'/'+curve_type+str(number_of_peaks)+'.csv', 'w', newline='') as f:
writer = csv.writer(f)
if number_of_peaks == 1:
writer.writerow(curve_header_1)
elif number_of_peaks == 2:
writer.writerow(curve_header_2)
df = pickle.load(open(root+'clusters/'+dataset_name+'.p', 'rb'))
document_count_per_year = pickle.load(open(root+"/document_count_per_year.p", 'rb'))[dataset_name]
df = df.divide(document_count_per_year, axis=0)
for term in df.keys():
y = tools.normalise_time_series(df[term].cumsum())
x = np.array([int(i) for i in y.index])
y = y.values
if number_of_peaks == 1:
logobj = logletlab.LogObj(x, y, 1)
constraints = logletlab.estimate_constraints(x, y, 1)
if curve_type == 'logistic':
logobj = logletlab.loglet_MC_anneal_regression(logobj, constraints=constraints, number_of_loglets=1,
curve_type='logistic', anneal_iterations=20,
mc_iterations=1000, anneal_sample_size=100)
else:
logobj = logletlab.loglet_MC_anneal_regression(logobj, constraints=constraints, number_of_loglets=1,
curve_type='gompertz', anneal_iterations=20,
mc_iterations=1000, anneal_sample_size=100)
line = [term, logobj.parameters['d'], logobj.parameters['k'][0], logobj.parameters['a'][0], logobj.parameters['b'][0], logobj.energy_best]
print(curve_type, number_of_peaks, term, 'RMSE='+str(np.round(logobj.energy_best,3)))
with open(root+'fitted_curves/'+dataset_name+'/'+curve_type+'_single.csv', 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(line)
elif number_of_peaks == 2:
logobj = logletlab.LogObj(x, y, 2)
constraints = logletlab.estimate_constraints(x, y, 2)
if curve_type == 'logistic':
logobj = logletlab.loglet_MC_anneal_regression(logobj, constraints=constraints, number_of_loglets=2,
curve_type='logistic', anneal_iterations=30,
mc_iterations=1000, anneal_sample_size=100)
else:
logobj = logletlab.loglet_MC_anneal_regression(logobj, constraints=constraints, number_of_loglets=2,
curve_type='gompertz', anneal_iterations=30,
mc_iterations=1000, anneal_sample_size=100)
line = [term, logobj.parameters['d'],
logobj.parameters['k'][0],
logobj.parameters['a'][0],
logobj.parameters['b'][0],
logobj.parameters['k'][1],
logobj.parameters['a'][1],
logobj.parameters['b'][1],
logobj.energy_best]
print(curve_type, number_of_peaks, term, 'RMSE='+str(np.round(logobj.energy_best,3)))
with open(root+'fitted_curves/'+dataset_name+'/'+curve_type+'_double.csv', 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(line) --------------------------------------------------
ARXIV_HEP
logistic_single 1 125 gev 0.029907304762336263
logistic_single 1 pentaquark,pentaquarks 0.05043852824061915
logistic_single 1 wmap,wilkinson microwave anisotropy probe 0.0361380293123339
logistic_single 1 lhc run 0.020735398919035756
logistic_single 1 pamela 0.03204821466738317
logistic_single 1 lattice gauge 0.05233359007692712
logistic_single 1 tensor scalar ratio 0.036222971357601726
logistic_single 1 brane,branes 0.03141774013518978
logistic_single 1 atlas 0.01772382630535608
logistic_single 1 horava lifshitz,hovrava lifshitz 0.0410067585251185
logistic_single 1 lhc 0.006250825571034508
logistic_single 1 noncommutative,noncommutativity,non commutative,non commutativity 0.0327322808924473
logistic_single 1 black hole 0.020920939530327295
logistic_single 1 anomalous magnetic moment 0.04849255402257149
logistic_single 1 unparticle,unparticles 0.03351932242115829
logistic_single 1 superluminal 0.061748625288615105
logistic_single 1 m2 brane,m2 branes 0.039234821323279774
logistic_single 1 126 gev 0.018070446841532847
logistic_single 1 pp wave 0.047137089087624366
logistic_single 1 lambert 0.05871943152044709
logistic_single 1 tevatron 0.029469013159021687
logistic_single 1 higgs 0.034682515257204394
logistic_single 1 brane world 0.04485319867543418
logistic_single 1 extra dimension 0.03224289656019876
logistic_single 1 entropic 0.0366547700230139
logistic_single 1 kamland 0.05184286069114554
logistic_single 1 solar neutrino 0.02974273300483687
logistic_single 1 neutrino oscillation 0.04248474035767032
logistic_single 1 chern simon 0.027993037580545155
logistic_single 1 forward backward asymmetry 0.03979258482645767
logistic_single 1 dark energy 0.02603752198898685
logistic_single 1 bulk 0.029266519583018107
logistic_single 1 holographic 0.011123961217499157
logistic_single 1 international linear collider,ilc 0.04251997867004988
logistic_single 1 abjm 0.030827697912680977
logistic_single 1 babar 0.028343579032827054
logistic_single 1 daya bay 0.029215246675232537
logistic_single 1 sqrts7 tev 0.03478725079571082
logistic_single 1 130 gev 0.06940321757501901
logistic_single 1 20point3 0.041470794660599566
logistic_single 1 string field theory 0.03574859058388444
logistic_single 1 metastable vacuum 0.03939929585683627
logistic_single 1 gravitational wave 0.03579099579072222
logistic_single 1 belle 0.040482124354348815
logistic_single 1 diboson 0.04699497337736984
logistic_single 1 gamma ray excess 0.04102444964969219
logistic_single 1 generalized parton distribution 0.036712724912920894
logistic_single 1 lux 0.017863439822720473
logistic_single 1 higgsless 0.031371348784805776
logistic_single 1 planckian 0.03362768521566033
logistic_single 2 125 gev RMSE=0.094
logistic_single 2 pentaquark,pentaquarks RMSE=0.016
logistic_single 2 wmap,wilkinson microwave anisotropy probe RMSE=0.016
logistic_single 2 lhc run RMSE=0.099
logistic_single 2 pamela RMSE=0.067
logistic_single 2 lattice gauge RMSE=0.027
logistic_single 2 tensor scalar ratio RMSE=0.031
logistic_single 2 brane,branes RMSE=0.018
logistic_single 2 atlas RMSE=0.04
logistic_single 2 horava lifshitz,hovrava lifshitz RMSE=0.086
logistic_single 2 lhc RMSE=0.011
logistic_single 2 noncommutative,noncommutativity,non commutative,non commutativity RMSE=0.018
logistic_single 2 black hole RMSE=0.017
logistic_single 2 anomalous magnetic moment RMSE=0.013
logistic_single 2 unparticle,unparticles RMSE=0.07
logistic_single 2 superluminal RMSE=0.027
logistic_single 2 m2 brane,m2 branes RMSE=0.037
logistic_single 2 126 gev RMSE=0.106
logistic_single 2 pp wave RMSE=0.034
logistic_single 2 lambert RMSE=0.053
logistic_single 2 tevatron RMSE=0.02
logistic_single 2 higgs RMSE=0.017
logistic_single 2 brane world RMSE=0.038
logistic_single 2 extra dimension RMSE=0.017
logistic_single 2 entropic RMSE=0.04
logistic_single 2 kamland RMSE=0.026
logistic_single 2 solar neutrino RMSE=0.015
logistic_single 2 neutrino oscillation RMSE=0.014
logistic_single 2 chern simon RMSE=0.013
logistic_single 2 forward backward asymmetry RMSE=0.015
logistic_single 2 dark energy RMSE=0.009
logistic_single 2 bulk RMSE=0.013
logistic_single 2 holographic RMSE=0.019
logistic_single 2 international linear collider,ilc RMSE=0.025
logistic_single 2 abjm RMSE=0.083
logistic_single 2 babar RMSE=0.008
logistic_single 2 daya bay RMSE=0.08
logistic_single 2 sqrts7 tev RMSE=0.098
logistic_single 2 130 gev RMSE=0.023
logistic_single 2 20point3 RMSE=0.111
logistic_single 2 string field theory RMSE=0.024
logistic_single 2 metastable vacuum RMSE=0.04
logistic_single 2 gravitational wave RMSE=0.023
logistic_single 2 belle RMSE=0.012
logistic_single 2 diboson RMSE=0.048
logistic_single 2 gamma ray excess RMSE=0.077
logistic_single 2 generalized parton distribution RMSE=0.016
logistic_single 2 lux RMSE=0.118
logistic_single 2 higgsless RMSE=0.023
logistic_single 2 planckian RMSE=0.021
gompertz_single 1 125 gev 0.027990893264820727
gompertz_single 1 pentaquark,pentaquarks 0.05501721478166251
gompertz_single 1 wmap,wilkinson microwave anisotropy probe 0.022845668269851106
gompertz_single 1 lhc run 0.028579821827405053
gompertz_single 1 pamela 0.045009318530154496
gompertz_single 1 lattice gauge 0.03881798360027813
gompertz_single 1 tensor scalar ratio 0.04165122755811488
gompertz_single 1 brane,branes 0.015897368843519718
gompertz_single 1 atlas 0.025302368295095044
gompertz_single 1 horava lifshitz,hovrava lifshitz 0.03284369710043905
gompertz_single 1 lhc 0.011982748137246894
gompertz_single 1 noncommutative,noncommutativity,non commutative,non commutativity 0.019001965897180995
gompertz_single 1 black hole 0.014927532025715336
gompertz_single 1 anomalous magnetic moment 0.03815112878690011
gompertz_single 1 unparticle,unparticles 0.04951062524644681
gompertz_single 1 superluminal 0.06769864550310536
gompertz_single 1 m2 brane,m2 branes 0.04913553590544861
gompertz_single 1 126 gev 0.055558733922474034
gompertz_single 1 pp wave 0.03301172366747924
gompertz_single 1 lambert 0.06642398728502467
gompertz_single 1 tevatron 0.025650416554382518
gompertz_single 1 higgs 0.023162438641479193
gompertz_single 1 brane world 0.02731737986487246
gompertz_single 1 extra dimension 0.01412142348710811
gompertz_single 1 entropic 0.04244470928862996
gompertz_single 1 kamland 0.041561443675259296
gompertz_single 1 solar neutrino 0.019991527081873878
gompertz_single 1 neutrino oscillation 0.02728917506505852
gompertz_single 1 chern simon 0.021921267236475462
gompertz_single 1 forward backward asymmetry 0.033792375388002636
gompertz_single 1 dark energy 0.011328325469397564
gompertz_single 1 bulk 0.016397373612903957
gompertz_single 1 holographic 0.013523033011049823
gompertz_single 1 international linear collider,ilc 0.028670475081917165
gompertz_single 1 abjm 0.01908721302892229
gompertz_single 1 babar 0.011772702532270439
gompertz_single 1 daya bay 0.033161025569256077
gompertz_single 1 sqrts7 tev 0.02246390374238338
gompertz_single 1 130 gev 0.06634184936424548
gompertz_single 1 20point3 0.05854946662529169
gompertz_single 1 string field theory 0.020875119663090757
gompertz_single 1 metastable vacuum 0.05222736462207674
gompertz_single 1 gravitational wave 0.027673653499397457
gompertz_single 1 belle 0.02693039986623777
gompertz_single 1 diboson 0.057996631146896745
gompertz_single 1 gamma ray excess 0.04859899332579853
gompertz_single 1 generalized parton distribution 0.02058799001190155
gompertz_single 1 lux 0.013340072121053249
gompertz_single 1 higgsless 0.02542571744624044
gompertz_single 1 planckian 0.027723454726782445
gompertz_single 2 125 gev RMSE=0.067
gompertz_single 2 pentaquark,pentaquarks RMSE=0.019
gompertz_single 2 wmap,wilkinson microwave anisotropy probe RMSE=0.021
gompertz_single 2 lhc run RMSE=0.069
gompertz_single 2 pamela RMSE=0.068
gompertz_single 2 lattice gauge RMSE=0.025
gompertz_single 2 tensor scalar ratio RMSE=0.027
gompertz_single 2 brane,branes RMSE=0.015
gompertz_single 2 atlas RMSE=0.018
gompertz_single 2 horava lifshitz,hovrava lifshitz RMSE=0.065
gompertz_single 2 lhc RMSE=0.005
gompertz_single 2 noncommutative,noncommutativity,non commutative,non commutativity RMSE=0.018
gompertz_single 2 black hole RMSE=0.01
</code>
## Reload the data
The preceding step is very long, and may take many hours to complete. Therefore, since we did it in chunks, we now reload the results from the files saved above._____no_output_____
<code>
# Load the data back up (since the steps above store the results in files, not local memory)
document_count_per_year = pickle.load(open(root+'document_count_per_year.p', "rb"))
datasets = {}
for dataset_name in dataset_names:
datasets[dataset_name] = {}
for curve_type in ['logistic', 'gompertz']:
datasets[dataset_name][curve_type] = {}
for peaks in ['single', 'double']:
df = pd.read_csv(root+'fitted_curves/'+dataset_name+'/'+curve_type+'_'+peaks+'.csv', index_col=0)
datasets[dataset_name][curve_type][peaks] = df_____no_output_____
</code>
### Graph: Example single-peaked fit for XML_____no_output_____
<code>
x = range(1988,2018)
term = 'xml'
# Load the original time series for xml
df = pickle.load(open(root+'clusters/dblp_cs.p', 'rb'))
# Divide the data for each year by the document count in each year
y_proportional = df[term].divide(document_count_per_year['dblp_cs'])
# Calculate Logistic and Gompertz curves from the parameters estimated earlier
y_logistic = logletlab.calculate_series(x,
datasets['dblp_cs']['logistic']['single']['a'][term],
datasets['dblp_cs']['logistic']['single']['k'][term],
datasets['dblp_cs']['logistic']['single']['b'][term],
'logistic'
)
# Since the fitting was done with a normalised version of the curve, we detransform it back into the original scale
y_logistic = detransform_fit(y_proportional.cumsum(), y_logistic, 'dblp_cs')
y_gompertz = logletlab.calculate_series(x,
datasets['dblp_cs']['gompertz']['single']['a'][term],
datasets['dblp_cs']['gompertz']['single']['k'][term],
datasets['dblp_cs']['gompertz']['single']['b'][term],
'gompertz'
)
y_gompertz = detransform_fit(y_proportional.cumsum(), y_gompertz, 'dblp_cs')
plt.figure(figsize=(6,3.7))
# Multiply by 100 so that values will be percentages
plt.plot(x, 100*y_proportional, label='Data', color='k')
plt.plot(x, 100*y_logistic, label='Logistic', color='k', linestyle=':')
plt.plot(x, 100*y_gompertz, label='Gompertz', color='k', linestyle='--')
plt.legend()
plt.grid()
plt.title("Logistic and Gompertz models fitted to the data for 'XML'", fontsize=12)
plt.xlim([1988,2017])
plt.ylim(0,2)
plt.ylabel("Documents containing term (%)", fontsize=11)
plt.xlabel("Year", fontsize=11)
plt.savefig(root+'images/xmlexamplefit.eps', format='eps', dpi=1200)_____no_output_____
</code>
### Table of results for Logistic vs Gompertz
Compare the error of the Logistic and Gompertz models across the entire dataset of 200 trends._____no_output_____
<code>
def statistics(df):
mean = df.mean()
    ci = 1.96*df.std()/np.sqrt(len(df))
median = df.median()
std = df.std()
return [mean, mean-ci, mean+ci, median, std]
logistic_error = pd.concat([datasets['arxiv_hep']['logistic']['single']['RMS'],
datasets['dblp_cs']['logistic']['single']['RMS'],
datasets['pubmed_mh']['logistic']['single']['RMS'],
datasets['pubmed_cancer']['logistic']['single']['RMS']])
gompertz_error = pd.concat([datasets['arxiv_hep']['gompertz']['single']['RMS'],
datasets['dblp_cs']['gompertz']['single']['RMS'],
datasets['pubmed_mh']['gompertz']['single']['RMS'],
datasets['pubmed_cancer']['gompertz']['single']['RMS']])
print('Logistic')
mean = logistic_error.mean()
ci = 1.96*logistic_error.std()/np.sqrt(len(logistic_error))
print('Mean =', np.round(mean,3))
print('95% CI = [', np.round(mean-ci, 3), ',', np.round(mean+ci, 3), ']')
print('Median =', np.round(logistic_error.median(), 3))
print('STDEV =', np.round(logistic_error.std(), 3))
print('')
print('Gompertz')
mean = gompertz_error.mean()
ci = 1.96*gompertz_error.std()/np.sqrt(len(logistic_error))
print('Mean =', np.round(mean,3))
print('95% CI = [', np.round(mean-ci, 3), ',', np.round(mean+ci, 3), ']')
print('Median =', np.round(gompertz_error.median(), 3))
print('STDEV =', np.round(gompertz_error.std(), 3))
Logistic
Mean = 0.029
95% CI = [ 0.027 , 0.031 ]
Median = 0.029
STDEV = 0.014
Gompertz
Mean = 0.023
95% CI = [ 0.021 , 0.026 ]
Median = 0.019
STDEV = 0.017
</code>
### Is the difference between the means significant?
Here we use an independent t-test to investigate significance._____no_output_____
<code>
scipy.stats.ttest_ind(logistic_error, gompertz_error, axis=0, equal_var=True, nan_policy='propagate')_____no_output_____
</code>
Yes, it is significant! However, since the data is slightly skewed, we can also test the significance of the difference between medians using Mood's median test:_____no_output_____
<code>
stat, p, med, tbl = scipy.stats.median_test(logistic_error, gompertz_error)
print(p)1.1980742802127062e-08
</code>
So either way, the p-value is very low, causing us to reject the null hypothesis. This leads us to the conclusion that the **Gompertz model** is more appropriate for the task of modelling publishing activity over time._____no_output_____### Box and whisker plots of Logistic and Gompertz model error_____no_output_____
<code>
axs = pd.DataFrame({
'CS Logistic': datasets['dblp_cs']['logistic']['single']['RMS'],
'CS Gompertz': datasets['dblp_cs']['gompertz']['single']['RMS'],
'Physics Logistic': datasets['arxiv_hep']['logistic']['single']['RMS'],
'Physics Gompertz': datasets['arxiv_hep']['gompertz']['single']['RMS'],
'MH Logistic': datasets['pubmed_mh']['logistic']['single']['RMS'],
'MH Gompertz': datasets['pubmed_mh']['gompertz']['single']['RMS'],
'Cancer Logistic': datasets['pubmed_cancer']['logistic']['single']['RMS'],
'Cancer Gompertz': datasets['pubmed_cancer']['gompertz']['single']['RMS'],
}).boxplot(figsize=(13,4), return_type='dict')
[item.set_color('k') for item in axs['boxes']]
[item.set_color('k') for item in axs['whiskers']]
[item.set_color('k') for item in axs['medians']]
plt.suptitle("")
p = plt.gca()
p.set_ylabel('RMSE error')
p.set_title('Distribution of RMSE error of models fitted to the four datasets', fontsize=12)
p.set_ylim([0,0.12])_____no_output_____
</code>
There is some variation across the datasets, although the Gompertz model is consistent in producing a lower median error than the Logistic model. It's worth noting also that the Particle Physics and Mental Health datasets are smaller than the Cancer and Computer Science ones. They also have higher error. _____no_output_____### Calculation of trend duration
The Loglet Lab documentation (https://logletlab.com/loglet/documentation/index) contains a formula for the time taken for a Gompertz curve to go from 10% to 90% of its eventual maximum cumulative frequency ($\Delta t$). Their calculation is that
$\Delta t = -\frac{\ln(\ln(81))}{r}$
However, our observation was that this did not remotely describe the observed span of the fitted curves. We have therefore done the derivation ourselves and found that the correct parameterisation is:
$\Delta t = -\frac{\ln(-\ln(0.9))-\ln(-\ln(0.1))}{r}$
Unfortunately, the LogletLab initial parameter guesses are tailored to this incorrect parameterisation so it is much simpler to use it when fitting the curve (and irrelevant, except when it comes to calculating curve span). Therefore we use it, then convert to the correct value using the conversion factor below:_____no_output_____
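As a quick numerical sanity check of the corrected expression (a minimal sketch with an arbitrary rate $r$; it simply confirms that the 10%-to-90% span of a normalised Gompertz curve matches the formula):

<code>
import numpy as np
from scipy.optimize import brentq

r, tm = 0.5, 0.0  # arbitrary example parameters

def gompertz_cumulative(t):
    return np.exp(-np.exp(-r * (t - tm)))

t10 = brentq(lambda t: gompertz_cumulative(t) - 0.1, -100, 100)
t90 = brentq(lambda t: gompertz_cumulative(t) - 0.9, -100, 100)

print(t90 - t10)                                           # ~6.17
print(-(np.log(-np.log(0.9)) - np.log(-np.log(0.1))) / r)  # ~6.17
</code>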
<code>
conversion_factor = -((np.log(-np.log(0.9))-np.log(-np.log(0.1)))/np.log(np.log(81)))_____no_output_____spans = pd.DataFrame({
'Computer Science': datasets['dblp_cs']['gompertz']['single']['a']*conversion_factor,
'Particle Physics': datasets['arxiv_hep']['gompertz']['single']['a']*conversion_factor,
'Mental Health': datasets['pubmed_mh']['gompertz']['single']['a']*conversion_factor,
'Cancer': datasets['pubmed_cancer']['gompertz']['single']['a']*conversion_factor
})
axs = spans.boxplot(figsize=(7.5,3.7), return_type='dict', fontsize=11)
[item.set_color('k') for item in axs['boxes']]
[item.set_color('k') for item in axs['whiskers']]
[item.set_color('k') for item in axs['medians']]
#plt.figure(figsize=(6,3.7))
plt.suptitle("")
p = plt.gca()
p.set_ylabel('Peak width (years)', fontsize=11)
p.set_title('Distribution of peak widths by dataset (Gomperz model)', fontsize=12)
p.set_ylim([0,100])
plt.savefig(root+'images/curvespans.eps', format='eps', dpi=1200)_____no_output_____
</code>
The data is quite skewed here...something to bear in mind when testing for significance later.
### Median trend durations in different disciplines_____no_output_____
<code>
for i , dataset_name in enumerate(dataset_names):
print(dataset_titles[i], '| Median trend duration =', np.round(np.median(datasets[dataset_name]['gompertz']['single']['a']*conversion_factor),1), 'years')
Computer science (dblp) | Median trend duration = 25.8 years
Particle physics (arXiv) | Median trend duration = 15.1 years
Mental health (PubMed) | Median trend duration = 24.6 years
Cancer (PubMed) | Median trend duration = 13.4 years
</code>
### Testing for significance between disciplines
There are substantial differences between the median trend durations, with Computer Science and Particle Physics having shorter durations and the two PubMed datasets having longer ones. But are these significant? Since the data is somewhat skewed, we use Mood's median test to find p-values for the differences (Mood's median test does not require normal data)._____no_output_____
<code>
for i in range(4):
for j in range(i,4):
if i == j:
pass
else:
spans1 = datasets[dataset_names[i]]['gompertz']['single']['a']*conversion_factor
spans2 = datasets[dataset_names[j]]['gompertz']['single']['a']*conversion_factor
stat, p, med, tbl = scipy.stats.median_test(spans1, spans2)
print(dataset_titles[i], 'vs', dataset_titles[j], 'p-value =', np.round(p,3))Computer science (dblp) vs Particle physics (arXiv) p-value = 0.003
Computer science (dblp) vs Mental health (PubMed) p-value = 0.841
Computer science (dblp) vs Cancer (PubMed) p-value = 0.009
Particle physics (arXiv) vs Mental health (PubMed) p-value = 0.072
Particle physics (arXiv) vs Cancer (PubMed) p-value = 0.549
Mental health (PubMed) vs Cancer (PubMed) p-value = 0.028
</code>
So the p value between Particle Physics and Computer Science is not acceptable, and neither is the p-value between Mental Health and Cancer. How about between these two groups?_____no_output_____
<code>
dblp_spans = datasets['dblp_cs']['gompertz']['single']['a']*conversion_factor
cancer_spans = datasets['pubmed_cancer']['gompertz']['single']['a']*conversion_factor
arxiv_spans = datasets['arxiv_hep']['gompertz']['single']['a']*conversion_factor
mh_spans = datasets['pubmed_mh']['gompertz']['single']['a']*conversion_factor
stat, p, med, tbl = scipy.stats.median_test(pd.concat([arxiv_spans, dblp_spans]), pd.concat([cancer_spans, mh_spans]))
print(np.round(p,5))0.00013
</code>
This difference IS significant!_____no_output_____### Double-peaking curves
We now move to analyse the data for double-peaked curves. For each term, we have calculated the error when two peaks are fitted, and the error when a single peak is fitted. We can compare the error in each case like so:_____no_output_____
<code>
print('Neural networks, single peak | error =', np.round(datasets['dblp_cs']['gompertz']['single']['RMS']['neural network'],3))
print('Neural networks, double peak| error =', np.round(datasets['dblp_cs']['gompertz']['double']['RMS']['neural network'],3))Neural networks, single peak | error = 0.031
Neural networks, double peak| error = 0.011
</code>
Where do we see the largest reductions?_____no_output_____
<code>
difference = datasets['dblp_cs']['gompertz']['single']['RMS']-datasets['dblp_cs']['gompertz']['double']['RMS']
for term in difference.index:
if difference[term] > 0.015:
print(term, np.round(difference[term], 3))neural network 0.02
machine learning 0.02
convolutional neural network,cnn 0.085
discrete mathematics 0.031
parallel 0.024
recurrent 0.026
embeddings 0.037
learning model 0.024
</code>
### Examples of double peaking curves
So in some cases there is an error reduction from moving from the single- to double-peaked model. What does this look like in practice?_____no_output_____
<code>
x = range(1988,2018)
# Load the original data
df = pickle.load(open(root+'clusters/dblp_cs.p', 'rb'))
# Choose four example terms
terms = ['big data', 'cloud', 'internet', 'neural network']
titles = ['a) Big Data', 'b) Cloud', 'c) Internet', 'd) Neural network']
# We want to set an overall y-label. The solution(found at https://stackoverflow.com/a/27430940) is to
# create an overall plot first, give it a y-label, then hide it by removing plot borders.
fig, big_ax = plt.subplots(figsize=(9.0, 6.0) , nrows=1, ncols=1, sharex=True)
big_ax.tick_params(labelcolor=(1,1,1,0.0), top=False, bottom=False, left=False, right=False)
big_ax._frameon = False
big_ax.set_ylabel("Documents containing term (%)", fontsize=11)
axs = [0,0,0,0]
axs[0]=fig.add_subplot(2,2,1)
axs[1]=fig.add_subplot(2,2,2)
axs[2]=fig.add_subplot(2,2,3)
axs[3]=fig.add_subplot(2,2,4)
fig.subplots_adjust(wspace=0.25, hspace=0.5, right=0.9)
# Set y limits manually beforehand
limits = [2, 4, 6, 8]
for i, term in enumerate(terms):
# Get the proportional document frequency of the term over time
y_proportional = df[term].divide(document_count_per_year['dblp_cs'])
# Multiply by 100 when plotting so that it reads as a percentage
axs[i].plot(x, 100*y_proportional, color='k')
axs[i].grid(True)
axs[i].set_xlabel("Year", fontsize=11)
axs[i].yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
# Now plot single and double peaked models
for j, curve_type in enumerate(['single', 'double']):
if curve_type == 'single':
y_overall = logletlab.calculate_series(x,
datasets['dblp_cs']['gompertz'][curve_type]['a'][term],
datasets['dblp_cs']['gompertz'][curve_type]['k'][term],
datasets['dblp_cs']['gompertz'][curve_type]['b'][term],
'gompertz')
y_overall = detransform_fit(y_proportional.cumsum(), y_overall, 'dblp_cs')
error = datasets['dblp_cs']['gompertz'][curve_type]['RMS'][term]
axs[i].plot(x, 100*y_overall, color='k', linestyle='--', label="single peak, error="+str(np.round(error,3)))
else:
y_overall, y_1, y_2 = logletlab.calculate_series_double(x,
datasets['dblp_cs']['gompertz'][curve_type]['a1'][term],
datasets['dblp_cs']['gompertz'][curve_type]['k1'][term],
datasets['dblp_cs']['gompertz'][curve_type]['b1'][term],
datasets['dblp_cs']['gompertz'][curve_type]['a2'][term],
datasets['dblp_cs']['gompertz'][curve_type]['k2'][term],
datasets['dblp_cs']['gompertz'][curve_type]['b2'][term],
'gompertz')
y_overall = detransform_fit(y_proportional.cumsum(), y_overall, 'dblp_cs')
error = datasets['dblp_cs']['gompertz'][curve_type]['RMS'][term]
axs[i].plot(x, 100*y_overall, color='k', linestyle=':', label="double peak, error="+str(np.round(error,3)))
axs[i].set_title(titles[i], fontsize=12)
axs[i].legend( fontsize=11)
axs[i].set_ylim([0, limits[i]])
# We want the same number of y ticks for each axis so that it reads more neatly
axs[2].set_yticks([0, 1.5, 3, 4.5, 6])
fig.savefig(root+'images/doublepeaked.eps', format='eps', dpi=1200)
_____no_output_____
</code>
### Graphs of all four datasets
In this section we try to show as many graphs of fitted models as can reasonably fit on a page. The two functions used to make the graphs below are very hacky! However, they work for this specific purpose._____no_output_____
<code>
def choose_ylimit(prevalence):
'''
This function works to find the most appropriate upper y limit to make the plots look good
'''
if max(prevalence) < 0.5:
return 0.5
elif max(prevalence) > 0.5 and max(prevalence) < 0.8:
return 0.8
elif max(prevalence) > 10 and max(prevalence) < 12:
return 12
elif max(prevalence) > 12 and max(prevalence) < 15:
return 15
elif max(prevalence) > 15 and max(prevalence) < 20:
return 20
else:
return np.ceil(max(prevalence))
def prettyplot(df, dataset_name, gompertz_params, yplots, xplots, title, ylabel, xlabel, xlims, plot_titles):
'''
Plot a nicely formatted set of trends with their fitted models. This function is rather hacky and made
for this specific purpose!
'''
fig, axs = plt.subplots(yplots, xplots)
plt.subplots_adjust(right=1, hspace=0.5, wspace=0.25)
plt.suptitle(title, fontsize=14)
fig.subplots_adjust(top=0.95)
fig.set_figheight(15)
fig.set_figwidth(9)
x = [int(i) for i in list(df.index)]
for i, term in enumerate(df.columns[0:yplots*xplots]):
prevalence = df[term].divide(document_count_per_year[dataset_name], axis=0)
if plot_titles == None:
title = term.split(',')[0]
else:
title = titles[i]
# Now get the gompertz representation of it
if gompertz_params['single']['RMS'][term]-gompertz_params['double']['RMS'][term] < 0.005:
# Use the single peaked version
y_overall = logletlab.calculate_series(x,
gompertz_params['single']['a'][term],
gompertz_params['single']['k'][term],
gompertz_params['single']['b'][term],
'gompertz')
y_overall = detransform_fit(prevalence.cumsum(), y_overall, dataset_name)
else:
y_overall, y_1, y_2 = logletlab.calculate_series_double(x,
gompertz_params['double']['a1'][term],
gompertz_params['double']['k1'][term],
gompertz_params['double']['b1'][term],
gompertz_params['double']['a2'][term],
gompertz_params['double']['k2'][term],
gompertz_params['double']['b2'][term],
'gompertz')
y_overall = detransform_fit(prevalence.cumsum(), y_overall, dataset_name)
axs[int(np.floor((i/xplots)%yplots)), i%xplots].plot(x, 100*prevalence, color='k', ls='-', label=title)
axs[int(np.floor((i/xplots)%yplots)), i%xplots].plot(x, 100*y_overall, color='k', ls='--', label='gompertz')
axs[int(np.floor((i/xplots)%yplots)), i%xplots].grid()
axs[int(np.floor((i/xplots)%yplots)), i%xplots].set_xlim(xlims[0], xlims[1])
axs[int(np.floor((i/xplots)%yplots)), i%xplots].set_ylim(0,choose_ylimit(100*prevalence))
axs[int(np.floor((i/xplots)%yplots)), i%xplots].set_title(title, fontsize=12)
axs[int(np.floor((i/xplots)%yplots)), i%xplots].yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
if i%yplots != yplots-1:
axs[i%yplots, int(np.floor((i/yplots)%xplots))].set_xticklabels([])
axs[5,0].set_ylabel(ylabel, fontsize=12)
_____no_output_____dataset_name = 'arxiv_hep'
df = pickle.load(open(root+'clusters/'+dataset_name+'.p', 'rb'))
titles = ['125 GeV', 'Pentaquark', 'WMAP', 'LHC Run', 'PAMELA', 'Lattice Gauge',
'Tensor-to-Scalar Ratio', 'Brane', 'ATLAS', 'Horava-Lifshitz', 'LHC',
'Noncommutative', 'Black Hole', 'Anomalous Magnetic Moment', 'Unparticle',
'Superluminal', 'M2 Brane', '126 GeV', 'pp-Wave', 'Lambert', 'Tevatron', 'Higgs',
'Brane World', 'Extra Dimension', 'Entropic', 'KamLAND', 'Solar Neutrino',
'Neutrino Oscillation', 'Chern Simon', 'Forward-Backward Asymmetry', 'Dark Energy',
'Bulk', 'Holographic', 'International Linear Collider', 'ABJM', 'BaBar']
prettyplot(df, 'arxiv_hep', datasets[dataset_name]['gompertz'], 12, 3, "Gompertz model fitted to trends in particle physics (1994-2017)", "Documents containing term (%)", None, [1990,2020], titles)
plt.savefig(root+'images/arxiv_hep.eps', format='eps', dpi=1200, bbox_inches='tight')_____no_output_____dataset_name = 'dblp_cs'
df = pickle.load(open(root+'clusters/'+dataset_name+'.p', 'rb'))
titles = ['Deep Learning', 'Neural Network', 'Machine Learning', 'Convolutional Neural Network',
'Java', 'Web', 'XML', 'Internet', 'Web Service', 'Internet of Things', 'World Wide Web',
'Speech', '5G', 'Discrete Mathematics', 'Parallel', 'Agent', 'Recurrent', 'SUP', 'Cloud',
'Big Data', 'Peer-to-peer', 'Wireless', 'Sensor Network', 'Electronic Commerce', 'ATM', 'Gene',
'Packet', 'Multimedia', 'Smart Grid', 'Embeddings', 'Ontology', 'Ad-hoc Network', 'Service Oriented',
'Web Site', 'RAC', 'Distributed Memory']
prettyplot(df, 'dblp_cs', datasets[dataset_name]['gompertz'], 12, 3, 'Gompertz model fitted to trends in computer science (1988-2017)', "Documents containing term (%)", None, [1980,2020], titles)
plt.savefig(root+'images/dblp_cs.eps', format='eps', dpi=1200, bbox_inches='tight')_____no_output_____dataset_name = 'pubmed_mh'
df = pickle.load(open(root+'clusters/'+dataset_name+'.p', 'rb'))
titles = ['Alcoholic', 'Abeta', 'Psycinfo', 'Dexamethasone', 'Human Immunodeficiency Virus',
'Database', 'Alzheimers Disease', 'Amitriptyline', 'Intravenous Drug', 'Bupropion',
'DSM iii', 'Depression', 'Drug User', 'Apolipoprotein', 'Epsilon4 Allele', 'Rett Syndrome',
'Cocaine', 'Heroin', 'Panic', 'Imipramine', 'Papaverine', 'Cortisol', 'Presenilin', 'Plasma',
'Tricyclic', 'Epsilon Allele', 'HTLV iii', 'Learning Disability', 'DSM IV', 'DSM',
'Retardation', 'Aldehyde', 'Protein Precursor', 'Bulimia', 'Narcoleptic', 'Acquired Immunodeficiency Syndrome']
prettyplot(df, 'pubmed_mh', datasets[dataset_name]['gompertz'], 12, 3, 'Gompertz model fitted to trends in mental health research (1975-2017)', 'Documents containing term (%)', None, [1970,2020], titles)
plt.savefig(root+'images/pubmed_mh.eps', format='eps', dpi=1200, bbox_inches='tight')_____no_output_____dataset_name = 'pubmed_cancer'
df = pickle.load(open(root+'clusters/'+dataset_name+'.p', 'rb'))
titles = ['Immunohistochemical', 'Monoclonal Antibody', 'NF KappaB', 'Polymerase Chain Reaction',
'Immune Checkpoint', 'Tumor Suppressor Gene', 'Beta Catenin', 'PD-L1', 'Interleukin',
'Oncogene', 'Microarray', '1Alpha', 'PC12 Cell', 'Magnetic Resonance',
'Proliferating Cell Nuclear Antigen', 'Human T-cell Leukemia', 'Adult T-cell Leukemia',
'lncRNA', 'Apoptosis', 'CD4', 'Recombinant', 'Acquired Immunodeficiency Syndrome',
'HR', 'Meta Analysis', 'IC50', 'Immunoperoxidase', 'Blot', 'Interfering RNA', '18F',
'(Estrogen) Receptor Alpha', 'OKT4', 'kDa', 'CA', 'OKT8', 'Imatinib', 'Helper (T-cells)']
prettyplot(df, 'pubmed_cancer', datasets[dataset_name]['gompertz'], 12, 3, 'Gompertz model fitted to trends in cancer research (1988-2017)', 'Documents containing term (%)', None, [1970,2020], titles)
plt.savefig(root+'images/pubmed_cancer.eps', format='eps', dpi=1200, bbox_inches='tight')_____no_output_____
</code>
| {
"repository": "etattershall/trend-lifecycles",
"path": "Modelling trend life cycles in scientific research.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 1047827,
"hexsha": "d02b369a91cd5775e3ce0eeb2ed88e0dc781baf6",
"max_line_length": 211688,
"avg_line_length": 509.148202138,
"alphanum_fraction": 0.9320240841
} |
# Notebook from danikhani/CV1-2020
Path: Exercise3/Exercise3/local_feature_matching.ipynb
# Local Feature Matching
By the end of this exercise, you will be able to transform images of a flat (planar) object, or images taken from the same point into a common reference frame. This is at the core of applications such as panorama stitching.
A quick overview:
1. We will start with histogram representations for images (or image regions).
2. Then we will detect robust keypoints in images and use simple histogram descriptors to describe the neighborhood of each keypoint.
3. After this we will compare descriptors from different images using a distance function and establish matching points.
4. Using these matching points we will estimate the homography transformation between two images of a planar object (wall with graffiti) and use this to warp one image to look like the other._____no_output_____
<code>
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import imageio
import cv2
import math
from scipy import ndimage
from attrdict import AttrDict
from mpl_toolkits.mplot3d import Axes3D
# Many useful functions
def plot_multiple(images, titles=None, colormap='gray',
max_columns=np.inf, imwidth=4, imheight=4, share_axes=False):
"""Plot multiple images as subplots on a grid."""
if titles is None:
titles = [''] *len(images)
assert len(images) == len(titles)
n_images = len(images)
n_cols = min(max_columns, n_images)
n_rows = int(np.ceil(n_images / n_cols))
fig, axes = plt.subplots(
n_rows, n_cols, figsize=(n_cols * imwidth, n_rows * imheight),
squeeze=False, sharex=share_axes, sharey=share_axes)
axes = axes.flat
# Hide subplots without content
for ax in axes[n_images:]:
ax.axis('off')
if not isinstance(colormap, (list,tuple)):
colormaps = [colormap]*n_images
else:
colormaps = colormap
for ax, image, title, cmap in zip(axes, images, titles, colormaps):
ax.imshow(image, cmap=cmap)
ax.set_title(title)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout()
def load_image(f_name):
return imageio.imread(f_name, as_gray=True).astype(np.float32)/255
def convolve_with_two(image, kernel1, kernel2):
"""Apply two filters, one after the other."""
image = ndimage.convolve(image, kernel1)
image = ndimage.convolve(image, kernel2)
return image
def gauss(x, sigma):
return 1 / np.sqrt(2 * np.pi) / sigma * np.exp(- x**2 / 2 / sigma**2)
def gaussdx(x, sigma):
return (-1 / np.sqrt(2 * np.pi) / sigma**3 * x *
np.exp(- x**2 / 2 / sigma**2))
def gauss_derivs(image, sigma):
kernel_radius = np.ceil(3.0 * sigma)
x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
G = gauss(x, sigma)
D = gaussdx(x, sigma)
image_dx = convolve_with_two(image, D, G.T)
image_dy = convolve_with_two(image, G, D.T)
return image_dx, image_dy
def gauss_filter(image, sigma):
kernel_radius = np.ceil(3.0 * sigma)
x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
G = gauss(x, sigma)
return convolve_with_two(image, G, G.T)
def gauss_second_derivs(image, sigma):
kernel_radius = np.ceil(3.0 * sigma)
x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
G = gauss(x, sigma)
D = gaussdx(x, sigma)
image_dx, image_dy = gauss_derivs(image, sigma)
image_dxx = convolve_with_two(image_dx, D, G.T)
image_dyy = convolve_with_two(image_dy, G, D.T)
image_dxy = convolve_with_two(image_dx, G, D.T)
return image_dxx, image_dxy, image_dyy
def map_range(x, start, end):
"""Maps values `x` that are within the range [start, end) to the range [0, 1)
Values smaller than `start` become 0, values larger than `end` become
slightly smaller than 1."""
return np.clip((x-start)/(end-start), 0, 1-1e-10)
def draw_keypoints(image, points):
image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
radius = image.shape[1]//100+1
for x, y in points:
cv2.circle(image, (int(x), int(y)), radius, (1, 0, 0), thickness=2)
return image
def draw_point_matches(im1, im2, point_matches):
result = np.concatenate([im1, im2], axis=1)
result = (result.astype(float)*0.6).astype(np.uint8)
im1_width = im1.shape[1]
for x1, y1, x2, y2 in point_matches:
cv2.line(result, (x1, y1), (im1_width+x2, y2),
color=(0,255,255), thickness=2, lineType=cv2.LINE_AA)
return result_____no_output_____%%html
<!-- This adds heading numbers to each section header -->
<style>
body {counter-reset: section;}
h2:before {counter-increment: section;
content: counter(section) " ";}
</style>_____no_output_____
</code>
## Histograms in 1D
If we have a grayscale image, creating a histogram of the gray values tells us how frequently each gray value appears in the image, at a certain discretization level, which is controlled by the number of bins.
Implement `compute_1d_histogram(im, n_bins)`. Given a grayscale image `im` with shape `[height, width]` and the number of bins `n_bins`, return a `histogram` array that contains the number of values falling into each bin. Assume that the values (of the image) are in the range \[0,1), so the specified number of bins should cover the range from 0 to 1. Normalize the resulting histogram to sum to 1._____no_output_____
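If you get stuck, one possible implementation is sketched here for reference (the `_sketch` suffix is just a placeholder name so it does not clash with the exercise function you are asked to write in the next cell)._____no_output_____
<code>
import numpy as np

def compute_1d_histogram_sketch(im, n_bins):
    """Possible implementation: bin values from [0, 1) and normalize to sum to 1."""
    # Each value v in [0, 1) falls into bin floor(v * n_bins)
    bin_indices = np.clip((im * n_bins).astype(int).reshape(-1), 0, n_bins - 1)
    histogram = np.bincount(bin_indices, minlength=n_bins).astype(np.float64)
    return histogram / histogram.sum()_____no_output_____
</code>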
<code>
def compute_1d_histogram(im, n_bins):
histogram = np.zeros(n_bins)
# YOUR CODE HERE
raise NotImplementedError()
return histogram
fig, axes = plt.subplots(1,4, figsize=(10,2), constrained_layout=True)
bin_counts = [2, 25, 256]
gray_img = imageio.imread('terrain.png', as_gray=True
).astype(np.float32)/256
axes[0].set_title('Image')
axes[0].imshow(gray_img, cmap='gray')
for ax, n_bins in zip(axes[1:], bin_counts):
ax.set_title(f'1D histogram with {n_bins} bins')
bin_size = 1/n_bins
x_axis = np.linspace(0, 1, n_bins, endpoint=False)+bin_size/2
hist = compute_1d_histogram(gray_img, n_bins)
ax.bar(x_axis, hist, bin_size)_____no_output_____
</code>
What is the effect of the different bin counts?_____no_output_____YOUR ANSWER HERE_____no_output_____## Histograms in 3D
If the pixel values are more than one-dimensional (e.g. three-dimensional RGB, for red, green and blue color channels), we can build a multi-dimensional histogram. In the R, G, B example this will tell us how frequently each *combination* of R, G, B values occurs. (Note that this contains more information than simply building 3 one-dimensional histograms, each for R, G and B, separately. Why?)
Implement a new function `compute_3d_histogram(im, n_bins)`, which takes as input an array of shape `[height, width, 3]` and returns a histogram of shape `[n_bins, n_bins, n_bins]`. Again, assume that the range of values is \[0,1) and normalize the histogram at the end.
Visualize the RGB histograms of the images `sunset.png` and `terrain.png` using the provided code and describe what you see. We cannot use a bar chart in 3D. Instead, in the position of each 3D bin ("voxel"), we have a sphere, whose volume is proportional to the histogram's value in that bin. The color of the sphere is simply the RGB color that the bin represents. Which number of bins gives the best impression of the color distribution?_____no_output_____
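A possible reference sketch using `np.histogramdd` (again with a placeholder `_sketch` name so it does not collide with the exercise cell below):_____no_output_____
<code>
import numpy as np

def compute_3d_histogram_sketch(im, n_bins):
    """Possible implementation: joint histogram over the three channels of `im`."""
    flat = im.reshape(-1, 3)  # one (R, G, B) row per pixel
    histogram, _ = np.histogramdd(flat, bins=n_bins, range=[(0, 1)] * 3)
    return (histogram / histogram.sum()).astype(np.float32)_____no_output_____
</code>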
<code>
def compute_3d_histogram(im, n_bins):
histogram = np.zeros([n_bins, n_bins, n_bins], dtype=np.float32)
# YOUR CODE HERE
raise NotImplementedError()
return histogram
def plot_3d_histogram(ax, data, axis_names='xyz'):
"""Plot a 3D histogram. We plot a sphere for each bin,
with volume proportional to the bin content."""
r,g,b = np.meshgrid(*[np.linspace(0,1, dim) for dim in data.shape], indexing='ij')
colors = np.stack([r,g,b], axis=-1).reshape(-1, 3)
marker_sizes = 300 * data**(1/3)
ax.scatter(r.flat, g.flat, b.flat, s=marker_sizes.flat, c=colors, alpha=0.5)
ax.set_xlabel(axis_names[0])
ax.set_ylabel(axis_names[1])
ax.set_zlabel(axis_names[2])
paths = ['sunset.png', 'terrain.png']
images = [imageio.imread(p) for p in paths]
plot_multiple(images, paths)
fig, axes = plt.subplots(1, 2, figsize=(8, 4), subplot_kw={'projection': '3d'})
for path, ax in zip(paths, axes):
im = imageio.imread(path).astype(np.float32)/256
hist = compute_3d_histogram(im, n_bins=16) # <--- FIDDLE WITH N_BINS HERE
plot_3d_histogram(ax, hist, 'RGB')
fig.tight_layout()_____no_output_____
</code>
## Histograms in 2D
Now modify your code to work in 2D. This can be useful, for example, for a gradient image that stores two values for each pixel: the vertical and horizontal derivative. Again, assume the values are in the range \[0,1).
Since gradients can be negative, we need to pick a relevant range of values and map them linearly to the range \[0,1) before applying `compute_2d_histogram`. This is implemented by the function `map_range` provided at the beginning of the notebook.
In 2D we can plot the histogram as an image. For better visibility of small values, we plot the logarithm of each bin value. Yellowish colors mean high values. The center is (0,0). Can you explain why each histogram looks the way it does for the test images?_____no_output_____
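A possible reference sketch, analogous to the 3D case but built on `np.histogram2d` (placeholder `_sketch` name; the exercise cell below is still the one to fill in):_____no_output_____
<code>
import numpy as np

def compute_2d_histogram_sketch(im, n_bins):
    """Possible implementation: joint histogram of the two values stored at each pixel."""
    flat = im.reshape(-1, 2)
    histogram, _, _ = np.histogram2d(
        flat[:, 0], flat[:, 1], bins=n_bins, range=[(0, 1), (0, 1)])
    return (histogram / histogram.sum()).astype(np.float32)_____no_output_____
</code>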
<code>
def compute_2d_histogram(im, n_bins):
histogram = np.zeros([n_bins, n_bins], dtype=np.float32)
# YOUR CODE HERE
raise NotImplementedError()
return histogram
def compute_gradient_histogram(rgb_im, n_bins):
# Convert to grayscale
gray_im = cv2.cvtColor(im, cv2.COLOR_RGB2GRAY).astype(float)
# Compute Gaussian derivatives
dx, dy = gauss_derivs(gray_im, sigma=2.0)
# Map the derivatives between -10 and 10 to be between 0 and 1
dx = map_range(dx, start=-10, end=10)
dy = map_range(dy, start=-10, end=10)
# Stack the two derivative images along a new
# axis at the end (-1 means "last")
gradients = np.stack([dy, dx], axis=-1)
return dx, dy, compute_2d_histogram(gradients, n_bins=16)
paths = ['model/obj4__0.png', 'model/obj42__0.png']
images, titles = [], []
for path in paths:
im = imageio.imread(path)
dx, dy, hist = compute_gradient_histogram(im, n_bins=16)
images += [im, dx, dy, np.log(hist+1e-3)]
titles += [path, 'dx', 'dy', 'Histogram (log)']
plot_multiple(images, titles, max_columns=4, imwidth=2,
imheight=2, colormap='viridis')_____no_output_____
</code>
Similar to the function `compute_gradient_histogram` above, we can build a "Mag/Lap" histogram from the gradient magnitudes and the Laplacians at each pixel. Refer back to the first exercise to refresh your knowledge of the Laplacian. Implement this in `compute_maglap_histogram`!
Make sure to map the relevant range of the gradient magnitude and Laplacian values to \[0,1) using `map_range()`. For the magnitude you can assume that the values will mostly lie in the range \[0, 15) and the Laplacian in the range \[-5, 5)._____no_output_____
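One way to fill in the missing step is sketched below. It reuses the `map_range` helper defined at the top of this notebook; the helper name and its arguments (`dx`, `dy`, `dxx`, `dyy`, as computed in the cell below) are placeholders for illustration._____no_output_____
<code>
def magnitude_and_laplacian_sketch(dx, dy, dxx, dyy):
    """Possible completion: gradient magnitude and Laplacian, mapped to [0, 1)."""
    mag = map_range(np.sqrt(dx**2 + dy**2), start=0, end=15)  # magnitude mostly in [0, 15)
    lap = map_range(dxx + dyy, start=-5, end=5)               # Laplacian = dxx + dyy, mostly in [-5, 5)
    return mag, lap_____no_output_____
</code>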
<code>
def compute_maglap_histogram(rgb_im, n_bins):
# Convert to grayscale
gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float)
# Compute Gaussian derivatives
sigma = 2
kernel_radius = np.ceil(3.0 * sigma)
x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
G = gauss(x, sigma)
D = gaussdx(x, sigma)
dx = convolve_with_two(gray_im, D, G.T)
dy = convolve_with_two(gray_im, G, D.T)
# Compute second derivatives
dxx = convolve_with_two(dx, D, G.T)
dyy = convolve_with_two(dy, G, D.T)
# Compute gradient magnitude and Laplacian
# YOUR CODE HERE
raise NotImplementedError()
mag_lap = np.stack([mag, lap], axis=-1)
return mag, lap, compute_2d_histogram(mag_lap, n_bins=16)
paths = [f'model/obj{i}__0.png' for i in [20, 37, 36, 55]]
images, titles = [], []
for path in paths:
im = imageio.imread(path)
mag, lap, hist = compute_maglap_histogram(im, n_bins=16)
images += [im, mag, lap, np.log(hist+1e-3)]
titles += [path, 'Gradient magn.', 'Laplacian', 'Histogram (log)']
plot_multiple(images, titles, imwidth=2, imheight=2,
max_columns=4, colormap='viridis')_____no_output_____
</code>
## Comparing Histograms
The above histograms looked different, but to quantify this objectively, we need a **distance measure**. The Euclidean distance is a common one.
Implement the function `euclidean_distance`, which takes two histograms $P$ and $Q$ as input and returns their Euclidean distance:
$$
\textit{dist}_{\textit{Euclidean}}(P, Q) = \sqrt{\sum_{i=1}^{D}{(P_i - Q_i)^2}}
$$
Another commonly used distance for histograms is the so-called chi-squared ($\chi^2$) distance, commonly defined as:
$$
\chi^2(P, Q) = \frac{1}{2} \sum_{i=1}^{D}\frac{(P_i - Q_i)^2}{P_i + Q_i + \epsilon}
$$
where a small value $\epsilon$ is used to avoid division by zero.
Implement it as `chi_square_distance`. The inputs `hist1` and `hist2` are histogram vectors containing the bin values. Remember to use numpy array functions (such as `np.sum()`) instead of looping over each element in Python (looping is slow)._____no_output_____
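Possible reference implementations of both distances, written directly from the two formulas above (sketches with placeholder `_sketch` names):_____no_output_____
<code>
import numpy as np

def euclidean_distance_sketch(hist1, hist2):
    """Euclidean distance between two histogram vectors."""
    return np.sqrt(np.sum((hist1 - hist2) ** 2))

def chi_square_distance_sketch(hist1, hist2, eps=1e-3):
    """Chi-square distance, with eps guarding against division by zero."""
    return 0.5 * np.sum((hist1 - hist2) ** 2 / (hist1 + hist2 + eps))_____no_output_____
</code>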
<code>
def euclidean_distance(hist1, hist2):
# YOUR CODE HERE
raise NotImplementedError()
def chi_square_distance(hist1, hist2, eps=1e-3):
# YOUR CODE HERE
raise NotImplementedError()_____no_output_____
</code>
Now let's take the image `obj1__0.png` as a reference and compare it to `obj91__0.png` and `obj94__0.png` using an RGB histogram, with both the Euclidean and the chi-square distance. Can you interpret the results?
You can also try other images from the "model" folder._____no_output_____
<code>
im1 = imageio.imread('model/obj1__0.png')
im2 = imageio.imread('model/obj91__0.png')
im3 = imageio.imread('model/obj94__0.png')
n_bins = 8
h1 = compute_3d_histogram(im1/256, n_bins)
h2 = compute_3d_histogram(im2/256, n_bins)
h3 = compute_3d_histogram(im3/256, n_bins)
eucl_dist1 = euclidean_distance(h1, h2)
chisq_dist1 = chi_square_distance(h1, h2)
eucl_dist2 = euclidean_distance(h1, h3)
chisq_dist2 = chi_square_distance(h1, h3)
titles = ['Reference image',
f'Eucl: {eucl_dist1:.3f}, ChiSq: {chisq_dist1:.3f}',
f'Eucl: {eucl_dist2:.3f}, ChiSq: {chisq_dist2:.3f}']
plot_multiple([im1, im2, im3], titles, imheight=3)_____no_output_____
</code>
# Keypoint Detection
Now we turn to finding keypoints in images.
## Harris Detector
The Harris detector searches for points, around which the second-moment matrix $M$ of the gradient vector has two large eigenvalues (This $M$ is denoted by $C$ in the Grauman & Leibe script). This matrix $M$ can be written as:
$$
M(\sigma, \tilde{\sigma}) = G(\tilde{\sigma}) \star \left[\begin{matrix} I_x^2(\sigma) & I_x(\sigma) \cdot I_y(\sigma) \cr I_x(\sigma)\cdot I_y(\sigma) & I_y^2(\sigma) \end{matrix}\right]
$$
Note that the matrix $M$ is computed for each pixel (we omitted the $x, y$ dependency in this formula for clarity). In the above notation the 4 elements of the second-moment matrix are considered as full 2D "images" (signals) and each of these 4 "images" is convolved with the Gaussian $G(\tilde{\sigma})$ independently. We have two sigmas $\sigma$ and $\tilde{\sigma}$ here for two different uses of Gaussian blurring:
* first for computing the derivatives themselves (as derivatives-of-Gaussian) with $\sigma$, and
* then another Gaussian with $\tilde{\sigma}$ that operates on "images" containing the *products* of the derivatives (such as $I_x^2(\sigma)$) in order to collect summary statistics from a window around each point.
Instead of explicitly computing the eigenvalues $\lambda_1$ and $\lambda_2$ of $M$, the following equivalences are used:
$$
\det(M) = \lambda_1 \lambda_2 = (G(\tilde{\sigma}) \star I_x^2)\cdot (G(\tilde{\sigma}) \star I_y^2) - (G(\tilde{\sigma}) \star (I_x\cdot I_y))^2
$$
$$
\mathrm{trace}(M) = \lambda_1 + \lambda_2 = G(\tilde{\sigma}) \star I_x^2 + G(\tilde{\sigma}) \star I_y^2
$$
The Harris criterion is then:
$$
\det(M) - \alpha \cdot \mathrm{trace}^2(M) > t
$$
In practice, the parameters are usually set as $\tilde{\sigma} = 2 \sigma, \alpha=0.06$.
Read more in Section 3.2.1.2 of the Grauman & Leibe script (grauman-leibe-ch3-local-features.pdf in the Moodle).
----
Write a function `harris_score(im, opts)` which:
- computes the values of $M$ **for each pixel** of the grayscale image `im`
- calculates the trace and the determinant at each pixel
- combines them to the Harris response and returns the resulting image
To handle the large number of configurable parameters in this exercise, we will store them in an `opts` object. Use `opts.sigma1` for $\sigma$, `opts.sigma2` for $\tilde{\sigma}$ and `opts.alpha` for $\alpha$.
Furthermore, implement `nms(scores)` to perform non-maximum suppression of the response image.
Then look at `score_map_to_keypoints(scores, opts)`. It takes a score map and returns an array of shape `[number_of_corners, 2]`, with each row being the $(x,y)$ coordinates of a found keypoint. We use `opts.score_threshold` as the threshold for considering a point to be a keypoint. (This is quite similar to how we found detections from score maps in the sliding-window detection exercise.)_____no_output_____
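A possible sketch of the Harris response and a simple 3x3 non-maximum suppression is shown below. It reuses the `gauss_derivs` and `gauss_filter` helpers from the top of this notebook and `scipy.ndimage` (already imported as `ndimage`); the `_sketch` names are placeholders and the exercise cell below is still the one to complete._____no_output_____
<code>
def harris_scores_sketch(im, opts):
    """Possible Harris response: det(M) - alpha * trace(M)^2 at every pixel."""
    dx, dy = gauss_derivs(im, opts.sigma1)
    # Entries of the second-moment matrix, each smoothed with sigma2
    sxx = gauss_filter(dx * dx, opts.sigma2)
    syy = gauss_filter(dy * dy, opts.sigma2)
    sxy = gauss_filter(dx * dy, opts.sigma2)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - opts.alpha * trace ** 2

def nms_sketch(scores):
    """Zero out pixels that are not the maximum of their 3x3 neighborhood."""
    local_max = ndimage.maximum_filter(scores, size=3)
    return np.where(scores == local_max, scores, 0.0)_____no_output_____
</code>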
<code>
def harris_scores(im, opts):
dx, dy = gauss_derivs(im, opts.sigma1)
# YOUR CODE HERE
raise NotImplementedError()
return scores
def nms(scores):
"""Non-maximum suppression"""
# YOUR CODE HERE
raise NotImplementedError()
return scores_out
def score_map_to_keypoints(scores, opts):
corner_ys, corner_xs = (scores > opts.score_threshold).nonzero()
return np.stack([corner_xs, corner_ys], axis=1)_____no_output_____
</code>
Now check the score maps and keypoints:_____no_output_____
<code>
opts = AttrDict()
opts.sigma1=2
opts.sigma2=opts.sigma1*2
opts.alpha=0.06
opts.score_threshold=1e-8
paths = ['checkboard.jpg', 'graf.png', 'gantrycrane.png']
images = []
titles = []
for path in paths:
image = load_image(path)
score_map = harris_scores(image, opts)
score_map_nms = nms(score_map)
keypoints = score_map_to_keypoints(score_map_nms, opts)
keypoint_image = draw_keypoints(image, keypoints)
images += [score_map, keypoint_image]
titles += ['Harris response scores', 'Harris keypoints']
plot_multiple(images, titles, max_columns=2, colormap='viridis')_____no_output_____
</code>
## Hessian Detector
The Hessian detector operates on the second-derivative matrix $H$ (called the “Hessian” matrix)
$$
H = \left[\begin{matrix}I_{xx}(\sigma) & I_{xy}(\sigma) \cr I_{xy}(\sigma) & I_{yy}(\sigma)\end{matrix}\right] \tag{6}
$$
Note that these are *second* derivatives, while the Harris detector computes *products* of *first* derivatives! The score is computed as follows:
$$
\sigma^4 \det(H) = \sigma^4 (I_{xx}I_{yy} - I^2_{xy}) > t \tag{7}
$$
You can read more in Section 3.2.1.1 of the Grauman & Leibe script (grauman-leibe-ch3-local-features.pdf in the Moodle).
-----
Write a function `hessian_scores(im, opts)`, which:
- computes the four entries of the $H$ matrix for each pixel of a given image,
- calculates the determinant of $H$ to get the response image
Use `opts.sigma1` for computing the Gaussian second derivatives._____no_output_____
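A possible sketch, reusing the `gauss_second_derivs` helper defined at the top of this notebook (placeholder `_sketch` name):_____no_output_____
<code>
def hessian_scores_sketch(im, opts):
    """Possible Hessian response: sigma^4 * det(H) at every pixel."""
    dxx, dxy, dyy = gauss_second_derivs(im, opts.sigma1)
    return opts.sigma1 ** 4 * (dxx * dyy - dxy ** 2)_____no_output_____
</code>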
<code>
def hessian_scores(im, opts):
height, width = im.shape
# YOUR CODE HERE
raise NotImplementedError()
return scores_____no_output_____opts = AttrDict()
opts.sigma1=3
opts.score_threshold=5e-4
paths = ['checkboard.jpg', 'graf.png', 'gantrycrane.png']
images = []
titles = []
for path in paths:
image = load_image(path)
score_map = hessian_scores(image, opts)
score_map_nms = nms(score_map)
keypoints = score_map_to_keypoints(score_map_nms, opts)
keypoint_image = draw_keypoints(image, keypoints)
images += [score_map, keypoint_image]
titles += ['Hessian scores', 'Hessian keypoints']
plot_multiple(images, titles, max_columns=2, colormap='viridis')_____no_output_____
</code>
## Region Descriptor Matching
Now that we can detect robust keypoints, we can try to match them across different images of the same object. For this we need a way to compare the neighborhood of a keypoint found in one image with the neighborhood of a keypoint found in another. If the neighborhoods are similar, then the keypoints may represent the same physical point on the object.
To compare two neighborhoods, we compute a **descriptor** vector for the image window around each keypoint and then compare these descriptors using a **distance function**.
Inspect the following `compute_rgb_descriptors` function: it takes a window around each point in `points`, computes a 3D RGB histogram for each window, and returns these histograms as row vectors in a `descriptors` array.
Now write the function `compute_maglap_descriptors`, which works very similarly to `compute_rgb_descriptors`, but computes two-dimensional gradient-magnitude/Laplacian histograms. (Compute the gradient magnitude and the Laplacian for the full image first. See also the beginning of this exercise.) Pay attention to the scale of the gradient-magnitude values._____no_output_____
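A possible sketch of such a descriptor function is given below. It assumes that `compute_2d_histogram` from earlier in this notebook has been implemented, and it reuses the `gauss_derivs`, `gauss_second_derivs` and `map_range` helpers; the `_sketch` name is a placeholder._____no_output_____
<code>
def compute_maglap_descriptors_sketch(rgb_im, points, opts):
    """Possible implementation: a 2D magnitude/Laplacian histogram per keypoint window."""
    gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float)
    dx, dy = gauss_derivs(gray_im, opts.sigma1)
    dxx, dxy, dyy = gauss_second_derivs(gray_im, opts.sigma1)
    # Map magnitude (mostly [0, 15)) and Laplacian (mostly [-5, 5)) into [0, 1) before binning
    mag = map_range(np.sqrt(dx ** 2 + dy ** 2), start=0, end=15)
    lap = map_range(dxx + dyy, start=-5, end=5)
    mag_lap = np.stack([mag, lap], axis=-1)

    win_half = opts.descriptor_window_halfsize
    descriptors = []
    for (x, y) in points:
        window = mag_lap[max(0, y - win_half):y + win_half + 1,
                         max(0, x - win_half):x + win_half + 1]
        histogram = compute_2d_histogram(window, opts.n_histogram_bins)
        descriptors.append(histogram.reshape(-1))
    return np.array(descriptors)_____no_output_____
</code>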
<code>
def compute_rgb_descriptors(rgb_im, points, opts):
"""For each (x,y) point in `points` calculate the 3D RGB histogram
descriptor and stack these into a matrix
of shape [num_points, descriptor_length]
"""
win_half = opts.descriptor_window_halfsize
descriptors = []
rgb_im_01 = rgb_im.astype(np.float32)/256
for (x, y) in points:
y_start = max(0, y-win_half)
y_end = y+win_half+1
x_start = max(0, x-win_half)
x_end = x+win_half+1
window = rgb_im_01[y_start:y_end, x_start:x_end]
histogram = compute_3d_histogram(window, opts.n_histogram_bins)
descriptors.append(histogram.reshape(-1))
return np.array(descriptors)
def compute_maglap_descriptors(rgb_im, points, opts):
"""For each (x,y) point in `points` calculate the magnitude-Laplacian
2D histogram descriptor and stack these into a matrix of
shape [num_points, descriptor_length]
"""
# Compute the gradient magnitude and Laplacian for each pixel first
gray_im = cv2.cvtColor(rgb_im, cv2.COLOR_RGB2GRAY).astype(float)
kernel_radius = np.ceil(3.0 * opts.sigma1)
x = np.arange(-kernel_radius, kernel_radius + 1)[np.newaxis]
G = gauss(x, opts.sigma1)
D = gaussdx(x, opts.sigma1)
dx = convolve_with_two(gray_im, D, G.T)
dy = convolve_with_two(gray_im, G, D.T)
dxx = convolve_with_two(dx, D, G.T)
dyy = convolve_with_two(dy, G, D.T)
# YOUR CODE HERE
raise NotImplementedError()
return np.array(descriptors)_____no_output_____
</code>
Now let's implement the distance computation between descriptors. Look at `compute_euclidean_distances`. It takes descriptors that were computed for keypoints found in two different images and returns the pairwise distances between all point pairs.
Implement `compute_chi_square_distances` in a similar manner._____no_output_____
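A possible sketch that mirrors `compute_euclidean_distances`, processing one row of `descriptors1` at a time (placeholder `_sketch` name):_____no_output_____
<code>
def compute_chi_square_distances_sketch(descriptors1, descriptors2, eps=1e-3):
    """Pairwise chi-square distances between two sets of descriptors."""
    distances = np.empty((len(descriptors1), len(descriptors2)))
    for i, desc1 in enumerate(descriptors1):
        diff = descriptors2 - desc1
        distances[i] = 0.5 * np.sum(diff ** 2 / (descriptors2 + desc1 + eps), axis=-1)
    return distances_____no_output_____
</code>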
<code>
def compute_euclidean_distances(descriptors1, descriptors2):
distances = np.empty((len(descriptors1), len(descriptors2)))
for i, desc1 in enumerate(descriptors1):
distances[i] = np.linalg.norm(descriptors2-desc1, axis=-1)
return distances
def compute_chi_square_distances(descriptors1, descriptors2):
distances = np.empty((len(descriptors1), len(descriptors2)))
# YOUR CODE HERE
raise NotImplementedError()
return distances_____no_output_____
</code>
Given the distances, a simple way to produce point matches is to take each descriptor extracted from a keypoint of the first image, and find the keypoint in the second image with the nearest descriptor. The full pipeline from images to point matches is implemented below in the function `find_point_matches(im1, im2, opts)`.
Experiment with different parameter settings. Which keypoint detector, region descriptor and distance function works best?_____no_output_____
<code>
def find_point_matches(im1, im2, opts):
# Process first image
im1_gray = cv2.cvtColor(im1, cv2.COLOR_RGB2GRAY).astype(float)/255
score_map1 = nms(opts.score_func(im1_gray, opts))
points1 = score_map_to_keypoints(score_map1, opts)
descriptors1 = opts.descriptor_func(im1, points1, opts)
# Process second image independently of first
im2_gray = cv2.cvtColor(im2, cv2.COLOR_RGB2GRAY).astype(float)/255
score_map2 = nms(opts.score_func(im2_gray, opts))
points2 = score_map_to_keypoints(score_map2, opts)
descriptors2 = opts.descriptor_func(im2, points2, opts)
# Compute descriptor distances
distances = opts.distance_func(descriptors1, descriptors2)
# Find the nearest neighbor of each descriptor from the first image
# among descriptors of the second image
closest_ids = np.argmin(distances, axis=1)
closest_dists = np.min(distances, axis=1)
# Sort the point pairs in increasing order of distance
# (most similar ones first)
ids1 = np.argsort(closest_dists)
ids2 = closest_ids[ids1]
points1 = points1[ids1]
points2 = points2[ids2]
# Stack the point matches into rows of (x1, y1, x2, y2) values
point_matches = np.concatenate([points1, points2], axis=1)
return point_matches
# Try changing these values in different ways and see if you can explain
# why the result changes the way it does.
opts = AttrDict()
opts.sigma1=2
opts.sigma2=opts.sigma1*2
opts.alpha=0.06
opts.score_threshold=1e-8
opts.descriptor_window_halfsize = 20
opts.n_histogram_bins = 16
opts.score_func = harris_scores
opts.descriptor_func = compute_maglap_descriptors
opts.distance_func = compute_chi_square_distances
# Or try these:
#opts.sigma1=3
#opts.n_histogram_bins = 8
#opts.score_threshold=5e-4
#opts.score_func = hessian_scores
#opts.descriptor_func = compute_rgb_descriptors
#opts.distance_func = compute_euclidean_distances
im1 = imageio.imread('graff5/img1.jpg')
im2 = imageio.imread('graff5/img2.jpg')
point_matches = find_point_matches(im1, im2, opts)
match_image = draw_point_matches(im1, im2, point_matches[:50])
plot_multiple([match_image], imwidth=16, imheight=8)_____no_output_____
</code>
## Homography Estimation
Now that we have these pairs of matching points (also called point correspondences), what can we do with them? In the above case, the wall is planar (flat) and the camera was moved towards the left to take the second image compared to the first image. Therefore, the way that points on the wall are transformed across these two images can be modeled as a **homography**. Homographies can model two distinct effects:
* transformation across images of **any scene** taken from the **exact same camera position** (center of projection)
* transformation across images of a **planar object** taken from **any camera position**.
We are dealing with the second case in these graffiti images. Therefore if our point matches are correct, there should be a homography that transforms image points in the first image to the corresponding points in the second image. Recap the algorithm from the lecture for finding this homography (it's called the **Direct Linear Transformation**, DLT). There is a 2 page description of it in the Grauman & Leibe script (grauman-leibe-ch5-geometric-verification.pdf in the Moodle) in Section 5.1.3.
----
Now let's actually put this into practice. Implement `estimate_homography(point_matches)`, which returns a 3x3 homography matrix that transforms points of the first image to points of the second image.
The steps are:
1. Build the matrix $A$ from the point matches according to Eq. 5.7 from the script.
2. Apply SVD using `np.linalg.svd(A)`. It returns $U,d,V^T$. Note that the last return value is not $V$ but $V^T$.
3. Compute $\mathbf{h}$ from $V$ according to Eq. 5.9 or 5.10
4. Reshape $\mathbf{h}$ to the 3x3 matrix $H$ and return it.
The input `point_matches` contains as many rows as there are point matches (correspondences) and each row has 4 elements: $x, y, x', y'$._____no_output_____
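A possible DLT sketch is shown below. The exact sign and row ordering of $A$ may differ from Eq. 5.7 in the script, but any equivalent formulation yields the same homography up to scale; the `_sketch` name is a placeholder and the exercise cell below is still the one to fill in._____no_output_____
<code>
def estimate_homography_sketch(point_matches):
    """Possible DLT implementation: solve A h = 0 via SVD and reshape h to 3x3."""
    n_matches = len(point_matches)
    A = np.empty((n_matches * 2, 9))
    for i, (x1, y1, x2, y2) in enumerate(point_matches):
        A[2 * i]     = [-x1, -y1, -1,   0,   0,  0, x2 * x1, x2 * y1, x2]
        A[2 * i + 1] = [  0,   0,  0, -x1, -y1, -1, y2 * x1, y2 * y1, y2]
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]  # right singular vector belonging to the smallest singular value
    return h.reshape(3, 3)_____no_output_____
</code>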
<code>
def estimate_homography(point_matches):
n_matches = len(point_matches)
A = np.empty((n_matches*2, 9))
for i, (x1, y1, x2, y2) in enumerate(point_matches):
# YOUR CODE HERE
raise NotImplementedError()
return H_____no_output_____
</code>
The `point_matches` have already been sorted in the `find_point_matches` function according to the descriptor distances, so the more accurate pairs will be near the beginning. We can use the top $k$ pairs, e.g. $k=10$, in the homography estimation and get a reasonably accurate estimate. What $k$ gives the best result? What happens if you use too many? Why?
We can use `cv2.warpPerspective` to warp the first image to the reference frame of the second. Does the result look good?
Can you interpret the entries of the resulting $H$ matrix and are the numbers as you would expect them for these images?
You can also try other images from the `graff5` folder or the `NewYork` folder._____no_output_____
<code>
# See what happens if you change top_k below
top_k = 10
H = estimate_homography(point_matches[:top_k])
H_string = np.array_str(H, precision=5, suppress_small=True)
print('The estimated homography matrix H is\n', H_string)
im1_warped = cv2.warpPerspective(im1, H, (im2.shape[1], im2.shape[0]))
absdiff = np.abs(im2.astype(np.float32)-im1_warped.astype(np.float32))/255
plot_multiple([im1, im2, im1_warped, absdiff],
['First image', 'Second image',
'Warped first image', 'Absolute difference'],
max_columns=2, colormap='viridis')_____no_output_____
</code>
| {
"repository": "danikhani/CV1-2020",
"path": "Exercise3/Exercise3/local_feature_matching.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 41285,
"hexsha": "d02d1b8258688297dd5f45d601777cca6f0d0880",
"max_line_length": 507,
"avg_line_length": 40.9167492567,
"alphanum_fraction": 0.6017197529
} |
# Notebook from fcivardi/spark-nlp-workshop
Path: tutorials/old_generation_notebooks/colab/6- Sarcasm Classifiers (TF-IDF).ipynb
_____no_output_____https://www.kaggle.com/danofer/sarcasm
### Context
This dataset contains 1.3 million sarcastic comments from the Internet commentary website Reddit. The dataset was generated by scraping comments from Reddit (not by me :)) containing the `\s` (sarcasm) tag. This tag is often used by Redditors to indicate that their comment is in jest and not meant to be taken seriously, and is generally a reliable indicator of sarcastic comment content.
### Content
Data has balanced and imbalanced (i.e. true distribution) versions. (The true ratio is about 1:100.) The corpus has 1.3 million sarcastic statements, along with what they responded to as well as many non-sarcastic comments from the same source.
Labelled comments are in the `train-balanced-sarcasm.csv` file.
### Acknowledgements
The data was gathered by Mikhail Khodak, Nikunj Saunshi and Kiran Vodrahalli for their article "[A Large Self-Annotated Corpus for Sarcasm](https://arxiv.org/abs/1704.05579)". The data is hosted [here](http://nlp.cs.princeton.edu/SARC/0.0/).
Citation:
    @unpublished{SARC,
      authors={Mikhail Khodak and Nikunj Saunshi and Kiran Vodrahalli},
      title={A Large Self-Annotated Corpus for Sarcasm},
      url={https://arxiv.org/abs/1704.05579},
      year=2017
    }
[Annotation of files in the original dataset: readme.txt](http://nlp.cs.princeton.edu/SARC/0.0/readme.txt).
### Inspiration
- Predicting sarcasm and relevant NLP features (e.g. subjective determinant, racism, conditionals, sentiment-heavy words, "Internet slang" and specific phrases).
- Sarcasm vs sentiment.
- Unusual linguistic features such as caps, italics, or elongated words, e.g. "Yeahhh, I'm sure THAT is the right answer".
- Topics that people tend to react to sarcastically._____no_output_____
<code>
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp_____no_output_____import sys
import time
import sparknlp
from pyspark.sql import SparkSession
packages = [
    'JohnSnowLabs:spark-nlp:2.5.5'
]
spark = SparkSession \
.builder \
.appName("ML SQL session") \
.config('spark.jars.packages', ','.join(packages)) \
.config('spark.executor.instances','2') \
.config("spark.executor.memory", "2g") \
.config("spark.driver.memory","16g") \
.getOrCreate()_____no_output_____print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)Spark NLP version: 2.4.2
Apache Spark version: 2.4.4
! wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/sarcasm/train-balanced-sarcasm.csv -P /tmp--2020-02-11 19:18:09-- https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/sarcasm/train-balanced-sarcasm.csv
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.237.229
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.237.229|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 255268960 (243M) [text/csv]
Saving to: ‘/tmp/train-balanced-sarcasm.csv’
train-balanced-sarc 100%[===================>] 243,44M 5,01MB/s in 33s
2020-02-11 19:18:43 (7,46 MB/s) - ‘/tmp/train-balanced-sarcasm.csv’ saved [255268960/255268960]
from pyspark.sql import SQLContext
sql = SQLContext(spark)
trainBalancedSarcasmDF = spark.read.option("header", True).option("inferSchema", True).csv("/tmp/train-balanced-sarcasm.csv")
trainBalancedSarcasmDF.printSchema()
# Let's create a temp view (table) for our SQL queries
trainBalancedSarcasmDF.createOrReplaceTempView('data')
sql.sql('SELECT COUNT(*) FROM data').collect()root
|-- label: integer (nullable = true)
|-- comment: string (nullable = true)
|-- author: string (nullable = true)
|-- subreddit: string (nullable = true)
|-- score: string (nullable = true)
|-- ups: string (nullable = true)
|-- downs: string (nullable = true)
|-- date: string (nullable = true)
|-- created_utc: string (nullable = true)
|-- parent_comment: string (nullable = true)
sql.sql('select * from data limit 20').show()+-----+--------------------+------------------+------------------+-----+---+-----+-------+-------------------+--------------------+
|label| comment| author| subreddit|score|ups|downs| date| created_utc| parent_comment|
+-----+--------------------+------------------+------------------+-----+---+-----+-------+-------------------+--------------------+
| 0| NC and NH.| Trumpbart| politics| 2| -1| -1|2016-10|2016-10-16 23:55:23|Yeah, I get that ...|
| 0|You do know west ...| Shbshb906| nba| -4| -1| -1|2016-11|2016-11-01 00:24:10|The blazers and M...|
| 0|They were underdo...| Creepeth| nfl| 3| 3| 0|2016-09|2016-09-22 21:45:37|They're favored t...|
| 0|"This meme isn't ...| icebrotha|BlackPeopleTwitter| -8| -1| -1|2016-10|2016-10-18 21:03:47|deadass don't kil...|
| 0|I could use one o...| cush2push|MaddenUltimateTeam| 6| -1| -1|2016-12|2016-12-30 17:00:13|Yep can confirm I...|
| 0|I don't pay atten...| only7inches| AskReddit| 0| 0| 0|2016-09|2016-09-02 10:35:08|do you find arian...|
| 0|Trick or treating...| only7inches| AskReddit| 1| -1| -1|2016-10|2016-10-23 21:43:03|What's your weird...|
| 0|Blade Mastery+Mas...| P0k3rm4s7| FFBraveExvius| 2| -1| -1|2016-10|2016-10-13 21:13:55|Probably Sephirot...|
| 0|You don't have to...| SoupToPots| pcmasterrace| 1| -1| -1|2016-10|2016-10-27 19:11:06|What to upgrade? ...|
| 0|I would love to s...| chihawks| Lollapalooza| 2| -1| -1|2016-11|2016-11-21 23:39:12|Probably count Ka...|
| 0|I think a signifi...|ThisIsNotKimJongUn| politics| 92| 92| 0|2016-09|2016-09-20 17:53:52|I bet if that mon...|
| 0|Damn I was hoping...| Kvetch__22| baseball| 14| -1| -1|2016-10|2016-10-28 09:07:50|James Shields Wil...|
| 0|They have an agenda.| Readbooks6| exmormon| 4| -1| -1|2016-10|2016-10-15 01:14:03|There's no time t...|
| 0| Great idea!| pieman2005| fantasyfootball| 1| -1| -1|2016-10|2016-10-06 23:27:53|Team Specific Thr...|
| 0|Ayy bb wassup, it...| Jakethejoker| NYGiants| 29| 29| 0|2016-09|2016-09-19 18:46:58|Ill give you a hi...|
| 0| what the fuck| Pishwi| AskReddit| 22| -1| -1|2016-11|2016-11-04 20:10:33|Star Wars, easy. ...|
| 0| noted.| kozmo1313| NewOrleans| 2| -1| -1|2016-12|2016-12-20 21:59:45| You're adorable.|
| 0|because it's what...| kozmo1313| politics| 15| -1| -1|2016-12|2016-12-26 20:10:45|He actually acts ...|
| 0|why you fail me, ...| kozmo1313| HillaryForPrison| 1| 1| 0|2016-09|2016-09-18 13:02:45|Clinton struggles...|
| 0|Pre-Flashpoint Cl...| BreakingGarrick| superman| 2| 2| 0|2016-09|2016-09-16 02:34:04|Is that the Older...|
+-----+--------------------+------------------+------------------+-----+---+-----+-------+-------------------+--------------------+
sql.sql('select label,count(*) as cnt from data group by label order by cnt desc').show()+-----+------+
|label| cnt|
+-----+------+
| 0|505413|
| 1|505413|
+-----+------+
sql.sql('select count(*) from data where comment is null').collect()_____no_output_____df = sql.sql('select label,concat(parent_comment,"\n",comment) as comment from data where comment is not null and parent_comment is not null limit 100000')
print(type(df))
df.printSchema()
df.show()<class 'pyspark.sql.dataframe.DataFrame'>
root
|-- label: integer (nullable = true)
|-- comment: string (nullable = true)
+-----+--------------------+
|label| comment|
+-----+--------------------+
| 0|Yeah, I get that ...|
| 0|The blazers and M...|
| 0|They're favored t...|
| 0|deadass don't kil...|
| 0|Yep can confirm I...|
| 0|do you find arian...|
| 0|What's your weird...|
| 0|Probably Sephirot...|
| 0|What to upgrade? ...|
| 0|Probably count Ka...|
| 0|I bet if that mon...|
| 0|James Shields Wil...|
| 0|There's no time t...|
| 0|Team Specific Thr...|
| 0|Ill give you a hi...|
| 0|Star Wars, easy. ...|
| 0|You're adorable.
...|
| 0|He actually acts ...|
| 0|Clinton struggles...|
| 0|Is that the Older...|
+-----+--------------------+
only showing top 20 rows
from sparknlp.annotator import *
from sparknlp.common import *
from sparknlp.base import *
from pyspark.ml import Pipeline
document_assembler = DocumentAssembler() \
.setInputCol("comment") \
.setOutputCol("document")
sentence_detector = SentenceDetector() \
.setInputCols(["document"]) \
.setOutputCol("sentence") \
.setUseAbbreviations(True)
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
stemmer = Stemmer() \
.setInputCols(["token"]) \
.setOutputCol("stem")
normalizer = Normalizer() \
.setInputCols(["stem"]) \
.setOutputCol("normalized")
finisher = Finisher() \
.setInputCols(["normalized"]) \
.setOutputCols(["ntokens"]) \
.setOutputAsArray(True) \
.setCleanAnnotations(True)
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, stemmer, normalizer, finisher])
nlp_model = nlp_pipeline.fit(df)
processed = nlp_model.transform(df).persist()
processed.count()
processed.show()+-----+--------------------+--------------------+
|label| comment| ntokens|
+-----+--------------------+--------------------+
| 0|Yeah, I get that ...|[yeah, i, get, th...|
| 0|The blazers and M...|[the, blazer, and...|
| 0|They're favored t...|[theyr, favor, to...|
| 0|deadass don't kil...|[deadass, dont, k...|
| 0|Yep can confirm I...|[yep, can, confir...|
| 0|do you find arian...|[do, you, find, a...|
| 0|What's your weird...|[what, your, weir...|
| 0|Probably Sephirot...|[probabl, sephiro...|
| 0|What to upgrade? ...|[what, to, upgrad...|
| 0|Probably count Ka...|[probabl, count, ...|
| 0|I bet if that mon...|[i, bet, if, that...|
| 0|James Shields Wil...|[jame, shield, wi...|
| 0|There's no time t...|[there, no, time,...|
| 0|Team Specific Thr...|[team, specif, th...|
| 0|Ill give you a hi...|[ill, give, you, ...|
| 0|Star Wars, easy. ...|[star, war, easi,...|
| 0|You're adorable.
...| [your, ador, note]|
| 0|He actually acts ...|[he, actual, act,...|
| 0|Clinton struggles...|[clinton, struggl...|
| 0|Is that the Older...|[i, that, the, ol...|
+-----+--------------------+--------------------+
only showing top 20 rows
train, test = processed.randomSplit(weights=[0.7, 0.3], seed=123)
print(train.count())
print(test.count())70136
29864
from pyspark.ml import feature as spark_ft
stopWords = spark_ft.StopWordsRemover.loadDefaultStopWords('english')
sw_remover = spark_ft.StopWordsRemover(inputCol='ntokens', outputCol='clean_tokens', stopWords=stopWords)
tf = spark_ft.CountVectorizer(vocabSize=500, inputCol='clean_tokens', outputCol='tf')
idf = spark_ft.IDF(minDocFreq=5, inputCol='tf', outputCol='idf')
feature_pipeline = Pipeline(stages=[sw_remover, tf, idf])
feature_model = feature_pipeline.fit(train)
train_featurized = feature_model.transform(train).persist()
train_featurized.count()
train_featurized.show()+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
|label| comment| ntokens| clean_tokens| tf| idf|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
| 0| !
Goes| [goe]| [goe]| (500,[375],[1.0])|(500,[375],[4.866...|
| 0|!completed
!compl...| [complet, complet]| [complet, complet]| (500,[227],[2.0])|(500,[227],[8.875...|
| 0|""" ""Very Right ...|[veri, right, win...|[veri, right, win...|(500,[1,7,31,77,9...|(500,[1,7,31,77,9...|
| 0|""" Perhaps you n...|[perhap, you, ne,...|[perhap, ne, stro...| (500,[34],[1.0])|(500,[34],[3.1336...|
| 0|""" This covering...|[thi, cover, not,...|[thi, cover, onli...|(500,[0,6,14,18,2...|(500,[0,6,14,18,2...|
| 0|"""*Kirk
I am sin...|[kirk, i, am, sin...|[kirk, singl, gue...|(500,[31,168,348]...|(500,[31,168,348]...|
| 0|"""*looks at hand...|[look, at, hand, ...|[look, hand, doe,...|(500,[22,58,211,2...|(500,[22,58,211,2...|
| 0|"""+100"" indicat...|[+, indic, come, ...|[+, indic, come, ...|(500,[5,9,18,57,9...|(500,[5,9,18,57,9...|
| 0|""".$witty_remark...|[wittyremark, shi...|[wittyremark, shi...| (500,[],[])| (500,[],[])|
| 0|"""... and Fancy ...|[and, fanci, feas...|[fanci, feast, so...| (500,[1],[1.0])|(500,[1],[1.87740...|
| 0|"""...and then th...|[and, then, the, ...|[entir, food, cou...|(500,[14,31,64,19...|(500,[14,31,64,19...|
| 0|"""...newtons."" ...|[newton, which, i...|[newton, dont, ge...|(500,[0,5,6,208],...|(500,[0,5,6,208],...|
| 0|"""100 level and ...|[level, and, k, e...|[level, k, easfc,...|(500,[0,1,27,56,8...|(500,[0,1,27,56,8...|
| 0|"""8 operators.""...|[oper, well, i, m...|[oper, well, mean...|(500,[5,24,51,66,...|(500,[5,24,51,66,...|
| 0|"""@wikileaks - A...|[wikileak, americ...|[wikileak, americ...| (500,[300],[1.0])|(500,[300],[4.703...|
| 0|"""A Cyborg... Ni...|[a, cyborg, ninja...|[cyborg, ninja, n...| (500,[],[])| (500,[],[])|
| 0|"""A Victoria's S...|[a, victoria, sec...|[victoria, secret...|(500,[2,139,173,2...|(500,[2,139,173,2...|
| 0|"""A basic aspect...|[a, basic, aspect...|[basic, aspect, f...|(500,[0,1,2,3,10,...|(500,[0,1,2,3,10,...|
| 0|"""A sense of pur...|[a, sens, of, pur...|[sens, purpos, sh...|(500,[131,133,326...|(500,[131,133,326...|
| 0|"""Agreed. I thin...|[agr, i, think, w...|[agr, think, issu...|(500,[0,1,7,9,29,...|(500,[0,1,7,9,29,...|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+
only showing top 20 rows
train_featurized.groupBy("label").count().show()
train_featurized.printSchema()+-----+-----+
|label|count|
+-----+-----+
| 0|40466|
| 1|29670|
+-----+-----+
root
|-- label: integer (nullable = true)
|-- comment: string (nullable = true)
|-- ntokens: array (nullable = true)
| |-- element: string (containsNull = true)
|-- clean_tokens: array (nullable = true)
| |-- element: string (containsNull = true)
|-- tf: vector (nullable = true)
|-- idf: vector (nullable = true)
from pyspark.ml import classification as spark_cls
rf = spark_cls. RandomForestClassifier(labelCol="label", featuresCol="idf", numTrees=100)
model = rf.fit(train_featurized)_____no_output_____test_featurized = feature_model.transform(test)
preds = model.transform(test_featurized)
preds.show()+-----+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+----------+
|label| comment| ntokens| clean_tokens| tf| idf| rawPrediction| probability|prediction|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+----------+
| 0|!RemindMe 1 week
...|[remindm, week, r...|[remindm, week, r...|(500,[56,132],[1....|(500,[56,132],[3....|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|!Remindme 2 weeks...|[remindm, week, r...|[remindm, week, r...| (500,[132],[2.0])|(500,[132],[8.254...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|!SH!TPOST!: All t...|[shtpost, all, th...|[shtpost, poor, u...|(500,[286,476],[1...|(500,[286,476],[4...|[58.6927668196978...|[0.58692766819697...| 0.0|
| 0|"""**FUCK** Cloud...|[fuck, cloud, lin...|[fuck, cloud, lin...|(500,[30,35],[1.0...|(500,[30,35],[3.0...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""*Komrad
"*""Th...|[komrad, those, w...|[komrad, prousa, ...| (500,[308],[1.0])|(500,[308],[4.833...|[57.9747819444301...|[0.57974781944430...| 0.0|
| 0|"""... thanks to ...|[thank, to, a, pa...|[thank, parad, tr...|(500,[18,31,81,14...|(500,[18,31,81,14...|[57.8971892730668...|[0.57897189273066...| 0.0|
| 0|"""...FUCK IS THA...|[fuck, i, tha, de...|[fuck, tha, death...|(500,[3,11,29,30,...|(500,[3,11,29,30,...|[58.8662918135306...|[0.58866291813530...| 0.0|
| 0|"""...I'm Going T...|[im, go, to, end,...|[im, go, end, dre...|(500,[8,11,119],[...|(500,[8,11,119],[...|[59.1473600893163...|[0.59147360089316...| 0.0|
| 0|"""A SMALL FUCKIN...|[a, small, fuck, ...|[small, fuck, hol...|(500,[30,31,57,42...|(500,[30,31,57,42...|[57.9152715153977...|[0.57915271515397...| 0.0|
| 0|"""A new brick wa...|[a, new, brick, w...|[new, brick, wall...|(500,[3,32,43,124...|(500,[3,32,43,124...|[58.7612174551342...|[0.58761217455134...| 0.0|
| 0|"""Add dabbing to...|[add, dab, to, mi...|[add, dab, minecr...| (500,[358],[1.0])|(500,[358],[4.866...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""All according ...|[all, accord, to,...|[accord, keikaku,...|(500,[51,350],[1....|(500,[51,350],[3....|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""An unmet playe...|[an, unmet, playe...|[unmet, player, h...|(500,[0,1,7,8,14,...|(500,[0,1,7,8,14,...|[58.3304632923714...|[0.58330463292371...| 0.0|
| 0|"""And bacon. Lot...|[and, bacon, lot,...|[bacon, lot, lot,...|(500,[6,74,82,483...|(500,[6,74,82,483...|[58.9443340184272...|[0.58944334018427...| 0.0|
| 0|"""And later... S...|[and, later, some...|[later, someth, f...|(500,[54,73,120,1...|(500,[54,73,120,1...|[58.9578427385407...|[0.58957842738540...| 0.0|
| 0|"""And please tel...|[and, pleas, tell...|[pleas, tell, mom...|(500,[0,43,94,116...|(500,[0,43,94,116...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""Angry Birds?""...|[angri, bird, u, ...|[angri, bird, u, ...|(500,[12,43,44,28...|(500,[12,43,44,28...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""Any objections...|[ani, object, fuc...|[ani, object, fuc...|(500,[1,30,33,34,...|(500,[1,30,33,34,...|[58.6845929081117...|[0.58684592908111...| 0.0|
| 0|"""Anyway here's ...|[anywai, here, st...|[anywai, stairwai...| (500,[361],[1.0])|(500,[361],[4.817...|[58.8715006890861...|[0.58871500689086...| 0.0|
| 0|"""Aren't you a C...|[arent, you, a, c...|[arent, christian...|(500,[123,207],[1...|(500,[123,207],[3...|[59.0632707665339...|[0.59063270766533...| 0.0|
+-----+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+----------+
only showing top 20 rows
pred_df = preds.select('comment', 'label', 'prediction').toPandas()_____no_output_____pred_df.head()_____no_output_____import pandas as pd
from sklearn import metrics as skmetrics
pd.DataFrame(
data=skmetrics.confusion_matrix(pred_df['label'], pred_df['prediction']),
columns=['pred ' + l for l in ['0','1']],
index=['true ' + l for l in ['0','1']]
)_____no_output_____print(skmetrics.classification_report(pred_df['label'], pred_df['prediction'],
target_names=['0','1'])) precision recall f1-score support
0 0.59 0.99 0.74 17224
1 0.83 0.04 0.08 12640
accuracy 0.59 29864
macro avg 0.71 0.52 0.41 29864
weighted avg 0.69 0.59 0.46 29864
spark.stop()_____no_output_____
</code>
| {
"repository": "fcivardi/spark-nlp-workshop",
"path": "tutorials/old_generation_notebooks/colab/6- Sarcasm Classifiers (TF-IDF).ipynb",
"matched_keywords": [
"STAR"
],
"stars": 687,
"size": 40113,
"hexsha": "d02edbaeaac510d581d7c5092d9f32d163498dd6",
"max_line_length": 418,
"avg_line_length": 35.5927240461,
"alphanum_fraction": 0.466881061
} |
# Notebook from jpzhangvincent/MobileAppRecommendSys
Path: notebooks/Correlation between app size and app quality.ipynb
<code>
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline_____no_output_____app = pd.read_pickle('/Users/krystal/Desktop/app_cleaned.pickle')
app.head()_____no_output_____app = app.drop_duplicates()_____no_output_____for i in range(0,len(app)):
unit = app['size'][i][-2:]
if unit == 'GB':
app['size'][i] = float(app['size'][i][:-3])*1000
else:
app['size'][i] = float(app['size'][i][:-3])
_____no_output_____
</code>
<p>Convert app sizes to a common scale: entries given in GB are multiplied by 1000 so that all sizes end up in the same unit.</p>_____no_output_____
<code>
rating_df = app[["name","size","overall_rating", "current_rating", 'num_current_rating', "num_overall_rating"]].dropna()_____no_output_____rating_cleaned = {'1 star':1, "1 and a half stars": 1.5, '2 stars': 2, '2 and a half stars':2.5, "3 stars":3, "3 and a half stars":3.5, "4 stars": 4,
'4 and a half stars': 4.5, "5 stars": 5}_____no_output_____rating_df.overall_rating = rating_df.overall_rating.replace(rating_cleaned)_____no_output_____rating_df['weighted_rating'] = np.divide(rating_df['num_current_rating'],rating_df['num_overall_rating'])*rating_df['current_rating']+(1-np.divide(rating_df['num_current_rating'],rating_df['num_overall_rating']))*rating_df['overall_rating']_____no_output_____
</code>
<p>Add variable weighted rating as app's quality into data set.</p>_____no_output_____
<code>
plt.scatter(rating_df['size'], rating_df['weighted_rating'])
plt.xlabel('Size of app')
plt.ylabel('Quality of app')
plt.title('Relationship between app size and quality')
plt.show()_____no_output_____rating_df_2 = rating_df[rating_df['size'] <= 500]_____no_output_____plt.scatter(rating_df_2['size'], rating_df_2['weighted_rating'])
plt.xlabel('Size of app')
plt.ylabel('Quality of app')
plt.title('Relationship between app size (less than 500) and quality')
plt.show()_____no_output_____
</code>
<p>I plot scatter plot for app size and overall rating of app. The second plot only contains app with size less than 500KB. I find that there is a positive association between app size and app overall rating. Further analysis is still needed.</p>_____no_output_____
| {
"repository": "jpzhangvincent/MobileAppRecommendSys",
"path": "notebooks/Correlation between app size and app quality.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 100885,
"hexsha": "d031be7ee86b55536ca4064c94edc20bf4b5d5b0",
"max_line_length": 52000,
"avg_line_length": 216.0278372591,
"alphanum_fraction": 0.8709025128
} |
# Notebook from saudijack/unfpyboot
Path: Day_00/02_Strings_and_FileIO/00 Strings in Python.ipynb
# Strings in Python_____no_output_____## What is a string?_____no_output_____A "string" is a series of characters of arbitrary length.
Strings are immutable - they cannot be changed once created. When you modify a string, you automatically make a copy and modify the copy._____no_output_____
<code>
s1 = 'Godzilla'
print s1, s1.upper(), s1_____no_output_____
</code>
## String literals_____no_output_____A "literal" is essentially a string constant, already spelled out for you. Python uses either on output, but that's just for formatting simplicity._____no_output_____
<code>
"Godzilla"_____no_output_____
</code>
### Single and double quotes_____no_output_____Generally, a string literal can be in single ('), double ("), or triple (''') quotes. Single and double quotes are equivalent - use whichever you prefer (but be consistent). If you need to have a single or double quote in your literal, surround your literal with the other type, or use the backslash to escape the quote._____no_output_____
<code>
"Godzilla's a kaiju."_____no_output_____'Godzilla\'s a kaiju.'_____no_output_____'We call him... "Godzilla".'_____no_output_____
</code>
### Triple quotes (''')_____no_output_____Triple quotes are a special form of quoting used for documenting your Python files (docstrings). We won't discuss that type here._____no_output_____### Raw strings_____no_output_____Raw strings don't use any escape character interpretation. Use them when you have a complicated string that you don't want to clutter with lots of backslashes. Python puts them in for you._____no_output_____
<code>
print('This is a\ncomplicated string with newline escapes in it.')_____no_output_____print(r'This is a\ncomplicated string with newline escapes in it.')_____no_output_____
</code>
## Strings and numbers_____no_output_____
<code>
x=int('122', 3)
x+1_____no_output_____
</code>
### String objects_____no_output_____String objects are just the string variables you create in Python._____no_output_____
<code>
kaiju = 'Godzilla'
print(kaiju)_____no_output_____kaiju_____no_output_____
</code>
Note the print() call shows no quotes, while the simple variable name did. That is a Python output convention. Just entering the name will call the repr() method, which displays the value of the argument as Python would see it when it reads it in, not as the user wants it._____no_output_____
<code>
repr(kaiju)_____no_output_____print(repr(kaiju))_____no_output_____
</code>
### String operators_____no_output_____When you read text from a file, it's just that - text. No matter what the data represents, it's still text. To use it as a number, you have to explicitly convert it to a number._____no_output_____
<code>
one = 1
two = '2'
print one, two, one + two_____no_output_____one = 1
two = int('2')
print one, two, one + two_____no_output_____num1 = 1.1
num2 = float('2.2')
print num1, num2, num1 + num2_____no_output_____
</code>
You can also do this with hexadecimal and octal numbers, or any other base, for that matter._____no_output_____
<code>
print int('FF', 16)
print int('0xff', 16)
print int('777', 8)
print int('0777', 8)
print int('222', 7)
print int('110111001', 2)_____no_output_____
</code>
If the conversion cannot be done, an exception is thrown._____no_output_____
<code>
print int('0xGG', 16)_____no_output_____
</code>
#### Concatenation_____no_output_____
<code>
kaiju1 = 'Godzilla'
kaiju2 = 'Mothra'
kaiju1 + ' versus ' + kaiju2_____no_output_____
</code>
#### Repetition_____no_output_____
<code>
'Run away! ' * 3_____no_output_____
</code>
### String keywords_____no_output_____#### in()_____no_output_____NOTE: This _particular_ statement is false regardless of how the statement is evaluated! :^)_____no_output_____
<code>
'Godzilla' in 'Godzilla vs Gamera'_____no_output_____
</code>
### String functions_____no_output_____#### len()_____no_output_____
<code>
len(kaiju)_____no_output_____
</code>
### String methods_____no_output_____Remember - methods are functions attached to objects, accessed via the 'dot' notation._____no_output_____#### Basic formatting and manipulation_____no_output_____##### capitalize()/lower()/upper()/swapcase()/title()_____no_output_____
<code>
kaiju.capitalize()_____no_output_____kaiju.lower()_____no_output_____kaiju.upper()_____no_output_____kaiju.swapcase()_____no_output_____'godzilla, king of the monsters'.title()_____no_output_____
</code>
##### center()/ljust()/rjust()_____no_output_____
<code>
kaiju.center(20, '*')_____no_output_____kaiju.ljust(20, '*')_____no_output_____kaiju.rjust(20, '*')_____no_output_____
</code>
##### expandtabs()_____no_output_____
<code>
tabbed_kaiju = '\tGodzilla'
print('[' + tabbed_kaiju + ']')_____no_output_____print('[' + tabbed_kaiju.expandtabs(16) + ']')_____no_output_____
</code>
##### join()_____no_output_____
<code>
' vs '.join(['Godzilla', 'Hedorah'])_____no_output_____','.join(['Godzilla', 'Mothra', 'King Ghidorah'])_____no_output_____
</code>
##### strip()/lstrip()/rstrip()_____no_output_____
<code>
' Godzilla '.strip()_____no_output_____'xxxGodzillayyy'.strip('xy')_____no_output_____' Godzilla '.lstrip()_____no_output_____' Godzilla '.rstrip()_____no_output_____
</code>
##### partition()/rpartition()_____no_output_____
<code>
battle = 'Godzilla x Gigan'
battle.partition(' x ')_____no_output_____battle = 'Godzilla and Jet Jaguar vs. Gigan and Megalon'
battle.partition(' vs. ')_____no_output_____battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.partition('vs')_____no_output_____battle = 'Godzilla vs Megalon vs Jet Jaguar'
battle.rpartition('vs')_____no_output_____
</code>
##### replace()_____no_output_____
<code>
battle = 'Godzilla vs Mothra'
battle.replace('Mothra', 'Anguiras')_____no_output_____battle = 'Godzilla vs a monster and another monster'
battle.replace('monster', 'kaiju', 2)_____no_output_____battle = 'Godzilla vs a monster and another monster and yet another monster'
battle.replace('monster', 'kaiju', 2)_____no_output_____
</code>
##### split()/rsplit()_____no_output_____
<code>
battle = 'Godzilla vs King Ghidorah vs Mothra'
battle.split(' vs ')_____no_output_____kaijus = 'Godzilla,Mothra,King Ghidorah'
kaijus.split(',')_____no_output_____kaijus = 'Godzilla Mothra King Ghidorah'
kaijus.split()_____no_output_____kaijus = 'Godzilla,Mothra,King Ghidorah,Megalon'
kaijus.rsplit(',', 2)_____no_output_____
</code>
##### splitlines()_____no_output_____
<code>
kaijus_in_lines = 'Godzilla\nMothra\nKing Ghidorah\nEbirah'
print(kaijus_in_lines)_____no_output_____kaijus_in_lines.splitlines()_____no_output_____kaijus_in_lines.splitlines(True)_____no_output_____
</code>
##### zfill()_____no_output_____
<code>
age_of_Godzilla = 60
age_string = str(age_of_Godzilla)
print(age_string, age_string.zfill(5))_____no_output_____
</code>
#### String information_____no_output_____##### isXXX()_____no_output_____
<code>
print('Godzilla'.isalnum())
print('*Godzilla*'.isalnum())
print('Godzilla123'.isalnum())_____no_output_____print('Godzilla'.isalpha())
print('Godzilla123'.isalpha())_____no_output_____print('Godzilla'.isdigit())
print('60'.isdigit())_____no_output_____print('SpaceGodzilla'.isspace())
print(' '.isspace())_____no_output_____print('Godzilla'.islower())
print('godzilla'.islower())_____no_output_____print('Godzilla'.isupper())
print('GODZILLA'.isupper())_____no_output_____print('Godzilla vs Mothra'.istitle())
print('Godzilla X Mothra'.istitle())_____no_output_____
</code>
##### count()_____no_output_____
<code>
monsters = 'Godzilla and Space Godzilla and MechaGodzilla'
print 'There are ', monsters.count('Godzilla'), ' Godzillas.'
print 'There are ', monsters.count('Godzilla', len('Godzilla')), ' pseudo-Godzillas.'_____no_output_____
</code>
##### startswith()/endswith()_____no_output_____
<code>
king_kaiju = 'Godzilla'
print king_kaiju.startswith('God')
print king_kaiju.endswith('lla')
print king_kaiju.startswith('G')
print king_kaiju.endswith('amera')_____no_output_____
</code>
##### find()/index()/rfind()/rindex()_____no_output_____
<code>
kaiju_string = 'Godzilla,Gamera,Gorgo,Space Godzilla'
print 'The first Godz is at position', kaiju_string.find('Godz')
print 'The second Godz is at position', kaiju_string.find('Godz', len('Godz'))_____no_output_____kaiju_string.index('Minilla')_____no_output_____kaiju_string.rindex('Godzilla')_____no_output_____
</code>
#### Advanced features_____no_output_____##### decode()/encode()/translate()_____no_output_____Used to convert strings to/from Unicode and other systems. Rarely used in science code._____no_output_____##### String formatting_____no_output_____Similar to formatting in C, FORTRAN, etc.. There is a _lot_ more to this than I am showing here._____no_output_____
<code>
kaiju = 'Godzilla'
age = 60
print '%s is %d years old.' % (kaiju, age)_____no_output_____
</code>
## The _string_ module_____no_output_____The _string_ module is the Python equivalent of "junk DNA" in living organisms. It's been around since the beginning, but many of its functions have been superseded by evolution. But some ancient code still relies on it, so they leave the old parts in....
For modern code, the _string_ module does have some useful constants and functions._____no_output_____
<code>
import string_____no_output_____print string.ascii_letters
print string.ascii_lowercase
print string.ascii_uppercase_____no_output_____print string.digits
print string.hexdigits
print string.octdigits_____no_output_____print string.letters
print string.lowercase
print string.uppercase_____no_output_____print string.printable
print string.punctuation
print string.whitespace_____no_output_____
</code>
The _string_ module also provides the _Formatter_ class, which can be useful for sophisticated text formatting._____no_output_____## Regular Expressions_____no_output_____### What is a regular expression?_____no_output_____Regular expressions ('regexps') are essentially a mini-language for describing string operations. Everything shown above with string methods and operators can be done with regular expressions. Most of the time, the regular expression version is more concise. But not always more readable....
To use regular expressions, you have to import the 're' module._____no_output_____
<code>
import re_____no_output_____
</code>
### A very short, whirlwind tour of regular expressions_____no_output_____#### Scanning_____no_output_____
<code>
kaiju_truth = 'Godzilla is the King of the Monsters. Ebirah is also a monster, but looks like a giant lobster.'
re.findall('Godz', kaiju_truth)_____no_output_____print re.findall('(^.+) is the King', kaiju_truth)_____no_output_____
</code>
For simple searches like this, using in() is typically easier.
Regexps are by default case-sensitive._____no_output_____
<code>
print re.findall('\. (.+) is also', kaiju_truth)_____no_output_____print re.findall('(.+) is also a (.+)', kaiju_truth)[0]
print re.findall('\. (.+) is also a (.+),', kaiju_truth)[0]_____no_output_____
</code>
#### Changing_____no_output_____
<code>
some_kaiju = 'Godzilla, Space Godzilla, Mechagodzilla'
print re.sub('Godzilla', 'Gamera', some_kaiju)
print re.sub('(?i)Godzilla', 'Gamera', some_kaiju)_____no_output_____
</code>
#### And so much more..._____no_output_____You could spend a whole day (or more) just learning about regular expressions. But they are incredibly useful and powerful, especially in the all-too-frequent drudgery of munging files from one format to another.
Regular expressions can be internally compiled for speed._____no_output_____
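To illustrate that last point, a pattern can be compiled once and the compiled object reused (a small sketch using the `some_kaiju` string defined above):
<code>
godzilla_pattern = re.compile(r'(?i)godzilla')     # compile once, case-insensitive
print(godzilla_pattern.findall(some_kaiju))        # reuse the compiled pattern
print(godzilla_pattern.sub('Gamera', some_kaiju))
</code>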
| {
"repository": "saudijack/unfpyboot",
"path": "Day_00/02_Strings_and_FileIO/00 Strings in Python.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 26016,
"hexsha": "d031d5e9617d10a642344ada83ed455558bcbf3b",
"max_line_length": 327,
"avg_line_length": 19.1575846834,
"alphanum_fraction": 0.5187961255
} |
# Notebook from oercompbiomed/CBM101
Path: C_Data_resources/2_Open_datasets.ipynb
# Acquiring Data from open repositories
A crucial step in the work of a computational biologist is not only to analyse data, but also to acquire datasets to analyse, as well as toy datasets for testing out computational methods and algorithms. The internet is full of such open datasets. Sometimes you have to sign up and create a user account to get access, especially for medical data. This can be time consuming, so here we will deal with easy-access resources, mostly of modest size. Multiple Python libraries provide a `datasets` module which makes fetching online data extremely seamless, with little need for preprocessing.
#### Goal of the notebook
Here you will get familiar with some ways to fetch datasets from online repositories. We do some data exploration just for illustration, but the methods will be covered later.
# Useful resources and links
When playing around with algorithms, it can be practical to use relatively small datasets. A good example is the `datasets` submodule of `scikit-learn`. `Nilearn` (a library for neuroimaging) also provides a collection of neuroimaging datasets. Many datasets can also be acquired through the competition website [Kaggle](https://www.kaggle.com), which describes how to access the data.
### Links
- [OpenML](https://www.openml.org/search?type=data)
- [Nilearn datasets](https://nilearn.github.io/modules/reference.html#module-nilearn.datasets)
- [Sklearn datasets](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets)
- [Kaggle](https://www.kaggle.com/datasets)
- [MEDNIST]
- [**Awesomedata**](https://github.com/awesomedata/awesome-public-datasets)
- We strongly recommend to check out the Awesomedata lists of public datasets, covering topics such as [biology/medicine](https://github.com/awesomedata/awesome-public-datasets#biology) and [neuroscience](https://github.com/awesomedata/awesome-public-datasets#neuroscience)
- [Papers with code](https://paperswithcode.com)
- [SNAP](https://snap.stanford.edu/data/)
- Stanford Large Network Dataset Collection
- [Open Graph Benchmark (OGB)](https://github.com/snap-stanford/ogb)
- Network datasets
- [Open Neuro](https://openneuro.org/)
- [Open fMRI](https://openfmri.org/dataset/)_____no_output_____
<code>
# import basic libraries
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt_____no_output_____
</code>
We start with scikit-learn's datasets for testing out ML algorithms. Visit [here](https://scikit-learn.org/stable/modules/classes.html?highlight=datasets#module-sklearn.datasets) for an overview of the datasets._____no_output_____
<code>
from sklearn.datasets import fetch_olivetti_faces, fetch_20newsgroups, load_breast_cancer, load_diabetes, load_digits, load_irisC:\Users\Peder\Anaconda3\envs\cbm101\lib\importlib\_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
return f(*args, **kwds)
</code>
Load the MNIST dataset (images of hand written digits)_____no_output_____
<code>
X,y = load_digits(return_X_y=True)_____no_output_____y.shape_____no_output_____X.shape #1797 images, 64 pixels per image_____no_output_____
</code>
#### Exercise 1. Make a function `plot` taking an argument (k) to visualize the k'th sample.
It is currently flattened, so you will need to reshape it. Use `plt.imshow` for plotting._____no_output_____
<code>
# %load solutions/ex2_1.py
def plot(k):
plt.imshow(X[k].reshape(8,8), cmap='gray')
plt.title(f"Number = {y[k]}")
plt.show()_____no_output_____plot(15); plot(450)_____no_output_____faces = fetch_olivetti_faces()_____no_output_____
</code>
#### Exercise 2. Inspect the dataset. How many classes are there? How many samples per class? Also, plot some examples. What do the classes represent? _____no_output_____
<code>
# %load solutions/ex2_2.py
# example solution.
# You are not expected to make a nice plotting function,
# you can simply call plt.imshow a number of times and observe
print(faces.DESCR) # this shows there are 40 classes, 10 samples per class
print(faces.target) #the targets i.e. classes
print(np.unique(faces.target).shape) # another way to see n_classes
X = faces.images
y = faces.target
fig = plt.figure(figsize=(16,5))
idxs = [0,1,2, 11,12,13, 40,41]
for i,k in enumerate(idxs):
ax=fig.add_subplot(2,4,i+1)
ax.imshow(X[k])
ax.set_title(f"target={y[k]}")
# looking at a few plots shows that each target is a single person... _olivetti_faces_dataset:
The Olivetti faces dataset
--------------------------
`This dataset contains a set of face images`_ taken between April 1992 and
April 1994 at AT&T Laboratories Cambridge. The
:func:`sklearn.datasets.fetch_olivetti_faces` function is the data
fetching / caching function that downloads the data
archive from AT&T.
.. _This dataset contains a set of face images: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
As described on the original website:
There are ten different images of each of 40 distinct subjects. For some
subjects, the images were taken at different times, varying the lighting,
facial expressions (open / closed eyes, smiling / not smiling) and facial
details (glasses / no glasses). All the images were taken against a dark
homogeneous background with the subjects in an upright, frontal position
(with tolerance for some side movement).
**Data Set Characteristics:**
================= =====================
Classes 40
Samples total 400
Dimensionality 4096
Features real, between 0 and 1
================= =====================
The image is quantized to 256 grey levels and stored as unsigned 8-bit
integers; the loader will convert these to floating point values on the
interval [0, 1], which are easier to work with for many algorithms.
The "target" for this database is an integer from 0 to 39 indicating the
identity of the person pictured; however, with only 10 examples per class, this
relatively small dataset is more interesting from an unsupervised or
semi-supervised perspective.
The original dataset consisted of 92 x 112, while the version available here
consists of 64x64 images.
When using these images, please give credit to AT&T Laboratories Cambridge.
[ 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2
2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4
4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7
7 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 8 9 9 9 9 9 9
9 9 9 9 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11
12 12 12 12 12 12 12 12 12 12 13 13 13 13 13 13 13 13 13 13 14 14 14 14
14 14 14 14 14 14 15 15 15 15 15 15 15 15 15 15 16 16 16 16 16 16 16 16
16 16 17 17 17 17 17 17 17 17 17 17 18 18 18 18 18 18 18 18 18 18 19 19
19 19 19 19 19 19 19 19 20 20 20 20 20 20 20 20 20 20 21 21 21 21 21 21
21 21 21 21 22 22 22 22 22 22 22 22 22 22 23 23 23 23 23 23 23 23 23 23
24 24 24 24 24 24 24 24 24 24 25 25 25 25 25 25 25 25 25 25 26 26 26 26
26 26 26 26 26 26 27 27 27 27 27 27 27 27 27 27 28 28 28 28 28 28 28 28
28 28 29 29 29 29 29 29 29 29 29 29 30 30 30 30 30 30 30 30 30 30 31 31
31 31 31 31 31 31 31 31 32 32 32 32 32 32 32 32 32 32 33 33 33 33 33 33
33 33 33 33 34 34 34 34 34 34 34 34 34 34 35 35 35 35 35 35 35 35 35 35
36 36 36 36 36 36 36 36 36 36 37 37 37 37 37 37 37 37 37 37 38 38 38 38
38 38 38 38 38 38 39 39 39 39 39 39 39 39 39 39]
(40,)
</code>
Once you have made yourself familiar with the dataset, you can do some data exploration with unsupervised methods, like below. The next few lines of code are simply for illustration; don't worry about the code (we will cover unsupervised methods in submodule F)._____no_output_____
<code>
from sklearn.decomposition import randomized_svd_____no_output_____X = faces.data_____no_output_____n_dim = 3
u, s, v = randomized_svd(X, n_dim)_____no_output_____
</code>
Now we have factorized the images into their constituent parts. The code below displays the various components isolated one by one._____no_output_____
<code>
def show_ims(ims):
fig = plt.figure(figsize=(16,10))
idxs = [0,1,2, 11,12,13, 40,41,42, 101,101,103]
for i,k in enumerate(idxs):
ax=fig.add_subplot(3,4,i+1)
ax.imshow(ims[k])
ax.set_title(f"target={y[k]}")_____no_output_____for i in range(n_dim):
my_s = np.zeros(s.shape[0])
my_s[i] = s[i]
recon = [email protected](my_s)@v
recon = recon.reshape(400,64,64)
show_ims(recon)_____no_output_____
</code>
Are you able to see what the components represent? It at least looks like the second component signifies the lighting (the light direction), while the third highlights eyebrows and the shape of the chin._____no_output_____
<code>
from sklearn.manifold import TSNE_____no_output_____tsne = TSNE(init='pca', random_state=0)
trans = tsne.fit_transform(X)_____no_output_____m = 8*10 # choose 4 people
plt.figure(figsize=(16,10))
xs, ys = trans[:m,0], trans[:m,1]
plt.scatter(xs, ys, c=y[:m], cmap='rainbow')
for i,v in enumerate(zip(xs,ys, y[:m])):
xx,yy,s = v
#plt.text(xx,yy,s) #class
plt.text(xx,yy,i) #index_____no_output_____
</code>
Many people seem to have multiple subclusters. What is the difference between those clusters? (e.g. 68,62,65 versus the other 60's)_____no_output_____
<code>
ims = faces.images
idxs = [68,62,65,66,60,64,63]
#idxs = [9,4,1, 5,3]
for k in idxs:
plt.imshow(ims[k], cmap='gray')
plt.show()_____no_output_____def show(im):
return plt.imshow(im, cmap='gray')_____no_output_____import pandas as pd
df= pd.read_csv('data/archive/covid_impact_on_airport_traffic.csv')_____no_output_____df.shape_____no_output_____df.describe()_____no_output_____df.head()_____no_output_____df.Country.unique()_____no_output_____df.ISO_3166_2.unique()_____no_output_____df.AggregationMethod.unique()_____no_output_____
</code>
Here we will look at [OpenML](https://www.openml.org/) - a repository of open datasets that are free to use for exploring data and testing methods.
### Fetching an OpenML dataset
We need to pass in an ID to access a dataset, as follows:_____no_output_____
<code>
from sklearn.datasets import fetch_openml_____no_output_____
</code>
OpenML contains all sorts of datatypes. By browsing the website we found a electroencephalography (EEG) dataset to explore: _____no_output_____
<code>
data_id = 1471 #this was found by browsing OpenML
dataset = fetch_openml(data_id=data_id, as_frame=True)_____no_output_____dir(dataset)_____no_output_____dataset.url_____no_output_____type(dataset)_____no_output_____print(dataset.DESCR)**Author**: Oliver Roesler
**Source**: [UCI](https://archive.ics.uci.edu/ml/datasets/EEG+Eye+State), Baden-Wuerttemberg, Cooperative State University (DHBW), Stuttgart, Germany
**Please cite**: [UCI](https://archive.ics.uci.edu/ml/citation_policy.html)
All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analyzing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.
The features correspond to 14 EEG measurements from the headset, originally labeled AF3, F7, F3, FC5, T7, P, O1, O2, P8, T8, FC6, F4, F8, AF4, in that order.
Downloaded from openml.org.
original_names = ['AF3',
'F7',
'F3',
'FC5',
'T7',
'P',
'O1',
'O2',
'P8',
'T8',
'FC6',
'F4',
'F8',
'AF4']_____no_output_____dataset.feature_names_____no_output_____df = dataset.frame_____no_output_____df.head()_____no_output_____df.shape[0] / 117
# 128 frames per second_____no_output_____df = dataset.frame
y = df.Class
#df.drop(columns='Class', inplace=True)_____no_output_____df.dtypes_____no_output_____#def summary(s):
# print(s.max(), s.min(), s.mean(), s.std())
# print()
#
#for col in df.columns[:-1]:
# column = df.loc[:,col]
# summary(column)_____no_output_____df.plot()_____no_output_____
</code>
From the plot we can quickly identify a bunch of huge outliers, making the plot look completely useless. We assume these are artifacts, and remove them._____no_output_____
<code>
df2 = df.iloc[:,:-1].clip_upper(6000)
df2.plot()C:\Users\Peder\Anaconda3\envs\cbm101\lib\site-packages\ipykernel_launcher.py:1: FutureWarning: clip_upper(threshold) is deprecated, use clip(upper=threshold) instead
"""Entry point for launching an IPython kernel.
</code>
Now we can see better what is going on. Let's just remove the frames corresponding to those outliers._____no_output_____
<code>
frames = np.nonzero(np.any(df.iloc[:,:-1].values>5000, axis=1))[0]
frames_____no_output_____df.drop(index=frames, inplace=True)_____no_output_____df.plot(figsize=(16,8))
plt.legend(labels=original_names)_____no_output_____df.columns_____no_output_____
</code>
### Do some modelling of the data_____no_output_____
<code>
from sklearn.linear_model import LogisticRegression_____no_output_____lasso = LogisticRegression(penalty='l2')_____no_output_____X = df.values[:,:-1]
y = df.Class
y = y.astype(np.int) - 1 # map to 0,1_____no_output_____print(X.shape)
print(y.shape)(14976, 14)
(14976,)
lasso.fit(X,y)C:\Users\Peder\Anaconda3\envs\cbm101\lib\site-packages\sklearn\linear_model\_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
comp = (lasso.predict(X) == y).values_____no_output_____np.sum(comp.astype(np.int))/y.shape[0] # accuracy is poor_____no_output_____lasso.coef_[0].shape_____no_output_____names = dataset.feature_names_____no_output_____original_names_____no_output_____coef = lasso.coef_[0]
plt.barh(range(coef.shape[0]), coef)
plt.yticks(ticks=range(14),labels=original_names)
plt.show()_____no_output_____
</code>
Interpreting the coefficients: we naturally tend to read the magnitude of the coefficients as feature importance. That is a fair interpretation, but since we did not scale our features to a comparable range prior to fitting the model, we cannot draw that conclusion here._____no_output_____### Extra exercise. Go to [OpenML](https://openml.org) and use the search function (or just look around) to find any dataset that interests you. Load it using the above methodology, and try to do anything you can to understand the datatype, visualize it, etc._____no_output_____
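As an aside on the scaling point above (and unrelated to the extra exercise), here is a minimal sketch of standardizing the EEG features before refitting, using the same `X` and `y` as before:
<code>
from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(X)        # zero mean, unit variance per feature
clf_scaled = LogisticRegression(penalty='l2')
clf_scaled.fit(X_scaled, y)
# With comparable feature scales, coefficient magnitudes become easier to compare.
print((clf_scaled.predict(X_scaled) == y).mean())
</code>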
<code>
### YOUR CODE HERE_____no_output_____
</code>
| {
"repository": "oercompbiomed/CBM101",
"path": "C_Data_resources/2_Open_datasets.ipynb",
"matched_keywords": [
"biology",
"neuroscience"
],
"stars": 7,
"size": 946532,
"hexsha": "d032c2d8731a9a3123c839bf07368239d0b925e5",
"max_line_length": 280896,
"avg_line_length": 519.2166758091,
"alphanum_fraction": 0.9417357258
} |
# Notebook from lmorri/personalize-movielens-20m
Path: getting_started/2.View_Campaign_And_Interactions.ipynb
# View Campaign and Interactions
In the first notebook `Personalize_BuildCampaign.ipynb` you successfully built and deployed a recommendation model using deep learning with Amazon Personalize.
This notebook will expand on that and will walk you through adding the ability to react to real time behavior of users. If their intent changes while browsing a movie, you will see revised recommendations based on that behavior.
It will also showcase demo code for simulating user behavior selecting movies before the recommendations are returned._____no_output_____Below we start with just importing libraries that we need to interact with Personalize_____no_output_____
<code>
# Imports
import boto3
import json
import numpy as np
import pandas as pd
import time
import uuid_____no_output_____
</code>
Below you will paste in the campaign ARN that you used in your previous notebook. Also pick a random user ID from 50 - 300.
Lastly you will also need to find your Dataset Group ARN from the previous notebook._____no_output_____
<code>
# Setup and Config
# Recommendations from Event data
personalize = boto3.client('personalize')
personalize_runtime = boto3.client('personalize-runtime')
HRNN_Campaign_ARN = "arn:aws:personalize:us-east-1:930444659029:campaign/DEMO-campaign"
# Define User
USER_ID = "676"
# Dataset Group Arn:
datasetGroupArn = "arn:aws:personalize:us-east-1:930444659029:dataset-group/DEMO-dataset-group"
# Establish a connection to Personalize's Event Streaming
personalize_events = boto3.client(service_name='personalize-events')_____no_output_____
</code>
## Creating an Event Tracker
Before your recommendation system can respond to real-time events you will need an event tracker; the code below will generate one that can be used going forward with this lab. Feel free to name it something more clever._____no_output_____
<code>
response = personalize.create_event_tracker(
name='MovieClickTracker',
datasetGroupArn=datasetGroupArn
)
print(response['eventTrackerArn'])
print(response['trackingId'])
TRACKING_ID = response['trackingId']arn:aws:personalize:us-east-1:930444659029:event-tracker/bbe80586
b8a5944c-8095-40ff-a915-2a6af53b7f55
</code>
## Configuring Source Data
Above you'll see your tracking ID; it has been assigned to a variable, so no further action is needed from you. The lines below are going to set up the data used for recommendations so you can render the list of movies later._____no_output_____
<code>
data = pd.read_csv('./ml-20m/ratings.csv', sep=',', dtype={'userid': "int64", 'movieid': "int64", 'rating': "float64", 'timestamp': "int64"})
pd.set_option('display.max_rows', 5)
data.rename(columns = {'userId':'USER_ID','movieId':'ITEM_ID','rating':'RATING','timestamp':'TIMESTAMP'}, inplace = True)
data = data[data['RATING'] > 3] # keep only movies rated higher than 3
data = data[['USER_ID', 'ITEM_ID', 'TIMESTAMP']] # select columns that match the columns in the schema below
data_____no_output_____items = pd.read_csv('./ml-20m/movies.csv', sep=',', usecols=[0,1], header=0)
items.columns = ['ITEM_ID', 'TITLE']
user_id, item_id, _ = data.sample().values[0]
item_title = items.loc[items['ITEM_ID'] == item_id].values[0][-1]
print("USER: {}".format(user_id))
print("ITEM: {}".format(item_title))
itemsUSER: 40094
ITEM: Hotel Rwanda (2004)
</code>
## Getting Recommendations
Just like in the previous notebook, it is a great idea to get a list of recommendations first and then see how additional behavior by a user alters the recommendations._____no_output_____
<code>
# Get Recommendations as is
get_recommendations_response = personalize_runtime.get_recommendations(
campaignArn = HRNN_Campaign_ARN,
userId = USER_ID,
)
item_list = get_recommendations_response['itemList']
title_list = [items.loc[items['ITEM_ID'] == np.int(item['itemId'])].values[0][-1] for item in item_list]
print("Recommendations: {}".format(json.dumps(title_list, indent=2)))
print(item_list)Recommendations: [
"Signs (2002)",
"Panic Room (2002)",
"Vanilla Sky (2001)",
"American Pie 2 (2001)",
"Blade II (2002)",
"Bourne Identity, The (2002)",
"Star Wars: Episode II - Attack of the Clones (2002)",
"Memento (2000)",
"Fast and the Furious, The (2001)",
"Unbreakable (2000)",
"Snatch (2000)",
"Austin Powers in Goldmember (2002)",
"Resident Evil (2002)",
"xXx (2002)",
"Sum of All Fears, The (2002)",
"Others, The (2001)",
"American Beauty (1999)",
"Pulp Fiction (1994)",
"Spider-Man (2002)",
"Minority Report (2002)",
"Rock, The (1996)",
"Ring, The (2002)",
"Black Hawk Down (2001)",
"Ocean's Eleven (2001)",
"Schindler's List (1993)"
]
[{'itemId': '5502'}, {'itemId': '5266'}, {'itemId': '4975'}, {'itemId': '4718'}, {'itemId': '5254'}, {'itemId': '5418'}, {'itemId': '5378'}, {'itemId': '4226'}, {'itemId': '4369'}, {'itemId': '3994'}, {'itemId': '4011'}, {'itemId': '5481'}, {'itemId': '5219'}, {'itemId': '5507'}, {'itemId': '5400'}, {'itemId': '4720'}, {'itemId': '2858'}, {'itemId': '296'}, {'itemId': '5349'}, {'itemId': '5445'}, {'itemId': '733'}, {'itemId': '5679'}, {'itemId': '5010'}, {'itemId': '4963'}, {'itemId': '527'}]
</code>
## Simulating User Behavior
The lines below provide a code sample that simulates a user interacting with a particular item; you will then get recommendations that differ from those you started with._____no_output_____
<code>
session_dict = {}_____no_output_____def send_movie_click(USER_ID, ITEM_ID):
"""
Simulates a click as an event
and sends it to Amazon Personalize's Event Tracker
"""
# Configure Session
try:
session_ID = session_dict[USER_ID]
except:
session_dict[USER_ID] = str(uuid.uuid1())
session_ID = session_dict[USER_ID]
# Configure Properties:
event = {
"itemId": str(ITEM_ID),
}
event_json = json.dumps(event)
# Make Call
personalize_events.put_events(
trackingId = TRACKING_ID,
userId= USER_ID,
sessionId = session_ID,
eventList = [{
'sentAt': int(time.time()),
'eventType': 'EVENT_TYPE',
'properties': event_json
}]
)_____no_output_____
</code>
The cell immediately below will update the tracker as if the user had clicked a particular title._____no_output_____
<code>
# Pick a movie; we will use ID 1653, which is Gattaca
send_movie_click(USER_ID=USER_ID, ITEM_ID=1653)_____no_output_____
</code>
After executing this block you will see the alterations in the recommendations now that you have event tracking enabled and that you have sent the events to the service._____no_output_____
<code>
get_recommendations_response = personalize_runtime.get_recommendations(
campaignArn = HRNN_Campaign_ARN,
userId = str(USER_ID),
)
item_list = get_recommendations_response['itemList']
title_list = [items.loc[items['ITEM_ID'] == np.int(item['itemId'])].values[0][-1] for item in item_list]
print("Recommendations: {}".format(json.dumps(title_list, indent=2)))
print(item_list)Recommendations: [
"Signs (2002)",
"Fifth Element, The (1997)",
"Gattaca (1997)",
"Unbreakable (2000)",
"Face/Off (1997)",
"Predator (1987)",
"Dark City (1998)",
"Star Wars: Episode II - Attack of the Clones (2002)",
"Cube (1997)",
"Spider-Man (2002)",
"Game, The (1997)",
"Minority Report (2002)",
"X-Files: Fight the Future, The (1998)",
"Twelve Monkeys (a.k.a. 12 Monkeys) (1995)",
"Rock, The (1996)",
"Vanilla Sky (2001)",
"Starship Troopers (1997)",
"Bourne Identity, The (2002)",
"Sneakers (1992)",
"American Beauty (1999)",
"Austin Powers in Goldmember (2002)",
"Memento (2000)",
"Pulp Fiction (1994)",
"X-Men (2000)",
"Star Wars: Episode I - The Phantom Menace (1999)"
]
[{'itemId': '5502'}, {'itemId': '1527'}, {'itemId': '1653'}, {'itemId': '3994'}, {'itemId': '1573'}, {'itemId': '3527'}, {'itemId': '1748'}, {'itemId': '5378'}, {'itemId': '2232'}, {'itemId': '5349'}, {'itemId': '1625'}, {'itemId': '5445'}, {'itemId': '1909'}, {'itemId': '32'}, {'itemId': '733'}, {'itemId': '4975'}, {'itemId': '1676'}, {'itemId': '5418'}, {'itemId': '1396'}, {'itemId': '2858'}, {'itemId': '5481'}, {'itemId': '4226'}, {'itemId': '296'}, {'itemId': '3793'}, {'itemId': '2628'}]
</code>
## Conclusion
You can now see that recommendations are altered by changing the movie that a user interacts with; this system can be adapted to any application where users interact with a collection of items. These tools are available at any time to pull down and start exploring what is possible with the data you have.
Finally when you are ready to remove the items from your account, open the `Cleanup.ipynb` notebook and execute the steps there.
_____no_output_____
<code>
eventTrackerArn = response['eventTrackerArn']
print("Tracker ARN is: " + str(eventTrackerArn))_____no_output_____
</code>
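If you would rather delete just the event tracker created in this notebook right away (instead of waiting for `Cleanup.ipynb`), a minimal sketch using the `personalize` client and the `eventTrackerArn` captured above:
<code>
# Deleting the tracker stops event ingestion for this dataset group.
personalize.delete_event_tracker(eventTrackerArn=eventTrackerArn)
</code>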
| {
"repository": "lmorri/personalize-movielens-20m",
"path": "getting_started/2.View_Campaign_And_Interactions.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 17515,
"hexsha": "d032c73eb9480622db0c4f5e31a1bd7f342028d1",
"max_line_length": 507,
"avg_line_length": 32.0787545788,
"alphanum_fraction": 0.4954610334
} |
# Notebook from MusabNaik/LinMLTBS
Path: LinMLTBS.ipynb
<code>
%load_ext Cython
import numpy as np
np.set_printoptions(precision=2,suppress=True,linewidth=250,threshold=2000)_____no_output_____import numpy as np
import pandas as pd
import pyBigWig
import math
import csv
import multiprocessing_____no_output_____bw = pyBigWig.open("/home/musab/bigwig/wgEncodeSydhTfbsHepg2Arid3anb100279IggrabSig.bigWig")
chromo = 'chr14'
total_size = bw.chroms()[chromo]
F=bw.values(chromo,0,total_size,numpy=True)
F[np.isnan(F)] = 0_____no_output_____%%cython -a
import numpy as np
import pandas as pd
import math
import copy
import os
import cython
@cython.boundscheck(False)
@cython.wraparound(False)
cdef V_BC ( long i, long j,double [::1] CP,double [::1] CiP):
cdef double P_SUM,iP_SUM
if i == 1:
P_SUM = CP[j-1]
iP_SUM = CiP[j-1]
else:
P_SUM = CP[j-1]-CP[i-2]
iP_SUM = CiP[j-1]-CiP[i-2]
try:
return (P_SUM*(iP_SUM/P_SUM)**2)
except:
return 0
@cython.boundscheck(False)
@cython.wraparound(False)
cdef mean( long i, long j,double [::1] CP,double [::1] Cip):
cdef double P_SUM,iP_SUM
if i == 1:
P_SUM = CP[j-1]
iP_SUM = Cip[j-1]
else:
P_SUM = CP[j-1]-CP[i-2]
iP_SUM = Cip[j-1]-Cip[i-2]
try:
return iP_SUM/P_SUM
except:
return 0
@cython.boundscheck(False)
@cython.wraparound(False)
cdef lookup(long col,long j,long row,double [:,::1] C,double [::1] CP,double [::1] CiP):
return C[col+j,j] + V_BC(col+(j+1) , row+(j+1),CP,CiP)
@cython.boundscheck(False)
@cython.wraparound(False)
cdef SMAWK(long [::1] mri,long [::1] mci, long j,double [:,::1] C,long [:,::1] D,long [::1]mposi,double [::1] CP,double [::1] CiP):
cdef long [::1] aci
cdef long [::1] bri
if mci.shape[0] != mri.shape[0]:
aci=REDUCE(mri,mci,j,C,CP,CiP)
else:
aci=mci
if (mri.shape[0]==1 and aci.shape[0]==1):
C[mri[0]+j+1,j+1]= lookup(aci[0],j,mri[0],C,CP,CiP)
mposi[mri[0]]=D[mri[0]+j,j]=aci[0]
return
bri = mri[1::2].copy()
SMAWK(bri,aci,j,C,D,mposi,CP,CiP)
MFILL(mri,aci,j,C,D,mposi,CP,CiP)
@cython.boundscheck(False)
@cython.wraparound(False)
cdef REDUCE( long [::1] rows, long [::1] cols, long j,double [:,::1] C,double [::1] CP,double [::1] CiP):
cdef long p = rows.shape[0]
cdef long ncols = cols.shape[0]
cdef long m = cols.shape[0]
predd = np.arange(-1,m+1,dtype=np.int64)
cdef long [::1] pred = predd
valuee = np.full((m+1),-1,dtype=np.double)
cdef double [::1] value = valuee
cdef long a=2
cdef long b=1
rett = np.empty(p,dtype=np.int64)
cdef long [::1] ret = rett
cdef long lc = ncols+1
value[1]=lookup(cols[0],j,rows[0],C,CP,CiP)
while m > p:
if value[pred[a]] == -1:
value[pred[a]] = (lookup(cols[pred[a]-1] ,j, rows[b-1],C,CP,CiP) if cols[pred[a]-1] <= rows[b-1] else 0)
if value[pred[a]] >= (lookup(cols[a-1] ,j, rows[b-1],C,CP,CiP) if cols[a-1] <= rows[b-1] else 0) :
if b < p :
b = b+1
value[a] = (lookup(cols[a-1] ,j, rows[b-1],C,CP,CiP) if cols[a-1] <= rows[b-1] else 0)
else:
pred[a+1] = pred[a]
m = m-1
a = a+1
else:
pred[a] = pred[pred[a]]
m = m-1
if b != 1:
b = b-1
else:
a = a+1
for i in range(p-1,-1,-1):
ret[i]=cols[pred[lc]-1]
lc=pred[lc]
return ret
@cython.boundscheck(False)
@cython.wraparound(False)
cdef MFILL( long [::1] ari, long [::1] aci, long j,double [:,::1] C,long [:,::1] D,long [::1]mposi,double [::1] CP,double [::1] CiP):
cdef long m = ari.shape[0]
cdef long n = aci.shape[0]
cdef long ii = n-1
cdef long start
if (m % 2) == 0:
start = m-2
else:
start = m-1
cdef long s,e,ar,ac,i
cdef double MAX,cc,vv,CURRENT_MAT
for i in range(start,-1,-2):
if (i==0):
s=int(aci[i])
e=int(mposi[ari[i+1]])
elif (i==m-1):
s=int(mposi[ari[i-1]])
e=int(aci[n-1])
else:
s=int(mposi[ari[i-1]])
e=int(mposi[ari[i+1]])
if(e > ari[i]):
e=int(ari[i])
MAX = 0
while True:
ac = aci[ii]
ar = ari[i]
if (ac > ar):
pass
else:
CURRENT_MAT = lookup(ac,j,ar,C,CP,CiP)
if (MAX < CURRENT_MAT):
MAX = CURRENT_MAT
mposi[ar]=ac
C[ar+j+1,j+1]=MAX
D[ar+j,j]=ac+j
if(ac<=s or ii==-1):
break
ii-=1
@cython.boundscheck(False)
@cython.wraparound(False)
cdef findBestK( long n, long k,long [:,::1] D,double [::1] P,double [::1] CP,double [::1] Cip,double [::1] CF):
cdef double E1 = 0
cdef double mean1 = mean(1,n,CP,Cip)
cdef long J,kk,j,a,K,i,t
for J in range(1,n):
E1 += (P[J])*(abs(J-mean1))
cdef double bestAlpha = 0
cdef long bestK = 0
cdef double Dk,Ek,meanK,A,alpha
cdef long [::1] T
cdef long [::1] bestT
for kk in range(k,1,-1):
T = np.zeros(kk+2,dtype=np.int64)
T[0] = 1
T[kk+1] = n
i = n
for j in range(kk,0,-1):
T[j] = D[i-1,j]
i = int(D[i-1,j])
if i < 1:
i=1
Dk = mean(T[kk],T[kk+1],CP,Cip)-mean(T[0],T[1],CP,Cip)
Ek = 1
for K in range(kk+1):
meanK = mean(T[K],T[K+1],CP,Cip)
for J in range(T[K],T[K+1]):
Ek += (P[J])*(abs(J-meanK))
A = 1
for a in range (kk+1):
t = T[a]
A += P[t]
A *= math.sqrt(kk)
alpha=((((E1/Ek)*Dk)**2)/A)
if alpha > bestAlpha :
bestK = kk
bestT = T
bestAlpha = alpha
cdef double [::1] vol= np.empty(bestK)
cdef double [::1] leng = np.empty(bestK)
if bestK != 0:
for i in range (1,bestK+1):
vol[i-1]=(CF[bestT[i]]-CF[bestT[i-1]])
leng[i-1]=(bestT[i]-bestT[i-1])
else:
bestT = np.empty(0,dtype=long)
return bestT,bestK,vol,leng
@cython.boundscheck(False)
@cython.wraparound(False)
def L(float [::1] F, long k):
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import math
import copy
import time
import os
cdef long n = len(F)
cdef double F_SUM = sum(F)
if F_SUM == 0:
return [],0,[],[]
CC = np.zeros((n+1,k+2),dtype=np.float64)
DD = np.zeros((n+1,k+2),dtype=np.int64)
cdef double [:, ::1] C = CC
cdef long[:, ::1] D = DD
cdef long sizeMAT
cdef long[::1] mposi
cdef double [::1] P = np.empty(n)
cdef double [::1] CP = np.empty(n)
cdef double [::1] CF = np.empty(n)
cdef double [::1] CiP = np.empty(n)
cdef double [::1] Cip = np.empty(n)
cdef double PP,iP,ip
cdef double firstValue = F[0]/F_SUM
CF[0]=(F[0])
P[0]=(firstValue)
CP[0]=(firstValue)
CiP[0]=(firstValue)
Cip[0]=(firstValue)
cdef long i,j,dl
for i in range(1,n):
CF[i]=(CF[i-1]+F[i])
PP = F[i]/F_SUM
iP = PP * i
ip = PP * (i+1)
P[i]=(PP)
CP[i]=(CP[i-1]+PP)
CiP[i]=(CiP[i-1]+iP)
Cip[i]=(Cip[i-1]+ip)
mminTj = np.zeros(k+2,dtype=np.int64)
mmaxTj = np.zeros(k+2,dtype=np.int64)
cdef long[::1] minTj = mminTj
cdef long[::1] maxTj = mmaxTj
for j in range(0,k+2):
if j == k+1:
minTj[j] = n
else:
minTj[j] = j
for j in range(0,k+2):
if j == 0:
maxTj[j] = 0
else:
maxTj[j] = n-k+j-1
cdef long l,tj,ex=k-2
cdef double v,f
cdef long [::1] initial_rc
for j in range (0,k+1):
if j == 0:
for tj in range(minTj[j+1],maxTj[j+1]+1+ex):
C[tj,j+1]=V_BC(1,tj,CP,CiP)
else:
if (j>=(3)):
ex-=1
sizeMAT = n-k+1+ex
if (j != k):
mposi = np.zeros(sizeMAT-1,dtype=np.int64)
initial_rc = np.arange(0,sizeMAT-1,dtype=int)
SMAWK(initial_rc,initial_rc,j,C,D,mposi,CP,CiP)
else:
dl = minTj[k]
for l in range(minTj[k],maxTj[k]+1):
f = V_BC(l+1,minTj[k+1],CP,CiP)
v = f + C[l,k]
if (C[minTj[k+1],k+1] < v ):
C[minTj[k+1],k+1] = v
dl = l
D[maxTj[k],k]=dl
D[n,k+1] = n
cdef long [::1] bestT
cdef long bestK
cdef double [::1] vol
cdef double [::1] leng
bestT,bestK,vol,leng = findBestK(n,k,D,P,CP,Cip,CF)
return (bestT,bestK,vol,leng) _____no_output_____def window(tup):
start = tup[0]
end = tup[1]
froM = start
winSize = 10000
to = froM+winSize
'''
fname = str(str(chromo)+"-"+str(start)+"-"+str(end)+".csv")
with open(fname, 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(["START", "END", "LENGTH", "VOLUME", "RANGE"])
'''
df = pd.DataFrame(columns=["CHR","START", "END", "LENGTH", "VOLUME", "RANGE"])
while(froM < end):
try:
bestT=[]
bestK=0
sF=F[froM:to]
if len(sF) != winSize:
sF=np.append(sF,np.zeros(winSize - len(sF),dtype=sF.dtype))
bestT,bestK,vol,leng=(L(sF,6))
'''
with open(fname, 'a', newline='') as file:
writer = csv.writer(file)
for i in range(bestK):
writer.writerow([froM+bestT[i],froM+bestT[i+1],leng[i],vol[i],str(str(froM)+':'+str(to))])
'''
if bestK != 0:
for i in range(bestK):
df = df.append({"CHR":chromo,"START":froM+bestT[i],"END":froM+bestT[i+1], "LENGTH":leng[i], "VOLUME":vol[i], "RANGE":str(str(froM)+':'+str(to))},ignore_index=True)
if bestK == 0:
froM+=winSize
else:
froM+=bestT[bestK]
to = froM+winSize
except Exception as e:
print("From:{}, To:{}, bestK={}".format(froM,to,bestK))
print('from Window, ',e)
froM+=winSize
to=froM+winSize
return df_____no_output_____%%time
path="/home/musab/chip-seq/GM12878-H3K27Ac_COPY/"
file = "wgEncodeBroadHistoneGm12878H3k27acStdSig.bigWig"
bw = pyBigWig.open(str(path+file))
full_result_list = []
for chromo in bw.chroms():
total_size = bw.chroms()[chromo]
F=bw.values(chromo,0,total_size,numpy=True)
F[np.isnan(F)] = 0
n = multiprocessing.cpu_count()
frag = math.ceil(total_size/n)
job = []
st=0
while (st<total_size):
en =st+frag
job.append((st,en))
st = en+1
pool = multiprocessing.Pool(processes=n)
r = pool.map(window,job)
result = pd.concat(r)
result = result.reset_index(drop=True)
full_result_list.extend(r)
result.to_csv(str(path+"results/"+chromo+'.csv'), index=False)
pool.close()
pool.join()
print("Chromo {} Done !".format(chromo))
full_result = pd.concat(full_result_list)
full_result = full_result.reset_index(drop=True)
full_result.to_csv(str(path+"results/"+'full.csv'), index=False)Chromo chr1 Done !
Chromo chr10 Done !
Chromo chr11 Done !
Chromo chr12 Done !
Chromo chr13 Done !
Chromo chr14 Done !
Chromo chr15 Done !
Chromo chr16 Done !
Chromo chr17 Done !
Chromo chr18 Done !
Chromo chr19 Done !
Chromo chr2 Done !
Chromo chr20 Done !
Chromo chr21 Done !
Chromo chr22 Done !
Chromo chr3 Done !
Chromo chr4 Done !
Chromo chr5 Done !
Chromo chr6 Done !
Chromo chr7 Done !
Chromo chr8 Done !
Chromo chr9 Done !
Chromo chrX Done !
CPU times: user 46.2 s, sys: 21.6 s, total: 1min 7s
Wall time: 28min 31s
import os
import glob
import pandas as pd
import math
os.chdir("/home/musab/chip-seq/GM12878-H3K27Ac/results/")
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
#combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames ])
combined_csv = combined_csv.sort_values('VOLUME', ascending=False)
#export to csv
# del(combined_csv['RANGE'])
# del(combined_csv['VOLUME'])
# del(combined_csv['LENGTH'])
#combined_csv.to_csv( "annotate.bed", header=False,index=False, sep='\t',encoding='utf-8-sig')_____no_output_____anno = combined_csv_____no_output_____del(anno['RANGE'])
del(anno['VOLUME'])
del(anno['LENGTH'])_____no_output_____length = anno.shape[0]_____no_output_____annotate = anno.head(math.ceil(40821*1.5))#math.ceil(length*0.01))_____no_output_____annotate.shape_____no_output_____annotate.to_csv( "anno_1.5.bed", header=False,index=False, sep='\t',encoding='utf-8-sig')_____no_output_____annotate.tail()_____no_output_____anno.tail()_____no_output_____
</code>
| {
"repository": "MusabNaik/LinMLTBS",
"path": "LinMLTBS.ipynb",
"matched_keywords": [
"ChIP-seq"
],
"stars": null,
"size": 337047,
"hexsha": "d038c8f9c849845e7e96b85044867398fe6e9365",
"max_line_length": 1667,
"avg_line_length": 97.8934069126,
"alphanum_fraction": 0.5815123707
} |
# Notebook from detrout/encode4-curation
Path: encode-mirna-2018-01.ipynb
Submitting various things for end of grant._____no_output_____
<code>
import os
import sys
import requests
import pandas
import paramiko
import json
from IPython import display_____no_output_____from curation_common import *
from htsworkflow.submission.encoded import DCCValidator_____no_output_____PANDAS_ODF = os.path.expanduser('~/src/odf_pandas')
if PANDAS_ODF not in sys.path:
sys.path.append(PANDAS_ODF)
from pandasodf import ODFReader_____no_output_____import gcat_____no_output_____from htsworkflow.submission.encoded import Document
from htsworkflow.submission.aws_submission import run_aws_cp_____no_output_____# live server & control file
#server = ENCODED('www.encodeproject.org')
spreadsheet_name = "ENCODE_test_miRNA_experiments_01112018"
# test server & datafile
server = ENCODED('test.encodedcc.org')
#spreadsheet_name = os.path.expanduser('~diane/woldlab/ENCODE/C1-encode3-limb-2017-testserver.ods')
server.load_netrc()
validator = DCCValidator(server)_____no_output_____award = 'UM1HG009443'_____no_output_____
</code>
# Submit Documents_____no_output_____Example Document submission_____no_output_____
<code>
#atac_uuid = '0fc44318-b802-474e-8199-f3b6d708eb6f'
#atac = Document(os.path.expanduser('~/proj/encode3-curation/Wold_Lab_ATAC_Seq_protocol_December_2016.pdf'),
# 'general protocol',
# 'ATAC-Seq experiment protocol for Wold lab',
# )
#body = atac.create_if_needed(server, atac_uuid)
#print(body['@id'])_____no_output_____
</code>
# Submit Annotations_____no_output_____
<code>
#sheet = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
#annotations = sheet.parse('Annotations', header=0)
#created = server.post_sheet('/annotations/', annotations, verbose=True, dry_run=True)
#print(len(created))_____no_output_____#if created:
# annotations.to_excel('/tmp/annotations.xlsx', index=False)_____no_output_____
</code>
# Register Biosamples_____no_output_____
<code>
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
biosample = book.parse('Biosamples', header=0)
created = server.post_sheet('/biosamples/', biosample,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))_____no_output_____if created:
biosample.to_excel('/dev/shm/biosamples.xlsx', index=False)_____no_output_____
</code>
# Register Libraries_____no_output_____
<code>
print(spreadsheet_name)
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
libraries = book.parse('Libraries', header=0)
created = server.post_sheet('/libraries/', libraries, verbose=True, dry_run=True, validator=validator)
print(len(created))_____no_output_____if created:
libraries.to_excel('/dev/shm/libraries.xlsx', index=False)_____no_output_____
</code>
# Register Experiments_____no_output_____
<code>
print(server.server)
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
experiments = book.parse('Experiments', header=0)
created = server.post_sheet('/experiments/', experiments, verbose=True, dry_run=False, validator=validator)
print(len(created))_____no_output_____if created:
experiments.to_excel('/dev/shm/experiments.xlsx', index=False)_____no_output_____
</code>
# Register Replicates_____no_output_____
<code>
print(server.server)
print(spreadsheet_name)
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
replicates = book.parse('Replicates', header=0)
created = server.post_sheet('/replicates/', replicates, verbose=True, dry_run=True, validator=validator)
print(len(created))_____no_output_____if created:
replicates.to_excel('/dev/shm/replicates.xlsx', index=False)_____no_output_____
</code>
| {
"repository": "detrout/encode4-curation",
"path": "encode-mirna-2018-01.ipynb",
"matched_keywords": [
"ATAC-seq"
],
"stars": null,
"size": 7098,
"hexsha": "d039321969de4a6f9e276e0c0e605881f60b4827",
"max_line_length": 117,
"avg_line_length": 22.8967741935,
"alphanum_fraction": 0.5517047056
} |
# Notebook from schwaaweb/aimlds1_11-NLP
Path: M11_A_DJ_NLP_Assignment.ipynb
[View in Colaboratory](https://colab.research.google.com/github/schwaaweb/aimlds1_11-NLP/blob/master/M11_A_DJ_NLP_Assignment.ipynb)_____no_output_____### Assignment: Natural Language Processing_____no_output_____In this assignment, you will work with a data set that contains restaurant reviews. You will use a Naive Bayes model to classify the reviews (positive or negative) based on the words in the review. The main objective of this assignment is gauge the performance of a Naive Bayes model by using a confusion matrix; however in order to ascertain the efficiacy of the model, you will have to first train the Naive Bayes model with a portion (i.e. 70%) of the underlying data set and then test it against the remainder of the data set . Before you can train the model, you will have to go through a sequence of steps to get the data ready for training the model.
Steps you may need to perform:
**1) **Read in the list of restaurant reviews
**2)** Convert the reviews into a list of tokens
**3) **You will most likely have to eliminate stop words
**4)** You may have to utilize stemming or lemmatization to determine the base form of the words
**5) **You will have to vectorize the data (i.e. construct a document term/word matix) wherein select words from the reviews will constitute the columns of the matrix and the individual reviews will be part of the rows of the matrix
**6) ** Create 'Train' and 'Test' data sets (i.e. 70% of the underlying data set will constitute the training set and 30% of the underlying data set will constitute the test set)
**7)** Train a Naive Bayes model on the Train data set and test it against the test data set
**8) **Construct a confusion matirx to gauge the performance of the model
**Dataset**: https://www.dropbox.com/s/yl5r7kx9nq15gmi/Restaurant_Reviews.tsv?raw=1
_____no_output_____**1) **Read in the list of restaurant reviews_____no_output_____
<code>
#%%time
#!wget -c https://www.dropbox.com/s/yl5r7kx9nq15gmi/Restaurant_Reviews.tsv?raw=1 && mv Restaurant_Reviews.tsv?raw=1 Restaurant_Reviews.tsv
!ls -lh *tsv-rw-r--r--@ 1 darwinm staff 60K Jun 13 17:43 Restaurant_Reviews.tsv
%%time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
import string
import nltk
nltk.download('all')
[nltk_data] Downloading collection 'all'
[nltk_data] |
[nltk_data] | Downloading package abc to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package abc is already up-to-date!
[nltk_data] | Downloading package alpino to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package alpino is already up-to-date!
[nltk_data] | Downloading package biocreative_ppi to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package biocreative_ppi is already up-to-date!
[nltk_data] | Downloading package brown to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package brown is already up-to-date!
[nltk_data] | Downloading package brown_tei to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package brown_tei is already up-to-date!
[nltk_data] | Downloading package cess_cat to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package cess_cat is already up-to-date!
[nltk_data] | Downloading package cess_esp to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package cess_esp is already up-to-date!
[nltk_data] | Downloading package chat80 to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package chat80 is already up-to-date!
[nltk_data] | Downloading package city_database to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package city_database is already up-to-date!
[nltk_data] | Downloading package cmudict to
[nltk_data] | /Users/darwinm/nltk_data...
[nltk_data] | Package cmudict is already up-to-date!
[nltk_data] | Downloading package comparative_sentences to
[nltk_data]  |   ... (per-package download log omitted: every package in the
[nltk_data]  |   `all` collection reports "already up-to-date") ...
[nltk_data]  Done downloading collection all
CPU times: user 4.32 s, sys: 1.79 s, total: 6.1 s
Wall time: 50.3 s
df = pd.read_csv('Restaurant_Reviews.tsv', sep='\t')
df.head()_____no_output_____df.tail()_____no_output_____
</code>
**2)** Convert the reviews into a list of tokens_____no_output_____
<code>
review = df['Review'] # dropping the like here
print(review)
len(review)0 Wow... Loved this place.
1 Crust is not good.
2 Not tasty and the texture was just nasty.
3 Stopped by during the late May bank holiday of...
4 The selection on the menu was great and so wer...
5 Now I am getting angry and I want my damn pho.
6 Honeslty it didn't taste THAT fresh.)
7 The potatoes were like rubber and you could te...
8 The fries were great too.
9 A great touch.
10 Service was very prompt.
11 Would not go back.
12 The cashier had no care what so ever on what I...
13 I tried the Cape Cod ravoli, chicken, with cra...
14 I was disgusted because I was pretty sure that...
15 I was shocked because no signs indicate cash o...
16 Highly recommended.
17 Waitress was a little slow in service.
18 This place is not worth your time, let alone V...
19 did not like at all.
20 The Burrittos Blah!
21 The food, amazing.
22 Service is also cute.
23 I could care less... The interior is just beau...
24 So they performed.
25 That's right....the red velvet cake.....ohhh t...
26 - They never brought a salad we asked for.
27 This hole in the wall has great Mexican street...
28 Took an hour to get our food only 4 tables in ...
29 The worst was the salmon sashimi.
...
970 I immediately said I wanted to talk to the man...
971 The ambiance isn't much better.
972 Unfortunately, it only set us up for disapppoi...
973 The food wasn't good.
974 Your servers suck, wait, correction, our serve...
975 What happened next was pretty....off putting.
976 too bad cause I know it's family owned, I real...
977 Overpriced for what you are getting.
978 I vomited in the bathroom mid lunch.
979 I kept looking at the time and it had soon bec...
980 I have been to very few places to eat that und...
981 We started with the tuna sashimi which was bro...
982 Food was below average.
983 It sure does beat the nachos at the movies but...
984 All in all, Ha Long Bay was a bit of a flop.
985 The problem I have is that they charge $11.99 ...
986 Shrimp- When I unwrapped it (I live only 1/2 a...
987 It lacked flavor, seemed undercooked, and dry.
988 It really is impressive that the place hasn't ...
989 I would avoid this place if you are staying in...
990 The refried beans that came with my meal were ...
991 Spend your money and time some place else.
992 A lady at the table next to us found a live gr...
993 the presentation of the food was awful.
994 I can't tell you how disappointed I was.
995 I think food should have flavor and texture an...
996 Appetite instantly gone.
997 Overall I was not impressed and would not go b...
998 The whole experience was underwhelming, and I ...
999 Then, as if I hadn't wasted enough of my life ...
Name: Review, Length: 1000, dtype: object
</code>
**3)** You will most likely have to eliminate stop words_____no_output_____**4)** You may have to utilize stemming or lemmatization to determine the base form of the words_____no_output_____
<code>
import re
import string
stopwords = nltk.corpus.stopwords.words('english')
ps = nltk.PorterStemmer()
# Eliminate punctuation
# Tokenize based on whitespace
# Stem the text
# Remove stopwords
def process_text(txt):
    eliminate_punct = "".join([char.lower() for char in txt if char not in string.punctuation])
    tokens = re.split(r'\W+', eliminate_punct)  # tokenize the lowercased, punctuation-free text
    txt = [ps.stem(word) for word in tokens if word not in stopwords]
    return txt
df['clean_review'] = df['Review'].apply(lambda x: process_text(x))
df.head()_____no_output_____import gensim
# Use the Gensim document to create a dictionary - a dictionary maps every word to a number
dictionary = gensim.corpora.Dictionary(df['clean_review'])
# Examine the length of the dictionary
num_of_words = len(dictionary)
print("# of words in dictionary: {}".format(num_of_words))
#for index,word in dictionary.items():
# print(index,word)
print(dictionary)# of words in dictionary: 1668
Dictionary(1668 unique tokens: ['', 'love', 'place', 'wow', 'crust']...)
#print(dictionary.token2id)_____no_output_____
</code>
**5)** You will have to vectorize the data (i.e. construct a document term/word matrix) wherein select words from the reviews will constitute the columns of the matrix and the individual reviews will be part of the rows of the matrix_____no_output_____
<code>
from pprint import pprint_____no_output_____%%time
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
def cv(data):
count_vectorizer = CountVectorizer()
emb = count_vectorizer.fit_transform(data)
return emb, count_vectorizer
list_corpus = df["clean_review"].tolist()
list_labels = df["Liked"].tolist()
X_train, X_test, y_train, y_test = train_test_split(list_corpus, list_labels, test_size=0.3, random_state=42)
#X_train_counts, count_vectorizer = cv(X_train)
#X_test_counts = count_vectorizer.transform(X_test)
#pprint(X_train)
#from sklearn.feature_extraction.text import CountVectorizer
#count_vect = CountVectorizer(analyzer=process_text, max_features=1668)
#W_counts = count_vect.fit_transform(df['clean_review'])
#print(W_counts.shape)
#print(count_vect.get_feature_names())CPU times: user 2.39 ms, sys: 5.47 ms, total: 7.86 ms
Wall time: 25 ms
%%time
corpus = [dictionary.doc2bow(text) for text in list_corpus]CPU times: user 18 ms, sys: 3.51 ms, total: 21.5 ms
Wall time: 26 ms
tfidf = gensim.models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
index = gensim.similarities.MatrixSimilarity(tfidf[corpus])
sims = index[corpus_tfidf]
#for vector in corpus:
# print(vector)
print(sims.shape)(1000, 1000)
</code>
**6)** Create 'Train' and 'Test' data sets (i.e. 70% of the underlying data set will constitute the training set and 30% of the underlying data set will constitute the test set)_____no_output_____**7)** Train a Naive Bayes model on the Train data set and test it against the test data set
_____no_output_____**8)** Construct a confusion matrix to gauge the performance of the model (see the sketch below)_____no_output_____
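The original notebook stops before steps **6)** to **8)** are implemented. Below is one possible sketch (not part of the original assignment) of how they could be completed with scikit-learn, reusing the `X_train`/`X_test`/`y_train`/`y_test` split built above; the `identity` analyzer function is an illustrative assumption for feeding already-tokenized reviews into `CountVectorizer`.
<code>
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, accuracy_score

# X_train / X_test are lists of token lists from the split above, so we tell
# CountVectorizer to use the tokens as-is instead of re-tokenizing raw strings.
def identity(tokens):
    return tokens

count_vectorizer = CountVectorizer(analyzer=identity)
X_train_counts = count_vectorizer.fit_transform(X_train)
X_test_counts = count_vectorizer.transform(X_test)

# Step 7: train a Multinomial Naive Bayes classifier on the 70% training split
nb = MultinomialNB()
nb.fit(X_train_counts, y_train)

# Step 8: evaluate on the held-out 30% and build the confusion matrix
y_pred = nb.predict(X_test_counts)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
</code>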
| {
"repository": "schwaaweb/aimlds1_11-NLP",
"path": "M11_A_DJ_NLP_Assignment.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": null,
"size": 43026,
"hexsha": "d03b37650373a173076fdfa5209cf4be6d7e3d05",
"max_line_length": 667,
"avg_line_length": 42.017578125,
"alphanum_fraction": 0.5319341793
} |
# Notebook from bgalbraith/course-content
Path: tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial 1: Bayes with a binary hidden state
**Week 3, Day 1: Bayesian Decisions**
**By Neuromatch Academy**
__Content creators:__ [insert your name here]
__Content reviewers:__ _____no_output_____# Tutorial Objectives
This is the first in a series of two core tutorials on Bayesian statistics. In these tutorials, we will explore the fundamental concepts of the Bayesian approach from two perspectives. This tutorial works through an example of Bayesian inference and decision making with a binary hidden state. The second main tutorial extends these concepts to a continuous hidden state. Over the next days, each of these basic ideas will be extended: first through time, as we consider what happens when we infer a hidden state from multiple observations and when the hidden state changes across time. On the third day, we will introduce how to use inference and decisions to select actions for optimal control. For this tutorial, you will be introduced to our binary state fishing problem!
This notebook will introduce the fundamental building blocks for Bayesian statistics:
1. How do we use probability distributions to represent hidden states?
2. How does marginalization work and how can we use it?
3. How do we combine new information with our prior knowledge?
4. How do we combine the possible loss (or gain) for making a decision with our probabilistic knowledge?
_____no_output_____
<code>
#@title Video 1: Introduction to Bayesian Statistics
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='JiEIn9QsrFg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
## Setup
Please execute the cells below to initialize the notebook environment._____no_output_____
<code>
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches
from matplotlib import transforms
from matplotlib import gridspec
from scipy.optimize import fsolve
from collections import namedtuple_____no_output_____#@title Figure Settings
import ipywidgets as widgets # interactive display
from ipywidgets import GridspecLayout
from IPython.display import clear_output
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
import warnings
warnings.filterwarnings("ignore")_____no_output_____# @title Plotting Functions
def plot_joint_probs(P, ):
assert np.all(P >= 0), "probabilities should be >= 0"
# normalize if not
P = P / np.sum(P)
marginal_y = np.sum(P,axis=1)
marginal_x = np.sum(P,axis=0)
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.005
# start with a square Figure
fig = plt.figure(figsize=(5, 5))
joint_prob = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]
rect_x_cmap = plt.cm.Blues
rect_y_cmap = plt.cm.Reds
# Show joint probs and marginals
ax = fig.add_axes(joint_prob)
ax_x = fig.add_axes(rect_histx, sharex=ax)
ax_y = fig.add_axes(rect_histy, sharey=ax)
# Show joint probs and marginals
ax.matshow(P,vmin=0., vmax=1., cmap='Greys')
ax_x.bar(0, marginal_x[0], facecolor=rect_x_cmap(marginal_x[0]))
ax_x.bar(1, marginal_x[1], facecolor=rect_x_cmap(marginal_x[1]))
ax_y.barh(0, marginal_y[0], facecolor=rect_y_cmap(marginal_y[0]))
ax_y.barh(1, marginal_y[1], facecolor=rect_y_cmap(marginal_y[1]))
# set limits
ax_x.set_ylim([0,1])
ax_y.set_xlim([0,1])
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{P[i,j]:.2f}"
ax.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = marginal_x[i]
c = f"{v:.2f}"
ax_x.text(i, v +0.1, c, va='center', ha='center', color='black')
v = marginal_y[i]
c = f"{v:.2f}"
ax_y.text(v+0.2, i, c, va='center', ha='center', color='black')
# set up labels
ax.xaxis.tick_bottom()
ax.yaxis.tick_left()
ax.set_xticks([0,1])
ax.set_yticks([0,1])
ax.set_xticklabels(['Silver','Gold'])
ax.set_yticklabels(['Small', 'Large'])
ax.set_xlabel('color')
ax.set_ylabel('size')
ax_x.axis('off')
ax_y.axis('off')
return fig
# test
# P = np.random.rand(2,2)
# P = np.asarray([[0.9, 0.8], [0.4, 0.1]])
# P = P / np.sum(P)
# fig = plot_joint_probs(P)
# plt.show(fig)
# plt.close(fig)
# fig = plot_prior_likelihood(0.5, 0.3)
# plt.show(fig)
# plt.close(fig)
def plot_prior_likelihood_posterior(prior, likelihood, posterior):
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey = ax_prior)
rect_colormap = plt.cm.Blues
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0, 0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1, 0]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='Greens')
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m (right) | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Posterior p(s | m)')
ax_posterior.xaxis.set_ticks_position('bottom')
ax_posterior.spines['left'].set_visible(False)
ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{posterior[i,j]:.2f}"
ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i, 0]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
def plot_prior_likelihood(ps, p_a_s1, p_a_s0, measurement):
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
assert 0.0 <= ps <= 1.0
prior = np.asarray([ps, 1 - ps])
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.22
left_space = left + small_width + padding
small_padding = 0.05
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + width + small_padding, bottom , small_width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_prior)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
# ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
# yticks = [0, 1], yticklabels = ['left', 'right'],
# ylabel = 'state (s)', xlabel = 'measurement (m)',
# title = 'Posterior p(s | m)')
# ax_posterior.xaxis.set_ticks_position('bottom')
# ax_posterior.spines['left'].set_visible(False)
# ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{posterior[i,j]:.2f}"
# ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
return fig
# fig = plot_prior_likelihood(0.5, 0.3)
# plt.show(fig)
# plt.close(fig)
from matplotlib import colors
def plot_utility(ps):
prior = np.asarray([ps, 1 - ps])
utility = np.array([[2, -3], [-2, 1]])
expected = prior @ utility
# definitions for the axes
left, width = 0.05, 0.16
bottom, height = 0.05, 0.9
padding = 0.04
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(17, 3))
rect_prior = [left, bottom, small_width, height]
rect_utility = [left + added_space , bottom , width, height]
rect_expected = [left + 2* added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_utility = fig.add_axes(rect_utility, sharey=ax_prior)
ax_expected = fig.add_axes(rect_expected)
rect_colormap = plt.cm.Blues
# Data of plots
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1]))
ax_utility.matshow(utility, cmap='cool')
norm = colors.Normalize(vmin=-3, vmax=3)
ax_expected.bar(0, expected[0], facecolor = rect_colormap(norm(expected[0])))
ax_expected.bar(1, expected[1], facecolor = rect_colormap(norm(expected[1])))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Probability of state")
ax_prior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected utility plot details
ax_expected.set(title = 'Expected utility', ylim = [-3, 3],
xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
yticks = [])
ax_expected.xaxis.set_ticks_position('bottom')
ax_expected.spines['left'].set_visible(False)
ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, 2.5, c, va='center', ha='center', color='black')
return fig
def plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,measurement):
assert 0.0 <= ps <= 1.0
assert 0.0 <= p_a_s1 <= 1.0
assert 0.0 <= p_a_s0 <= 1.0
prior = np.asarray([ps, 1 - ps])
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
utility = np.array([[2.0, -3.0], [-2.0, 1.0]])
# expected = np.zeros_like(utility)
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# expected[:, 0] = utility[:, 0] * posterior
# expected[:, 1] = utility[:, 1] * posterior
expected = posterior @ utility
# definitions for the axes
left, width = 0.05, 0.15
bottom, height = 0.05, 0.9
padding = 0.05
small_width = 0.1
large_padding = 0.07
left_space = left + small_width + large_padding
fig = plt.figure(figsize=(17, 4))
rect_prior = [left, bottom+0.05, small_width, height-0.1]
rect_likelihood = [left_space, bottom , width, height]
rect_posterior = [left_space + padding + width - 0.02, bottom+0.05 , small_width, height-0.1]
rect_utility = [left_space + padding + width + padding + small_width, bottom , width, height]
rect_expected = [left_space + padding + width + padding + small_width + padding + width, bottom+0.05 , width, height-0.1]
ax_likelihood = fig.add_axes(rect_likelihood)
ax_prior = fig.add_axes(rect_prior, sharey=ax_likelihood)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_likelihood)
ax_utility = fig.add_axes(rect_utility, sharey=ax_posterior)
ax_expected = fig.add_axes(rect_expected)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
expected_colormap = plt.cm.Wistia
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
ax_utility.matshow(utility, vmin=0., vmax=1., cmap='cool')
# ax_expected.matshow(expected, vmin=0., vmax=1., cmap='Wistia')
ax_expected.bar(0, expected[0], facecolor = expected_colormap(expected[0]))
ax_expected.bar(1, expected[1], facecolor = expected_colormap(expected[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected Utility plot details
ax_expected.set(ylim = [-2, 2], xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)', title = 'Expected utility', yticks=[])
# ax_expected.axis('off')
ax_expected.spines['left'].set_visible(False)
# ax_expected.set(xticks = [0, 1], xticklabels = ['left', 'right'],
# xlabel = 'action (a)',
# title = 'Expected utility')
# ax_expected.xaxis.set_ticks_position('bottom')
# ax_expected.spines['left'].set_visible(False)
# ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{expected[i,j]:.2f}"
# ax_expected.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, v, c, va='center', ha='center', color='black')
# # show values
# ind = np.arange(2)
# x,y = np.meshgrid(ind,ind)
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{P[i,j]:.2f}"
# ax.text(j,i, c, va='center', ha='center', color='white')
# for i in ind:
# v = marginal_x[i]
# c = f"{v:.2f}"
# ax_x.text(i, v +0.2, c, va='center', ha='center', color='black')
# v = marginal_y[i]
# c = f"{v:.2f}"
# ax_y.text(v+0.2, i, c, va='center', ha='center', color='black')
return fig_____no_output_____# @title Helper Functions
def compute_marginal(px, py, cor):
# calculate 2x2 joint probabilities given marginals p(x=1), p(y=1) and correlation
p11 = px*py + cor*np.sqrt(px*py*(1-px)*(1-py))
p01 = px - p11
p10 = py - p11
p00 = 1.0 - p11 - p01 - p10
return np.asarray([[p00, p01], [p10, p11]])
# test
# print(compute_marginal(0.4, 0.6, -0.8))
def compute_cor_range(px,py):
# Calculate the allowed range of correlation values given marginals p(x=1) and p(y=1)
def p11(corr):
return px*py + corr*np.sqrt(px*py*(1-px)*(1-py))
def p01(corr):
return px - p11(corr)
def p10(corr):
return py - p11(corr)
def p00(corr):
return 1.0 - p11(corr) - p01(corr) - p10(corr)
Cmax = min(fsolve(p01, 0.0), fsolve(p10, 0.0))
Cmin = max(fsolve(p11, 0.0), fsolve(p00, 0.0))
return Cmin, Cmax_____no_output_____
</code>
---
# Section 1: Gone Fishin'
_____no_output_____
<code>
#@title Video 2: Gone Fishin'
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='McALsTzb494', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
You were just introduced to the **binary hidden state problem** we are going to explore. You need to decide which side to fish on. We know fish like to school together. On different days the school of fish is either on the left or right side, but we don’t know what the case is today. We will represent our knowledge probabilistically, asking how to make a decision (where to decide the fish are or where to fish) and what to expect in terms of gains or losses. In the next two sections we will consider just the probability of where the fish might be and what you gain or lose by choosing where to fish.
Remember, you can either think of yourself as a scientist conducting an experiment or as a brain trying to make a decision. The Bayesian approach is the same!
_____no_output_____---
# Section 2: Deciding where to fish
_____no_output_____
<code>
#@title Video 3: Utility
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='xvIVZrqF_5s', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
You know the probability that the school of fish is on the left side of the dock today, $P(s = left)$. You also know the probability that it is on the right, $P(s = right)$, because these two probabilities must add up to 1. You need to decide where to fish. It may seem obvious - you could just fish on the side where the probability of the fish being there is higher! Unfortunately, decisions and actions are always a little more complicated. Deciding where to fish may be influenced by more than just the probability of the school of fish being there, as we saw with the potential issues of submarines and sunburn.
We quantify these factors numerically using **utility**, which describes the consequences of your actions: how much value you gain (or if negative, lose) given the state of the world ($s$) and the action you take ($a$). In our example, our utility can be summarized as:
| Utility: U(s,a) | a = left | a = right |
| ----------------- |----------|----------|
| s = left | 2 | -3 |
| s = right | -2 | 1 |
To use utility to choose an action, we calculate the **expected utility** of that action by weighing these utilities with the probability of that state occurring. This allows us to choose actions by taking probabilities of events into account: we don't care if the outcome of an action-state pair is a loss if the probability of that state is very low. We can formalize this as:
$$\text{Expected utility of action a} = \sum_{s}U(s,a)P(s) $$
In other words, the expected utility of an action a is the sum over possible states of the utility of that action and state times the probability of that state.
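As a quick numerical check of this formula (a minimal sketch using the utility table above and the demo's initial probabilities of 0.9 and 0.1):
<code>
import numpy as np

# Utility table U(s, a): rows are states (left, right), columns are actions (left, right)
utility = np.array([[ 2, -3],
                    [-2,  1]])
p_state = np.array([0.9, 0.1])  # p(s = left), p(s = right)

# Expected utility of each action: sum over states of U(s, a) * p(s)
expected_utility = p_state @ utility
print(expected_utility)  # [ 1.6 -2.6] -> fishing on the left has the higher expected utility
</code>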
_____no_output_____## Interactive Demo 2: Exploring the decision
Let's start to get a sense of how all this works.
Take a look at the interactive demo below. You can change the probability that the school of fish is on the left side ($p(s = left)$) using the slider. You will see the utility matrix and the corresponding expected utility of each action.
First, make sure you understand how the expected utility of each action is being computed from the probabilities and the utility values. In the initial state: the probability of the fish being on the left is 0.9 and on the right is 0.1. The expected utility of the action of fishing on the left is then $U(s = left,a = left)p(s = left) + U(s = right,a = left)p(s = right) = 2(0.9) + -2(0.1) = 1.6$.
For each of these scenarios, think and discuss first. Then use the demo to try out each and see if your action would have been correct (that is, if the expected value of that action is the highest).
1. You just arrived at the dock for the first time and have no sense of where the fish might be. So you guess that the probability of the school being on the left side is 0.5 (so the probability on the right side is also 0.5). Which side would you choose to fish on given our utility values?
2. You think that the probability of the school being on the left side is very low (0.1) and correspondingly high on the right side (0.9). Which side would you choose to fish on given our utility values?
3. What would you choose if the probability of the school being on the left side is slightly lower than on the right side (0.4 vs 0.6)?_____no_output_____
<code>
# @markdown Execute this cell to use the widget
ps_widget = widgets.FloatSlider(0.9, description='p(s = left)', min=0.0, max=1.0, step=0.01)
@widgets.interact(
ps = ps_widget,
)
def make_utility_plot(ps):
fig = plot_utility(ps)
plt.show(fig)
plt.close(fig)
return None_____no_output_____# to_remove explanation
# 1) With equal probabilities, the expected utility is higher on the left side,
# since that is the side without submarines, so you would choose to fish there.
# 2) If the probability that the fish is on the right side is high, you would
# choose to fish there. The high probability of fish being on the right far outweighs
# the slightly higher utilities from fishing on the left (as you are unlikely to gain these)
# 3) If the probability that the fish is on the right side is just slightly higher
#. than on the left, you would choose the left side as the expected utility is still
#. higher on the left. Note that in this situation, you are not simply choosing the
#. side with the higher probability - the utility really matters here for the decision_____no_output_____
</code>
In this section, you have seen that both the utility of the various state and action pairs and our knowledge of the probability of each state affect your decision. Importantly, we want our knowledge of the probability of each state to be as accurate as possible!
So how do we know these probabilities? We may have prior knowledge from years of fishing at the same dock. Over those years, we may have learned that the fish are more likely to be on the left side for example. We want to make sure this knowledge is as accurate as possible though. To do this, we want to collect more data, or take some more measurements! For the next few sections, we will focus on making our knowledge of the probability as accurate as possible, before coming back to using utility to make decisions._____no_output_____---
# Section 3: Likelihood of the fish being on either side
_____no_output_____
<code>
#@title Video 4: Likelihood
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='l4m0JzMWGio', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
First, we'll think about what it means to take a measurement (also often called an observation or just data) and what it tells you about what the hidden state may be. Specifically, we'll be looking at the **likelihood**, which is the probability of your measurement ($m$) given the hidden state ($s$): $P(m | s)$. Remember that in this case, the hidden state is which side of the dock the school of fish is on.
We will watch someone fish (for let's say 10 minutes) and our measurement is whether they catch a fish or not. We know something about what catching a fish means for the likelihood of the fish being on one side or the other. _____no_output_____## Think! 3: Guessing the location of the fish
Let's say we go to different dock from the one in the video. Here, there are different probabilities of catching fish given the state of the world. In this case, if they fish on the side of the dock where the fish are, they have a 70% chance of catching a fish. Otherwise, they catch a fish with only 20% probability.
The fisherperson is fishing on the left side.
1) Figure out each of the following:
- probability of catching a fish given that the school of fish is on the left side, $P(m = catch\text{ } fish | s = left )$
- probability of not catching a fish given that the school of fish is on the left side, $P(m = no \text{ } fish | s = left)$
- probability of catching a fish given that the school of fish is on the right side, $P(m = catch \text{ } fish | s = right)$
- probability of not catching a fish given that the school of fish is on the right side, $P(m = no \text{ } fish | s = right)$
2) If the fisherperson catches a fish, which side would you guess the school is on? Why?
3) If the fisherperson does not catch a fish, which side would you guess the school is on? Why?
_____no_output_____
<code>
#to_remove explanation
# 1) The fisherperson is on the left side so:
# - P(m = catch fish | s = left) = 0.7 because they have a 70% chance of catching
# a fish when on the same side as the school
# - P(m = no fish | s = left) = 0.3 because the probability of catching a fish
# and not catching a fish for a given state must add up to 1 as these
# are the only options: 1 - 0.7 = 0.3
# - P(m = catch fish | s = right) = 0.2
# - P(m = no fish | s = right) = 0.8
# 2) If the fisherperson catches a fish, you would guess the school of fish is on the
# left side. This is because the probability of catching a fish given that the
# school is on the left side (0.7) is higher than the probability given that
# the school is on the right side (0.2).
# 3) If the fisherperson does not catch a fish, you would guess the school of fish is on the
# right side. This is because the probability of not catching a fish given that the
# school is on the right side (0.8) is higher than the probability given that
# the school is on the left side (0.3)._____no_output_____
</code>
In the prior exercise, you guessed where the school of fish was based on the measurement you took (watching someone fish). You did this by choosing the state (side of school) that maximized the probability of the measurement. In other words, you estimated the state by maximizing the likelihood, i.e. by choosing the state with the highest probability of the measurement given the state, $P(m|s)$. This is called maximum likelihood estimation (MLE) and you've encountered it before during this course, in W1D3!
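As a minimal sketch of this idea, using the likelihood values from Think! 3 above, maximum likelihood estimation simply picks the state whose likelihood of the observed measurement is largest:
<code>
import numpy as np

# Likelihood p(m | s): rows are states (left, right), columns are measurements (catch fish, no fish)
likelihood = np.array([[0.7, 0.3],
                       [0.2, 0.8]])
states = ['left', 'right']

# Maximum likelihood estimate of the state for each possible measurement
print(states[np.argmax(likelihood[:, 0])])  # measurement = catch fish -> 'left'
print(states[np.argmax(likelihood[:, 1])])  # measurement = no fish   -> 'right'
</code>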
What if you had been going to this river for years and you knew that the fish were almost always on the left side? This would probably affect how you make your estimate - you would rely less on the single new measurement and more on your prior knowledge. This is the idea behind Bayesian inference, as we will see later in this tutorial!_____no_output_____---
# Section 4: Correlation and marginalization
_____no_output_____
<code>
#@title Video 5: Correlation and marginalization
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='vsDjtWi-BVo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video_____no_output_____
</code>
In this section, we are going to take a step back for a bit and think more generally about the amount of information shared between two random variables. We want to know how much information you gain when you observe one variable (take a measurement) if you know something about another. We will see that the fundamental concept is the same if we think about two attributes, for example the size and color of the fish, or the prior information and the likelihood._____no_output_____## Math Exercise 4: Computing marginal likelihoods
To understand the information between two variables, let's first consider the size and color of the fish.
| P(X, Y) | Y = silver | Y = gold |
| ----------------- |----------|----------|
| X = small | 0.4 | 0.2 |
| X = large | 0.1 | 0.3 |
The table above shows us the **joint probabilities**: the probability of both specific attributes occurring together. For example, the probability of a fish being small and silver ($P(X = small, Y = silver)$) is 0.4.
We want to know the probability of a fish being small regardless of color. Since the fish are either silver or gold, this would be the probability of a fish being small and silver plus the probability of a fish being small and gold. This is an example of marginalizing, or averaging out, the variable we are not interested in across the rows or columns. In math speak: $P(X = small) = \sum_y{P(X = small, Y)}$. This gives us a **marginal probability**, a probability of a variable outcome (in this case size), regardless of the other variables (in this case color).
Please complete the following math problems to further practice thinking through probabilities:
1. Calculate the probability of a fish being silver.
2. Calculate the probability of a fish being small, large, silver, or gold.
3. Calculate the probability of a fish being small OR gold. (Hint: $P(A\ \textrm{or}\ B) = P(A) + P(B) - P(A\ \textrm{and}\ B)$)
_____no_output_____
<code>
# to_remove explanation
# 1) The probability of a fish being silver is the joint probability of it being
#. small and silver plus the joint probability of it being large and silver:
#
#. P(Y = silver) = P(X = small, Y = silver) + P(X = large, Y = silver)
#. = 0.4 + 0.1
#. = 0.5
# 2) This is all the possibilities as in this scenario, our fish can only be small
#. or large, silver or gold. So the probability is 1 - the fish has to be at
#. least one of these.
#. 3) First we compute the marginal probabilities
#. P(X = small) = P(X = small, Y = silver) + P(X = small, Y = gold) = 0.6
#. P(Y = gold) = P(X = small, Y = gold) + P(X = large, Y = gold) = 0.5
#. We already know the joint probability: P(X = small, Y = gold) = 0.2
#. We can now use the given formula:
#. P( X = small or Y = gold) = P(X = small) + P(Y = gold) - P(X = small, Y = gold)
#. = 0.6 + 0.5 - 0.2
#. = 0.9_____no_output_____
</code>
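As a quick numerical check of the marginalization above (a minimal sketch using the joint probability table from this exercise):
<code>
import numpy as np

# Joint probability table P(X, Y): rows are sizes (small, large), columns are colors (silver, gold)
P = np.array([[0.4, 0.2],
              [0.1, 0.3]])

p_size = P.sum(axis=1)    # marginal over color: [P(small), P(large)] = [0.6, 0.4]
p_color = P.sum(axis=0)   # marginal over size:  [P(silver), P(gold)] = [0.5, 0.5]

# P(small or gold) = P(small) + P(gold) - P(small and gold)
p_small_or_gold = p_size[0] + p_color[1] - P[0, 1]
print(p_size, p_color, p_small_or_gold)  # [0.6 0.4] [0.5 0.5] 0.9
</code>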
## Think! 4: Covarying probability distributions
The relationship between the marginal probabilities and the joint probabilities is determined by the correlation between the two random variables - a normalized measure of how much the variables covary. We can also think of this as gaining some information about one of the variables when we observe a measurement from the other. We will think about this more formally in Tutorial 2.
Here, we want to think about how the correlation between size and color of these fish changes how much information we gain about one attribute based on the other. See Bonus Section 1 for the formula for correlation.
Use the widget below and answer the following questions:
1. When the correlation is zero, $\rho = 0$, what does the distribution of size tell you about color?
2. Set $\rho$ to something small. As you change the probability of golden fish, what happens to the ratio of size probabilities? Set $\rho$ larger (can be negative). Can you explain the pattern of changes in the probabilities of size as you change the probability of golden fish?
3. Set the probability of golden fish and of large fish to around 65%. As the correlation goes towards 1, how often will you see silver large fish?
4. What is increasing the (absolute) correlation telling you about how likely you are to see one of the properties if you see a fish with the other?
_____no_output_____
<code>
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
gs = GridspecLayout(2,2)
cor_widget = widgets.FloatSlider(0.0, description='ρ', min=-1, max=1, step=0.01)
px_widget = widgets.FloatSlider(0.5, description='p(color=golden)', min=0.01, max=0.99, step=0.01, style=style)
py_widget = widgets.FloatSlider(0.5, description='p(size=large)', min=0.01, max=0.99, step=0.01, style=style)
gs[0,0] = cor_widget
gs[0,1] = px_widget
gs[1,0] = py_widget
@widgets.interact(
px=px_widget,
py=py_widget,
cor=cor_widget,
)
def make_corr_plot(px, py, cor):
Cmin, Cmax = compute_cor_range(px, py) #allow correlation values
cor_widget.min, cor_widget.max = Cmin+0.01, Cmax-0.01
if cor_widget.value > Cmax:
cor_widget.value = Cmax
if cor_widget.value < Cmin:
cor_widget.value = Cmin
cor = cor_widget.value
P = compute_marginal(px,py,cor)
# print(P)
fig = plot_joint_probs(P)
plt.show(fig)
plt.close(fig)
return None
# gs[1,1] = make_corr_plot()_____no_output_____# to_remove explanation
#' 1. When the correlation is zero, the two properties are completely independent.
#' This means you don't gain any information about one variable from observing the other.
#' Importantly, the marginal distribution of one variable is therefore independent of the other.
#' 2. The correlation controls the distribution of probability across the joint probability table.
#' The higher the correlation, the more the probabilities are restricted by the fact that both rows
#' and columns need to sum to one! While the marginal probabilities show the relative weighting, the
#' absolute probabilities for one quality will become more dependent on the other as the correlation
#' goes to 1 or -1.
#' 3. The correlation will control how much probability mass is located on the diagonals. As the
#'    correlation goes to 1 (or -1), the probability of seeing one of the two pairings has to go
#'    towards zero!
#' 4. If we think about what information we gain by observing one quality, the intuition from (3.) tells
#' us that we know more (have more information) about the other quality as a function of the correlation._____no_output_____
</code>
We have just seen how two random variables can be more or less independent. The more correlated, the less independent, and the more shared information. We also learned that we can marginalize to determine the marginal likelihood of a hidden state or to find the marginal probability distribution of two random variables. We are going to now complete our journey towards being fully Bayesian!_____no_output_____---
# Section 5: Bayes' Rule and the Posterior_____no_output_____Marginalization is going to be used to combine our prior knowledge, which we call the **prior**, and our new information from a measurement, the **likelihood**. Only in this case, the information we gain about the hidden state we are interested in, where the fish are, is based on the relationship between the probabilities of the measurement and our prior.
We can now calculate the full posterior distribution for the hidden state ($s$) using Bayes' Rule. As we've seen, the posterior is proportional to the prior times the likelihood. This means that the posterior probability of the hidden state ($s$) given a measurement ($m$) is proportional to the likelihood of the measurement given the state times the prior probability of that state (the marginal likelihood):
$$ P(s | m) \propto P(m | s) P(s) $$
We say proportional to instead of equal because we need to normalize to produce a full probability distribution:
$$ P(s | m) = \frac{P(m | s) P(s)}{P(m)} $$
Normalizing by this $P(m)$ means that our posterior is a complete probability distribution that sums or integrates to 1 appropriately. We can now use this new, complete probability distribution for any future inference or decisions we like! In fact, as we will see tomorrow, we can use it as a new prior! Finally, we often call this probability distribution our beliefs over the hidden states, to emphasize that it is our subjective knowledge about the hidden state.
For many complicated cases, like those we might be using to model behavioral or brain inferences, the normalization term can be intractable or extremely complex to calculate. We can be careful to choose probability distributions where we can analytically calculate the posterior probability or where numerical approximation is reliable. Better yet, we sometimes don't need to bother with this normalization! The normalization term, $P(m)$, is the probability of the measurement. This does not depend on state, so it is essentially a constant we can often ignore. We can compare the unnormalized posterior distribution values for different states because how they relate to each other is unchanged when divided by the same constant. We will see how to do this to compare evidence for different hypotheses tomorrow. (It's also used to compare the likelihood of models fit using maximum likelihood estimation, as you did in W1D5.)
In this relatively simple example, we can compute the marginal probability $P(m)$ easily by using:
$$P(m) = \sum_s P(m | s) P(s)$$
We can then normalize so that we deal with the full posterior distribution.
_____no_output_____## Math Exercise 5: Calculating a posterior probability
Our prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish given they fish on the same side as the school was 50%. Otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is:
| Likelihood: p(m \| s) | m = catch fish | m = no fish |
| ----------------- |----------|----------|
| s = left | 0.5 | 0.5 |
| s = right | 0.1 | 0.9 |
Calculate the posterior probability (on paper) that:
1. The school is on the left if the fisherperson catches a fish: $p(s = left | m = catch fish)$ (hint: normalize by computing $p(m = catch fish)$)
2. The school is on the right if the fisherperson does not catch a fish: $p(s = right | m = no fish)$_____no_output_____
<code>
# to_remove explanation
# 1. Using Bayes rule, we know that P(s = left | m = catch fish) = P(m = catch fish | s = left)P(s = left) / P(m = catch fish)
#. Let's first compute P(m = catch fish):
#. P(m = catch fish) = P(m = catch fish | s = left)P(s = left) + P(m = catch fish | s = right)P(s = right)
# = 0.5 * 0.3 + .1*.7
# = 0.22
#. Now we can plug in all parts of Bayes rule:
# P(s = left | m = catch fish) = P(m = catch fish | s = left)P(s = left) / P(m = catch fish)
# = 0.5*0.3/0.22
# = 0.68
# 2. Using Bayes rule, we know that P(s = right | m = no fish) = P(m = no fish | s = right)P(s = right) / P(m = no fish)
#. Let's first compute P(m = no fish):
#. P(m = no fish) = P(m = no fish | s = left)P(s = left) + P(m = no fish | s = right)P(s = right)
# = 0.5 * 0.3 + .9*.7
# = 0.78
#. Now we can plug in all parts of Bayes rule:
# P(s = right | m = no fish) = P(m = no fish | s = right)P(s = right) / P(m = no fish)
# = 0.9*0.7/0.78
# = 0.81_____no_output_____
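# A quick numerical check of the answers above (a minimal sketch; assumes numpy is
# imported as np, as elsewhere in this notebook; variable names are just illustrative).
prior_check = np.array([0.3, 0.7])                      # [p(s=left), p(s=right)]
likelihood_check = np.array([[0.5, 0.5], [0.1, 0.9]])   # rows: s=left, s=right; cols: catch, no fish
unnorm = likelihood_check * prior_check[:, None]        # p(m | s) p(s)
p_m = unnorm.sum(axis=0)                                # p(m) for each measurement
posterior_check = unnorm / p_m
print(posterior_check[0, 0])  # p(s=left | m=catch fish), ~0.68
print(posterior_check[1, 1])  # p(s=right | m=no fish),  ~0.81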
</code>
## Coding Exercise 5: Computing Posteriors
Let's implement our above math to be able to compute posteriors for different priors and likelihoods.
As before, our prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish given they fish on the same side as the school was 50%. Otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is:
| Likelihood: p(m \| s) | m = catch fish | m = no fish |
| ----------------- |----------|----------|
| s = left | 0.5 | 0.5 |
| s = right | 0.1 | 0.9 |
We want our full posterior to take the same 2 by 2 form. Make sure the outputs match your math answers!
_____no_output_____
<code>
def compute_posterior(likelihood, prior):
""" Use Bayes' Rule to compute posterior from likelihood and prior
Args:
likelihood (ndarray): i x j array with likelihood probabilities where i is
number of state options, j is number of measurement options
prior (ndarray): i x 1 array with prior probability of each state
Returns:
ndarray: i x j array with posterior probabilities where i is
number of state options, j is number of measurement options
"""
#################################################
## TODO for students ##
# Fill out function and remove
raise NotImplementedError("Student exercise: implement compute_posterior")
#################################################
# Compute unnormalized posterior (likelihood times prior)
posterior = ... # first row is s = left, second row is s = right
# Compute p(m)
p_m = np.sum(posterior, axis = 0)
# Normalize posterior (divide elements by p_m)
posterior /= ...
return posterior
# Make prior
prior = np.array([0.3, 0.7]).reshape((2, 1)) # first row is s = left, second row is s = right
# Make likelihood
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]]) # first row is s = left, second row is s = right
# Compute posterior
posterior = compute_posterior(likelihood, prior)
# Visualize
with plt.xkcd():
plot_prior_likelihood_posterior(prior, likelihood, posterior)_____no_output_____# to_remove solution
def compute_posterior(likelihood, prior):
""" Use Bayes' Rule to compute posterior from likelihood and prior
Args:
likelihood (ndarray): i x j array with likelihood probabilities where i is
number of state options, j is number of measurement options
prior (ndarray): i x 1 array with prior probability of each state
Returns:
ndarray: i x j array with posterior probabilities where i is
number of state options, j is number of measurement options
"""
# Compute unnormalized posterior (likelihood times prior)
posterior = likelihood * prior # first row is s = left, second row is s = right
# Compute p(m)
p_m = np.sum(posterior, axis = 0)
# Normalize posterior (divide elements by p_m)
posterior /= p_m
return posterior
# Make prior
prior = np.array([0.3, 0.7]).reshape((2, 1)) # first row is s = left, second row is s = right
# Make likelihood
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]]) # first row is s = left, second row is s = right
# Compute posterior
posterior = compute_posterior(likelihood, prior)
# Visualize
with plt.xkcd():
plot_prior_likelihood_posterior(prior, likelihood, posterior)_____no_output_____
</code>
## Interactive Demo 5: What affects the posterior?
Now that we understand the implementation of *Bayes' rule*, let's vary the parameters of the prior and likelihood to see how each affects the posterior.
In the demo below, you can change the prior by playing with the slider for $p( s = left)$. You can also change the likelihood by changing the probability of catching a fish given that the school is on the left and the probability of catching a fish given that the school is on the right. The fisherperson you are observing is fishing on the left.
1. Keeping the likelihood constant, when does the prior have the strongest influence over the posterior? Meaning, when does the posterior look most like the prior no matter whether a fish was caught or not?
2. Keeping the likelihood constant, when does the prior exert the weakest influence? Meaning, when does the posterior look least like the prior and depend most on whether a fish was caught or not?
3. Set the prior probability of the state = left to 0.6 and play with the likelihood. When does the likelihood exert the most influence over the posterior?_____no_output_____
<code>
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s = left)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_plot(ps,p_a_s1,p_a_s0,m_right):
fig = plot_prior_likelihood(ps,p_a_s1,p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None_____no_output_____# to_remove explanation
# 1). The prior exerts a strong influence over the posterior when it is very informative: when
#.  the probability of the school being on one side or the other is close to 1. If the prior that the fish are
#.  on the left side is very high (like 0.9), the posterior probability of the state being left is
#.  high regardless of the measurement.
# 2). The prior does not exert a strong influence when it is not informative: when the probabilities
#. of the school being on the left vs right are similar (both are 0.5 for example). In this case,
#. the posterior is more driven by the collected data (the measurement) and more closely resembles
#. the likelihood.
#. 3) Similarly to the prior, the likelihood exerts the most influence when it is informative: when catching
#.  a fish tells you a lot of information about which state is likely. For example, if the probability of
#.  catching a fish when the school is on the left is 0 (p(fish | s = left) = 0) and the probability of
#.  catching a fish when the school is on the right is 1, the prior does not affect the posterior at all.
#.  The measurement tells you the hidden state completely._____no_output_____
</code>
# Section 6: Making Bayesian fishing decisions
We will explore how to consider the expected utility of an action based on our belief (the posterior distribution) about where we think the fish are. Now we have all the components of a Bayesian decision: our prior information, the likelihood given a measurement, the posterior distribution (belief), and our utility (the gains and losses). This allows us to consider the relationship between the true value of the hidden state, $s$, and what we *expect* to get if we take action, $a$, based on our belief!
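Concretely, writing the utility of taking action $a$ when the true state is $s$ as $U(s, a)$, the expected utility of an action given a measurement is the posterior-weighted sum over states (this is the standard decision-theoretic definition; the specific utility values are set inside the demo below):
$$
\mathbb{E}[U(a) \mid m] = \sum_{s} p(s \mid m)\, U(s, a),
$$
and the Bayesian decision is to pick the action with the highest expected utility.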
Let's use the following widget to think about the relationship between these probability distributions and utility function._____no_output_____## Think! 6: What is more important, the probabilities or the utilities?
We are now going to put everything we've learned together to gain some intuitions for how each of the elements that goes into a Bayesian decision comes together. Remember, the common assumption in neuroscience, psychology, economics, ecology, etc. is that we (humans and animals) are trying to maximize our expected utility.
1. Can you find a situation where the expected utility is the same for both actions?
2. What is more important for determining the expected utility: the prior or a new measurement (the likelihood)?
3. Why is this a normative model?
4. Can you think of ways in which this model would need to be extended to describe human or animal behavior?_____no_output_____
<code>
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_utility_plot(ps, p_a_s1, p_a_s0,m_right):
fig = plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None_____no_output_____# to_remove explanation
#' 1. There are actually many (infinite) combinations that can produce the same
#. expected utility for both actions: but the posterior probabilities will always
# have to balance out the differences in the utility function. So, what is
# important is that for a given utility function, there will be some 'point
# of indifference'
#' 2. What matters is the relative information: if the prior is close to 50/50,
#     then the likelihood has more influence; if the likelihood is 50/50 given a
#     measurement (the measurement is uninformative), the prior is more important.
#     But the critical insight from Bayes' Rule and the Bayesian approach is that what
#     matters is the relative information you gain from a measurement, and that
#     you can use all of this information for your decision.
#' 3. The model gives us a very precise way to think about how we *should* combine
# information and how we *should* act, GIVEN some assumption about our goals.
# In this case, if we assume we are trying to maximize expected utility--we can
# state what an animal or subject should do.
#' 4. There are lots of possible extensions. Humans may not always try to maximize
#     utility; humans and animals might not be able to calculate or represent probability
#     distributions exactly; the utility function might be more complicated; etc._____no_output_____
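# A minimal numeric sketch of the expected-utility comparison discussed above (assumes
# numpy as np). The utility values below are illustrative placeholders, not the widget's values.
posterior_example = np.array([0.68, 0.32])        # [p(s = left | m), p(s = right | m)]
utility_example = np.array([[ 2., -3.],           # rows: s = left, s = right
                            [-3.,  2.]])          # cols: a = fish left, a = fish right
expected_utility = posterior_example @ utility_example   # one value per action
print(expected_utility)   # choose the action with the larger expected utility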
</code>
---
# Summary
In this tutorial, you learned about combining prior information with new measurements to update your knowledge using Bayes' Rule, in the context of a fishing problem.
Specifically, we covered:
* That the likelihood is the probability of the measurement given some hidden state
* That how the prior and likelihood interact to create the posterior, the probability of the hidden state given a measurement, depends on how they covary
* That utility is the gain from each action and state pair, and the expected utility of an action is the sum over all states of the utility of that action and state pair, weighted by the probability of that state. You can then choose the action with the highest expected utility.
_____no_output_____---
# Bonus_____no_output_____## Bonus Section 1: Correlation Formula
To understand the way we calculate the correlation, we need to review the definition of covariance and correlation.
Covariance:
$$
cov(X,Y) = \sigma_{XY} = E[(X - \mu_{x})(Y - \mu_{y})] = E[XY] - \mu_{x}\mu_{y}
$$
Correlation:
$$
\rho_{XY} = \frac{cov(X,Y)}{\sqrt{V(X)V(Y)}} = \frac{\sigma_{XY}}{\sigma_{X}\sigma_{Y}}
$$_____no_output_____
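As a quick numerical sanity check of these definitions (a minimal sketch; the arrays below are synthetic and the variable names are illustrative), the hand-written formulas agree with `np.cov` and `np.corrcoef`:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(scale=0.5, size=1000)

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))         # E[(X - mu_x)(Y - mu_y)]
rho_xy = cov_xy / (x.std() * y.std())                     # cov / (sigma_x * sigma_y)

print(np.isclose(cov_xy, np.cov(x, y, bias=True)[0, 1]))  # matches np.cov
print(np.isclose(rho_xy, np.corrcoef(x, y)[0, 1]))        # matches np.corrcoef
```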
| {
"repository": "bgalbraith/course-content",
"path": "tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb",
"matched_keywords": [
"neuroscience",
"ecology"
],
"stars": null,
"size": 75957,
"hexsha": "d03cac6a8ba1abd88bb936af9bca459991ce7d42",
"max_line_length": 925,
"avg_line_length": 46.6853103872,
"alphanum_fraction": 0.5879510776
} |
# Notebook from kcbhamu/kaldo
Path: docs/docsource/theory.ipynb
## Introduction
Understanding heat transport in semiconductors and insulators is of fundamental importance because of its technological impact in electronics and renewable energy harvesting and conversion.
Anharmonic Lattice Dynamics provides a powerful framework for the description of heat transport at the nanoscale. One of the advantages of this method is that it naturally includes quantum effects due to atoms vibrations, which are needed to compute thermal properties of semiconductors widely use in nanotechnology, like Silicon and Carbon, even at room temperature.
While heat transport in amorphous and crystalline semiconductors has a different microscopic origin, a unified approach to simulate both crystals and glasses has been devised.
Here we introduce a unified workflow, which implements both the Boltzmann Transport equation (BTE) and the Quasi Harmonic Green-Kubo (QHGK) methods. We discuss how the theory can be optimized to exploit modern parallel architectures, and how it is implemented in kALDo: a versatile and scalable open-source software to compute phonon transport in solids.
## Theory
In semiconductors, electronic and vibrational dynamics often occur over different time scales, and can thus be decoupled using the Born Oppenheimer approximation. Under this assumption, the potential $\phi$ of a system made of $N_{atoms}$ atoms, is a function of all the $x_{i\alpha}$ atomic positions, where $i$ and $\alpha$ refer to the atomic and Cartesian indices, respectively. Near thermal equilibrium, the potential energy can be Taylor expanded in the atomic displacements, $\mathbf{u}=\mathbf x-\mathbf{x}_{\rm equilibrium}$,
$$
\phi(\{x_{i\alpha}\})=\phi_0 +
\sum_{i\alpha}\phi^{\prime}_{i\alpha }u_{i\alpha}
+\frac{1}{2}
\sum_{i\alpha i'\alpha'}
\phi^{\prime\prime}_{i\alpha i'\alpha '}u_{i\alpha} u_{i'\alpha'}+
$$
$$
+
\frac{1}{3!}\sum_{i\alpha i'\alpha 'i''\alpha ''}
\phi^{\prime\prime\prime}_{i\alpha i'\alpha 'i''\alpha ''} u_{i\alpha }u_{i'\alpha '} u_{i''\alpha ''} + \dots,
$$
where
$$
\phi^{\prime\prime}_{i\alpha i'\alpha '}=\frac{\partial^{2} \phi}{\partial u_{i\alpha } \partial u_{i'\alpha '} },\qquad
\phi^{\prime\prime\prime}_{i\alpha i'\alpha 'i''\alpha ''}=\frac{\partial^{3} \phi}{\partial u_{i\alpha } \partial u_{i'\alpha '} \partial u_{i''\alpha ''}},
$$
are the second and third order interatomic force constants (IFC). The term $\phi_0$ can be discarded, and the forces $F = - \phi^{\prime}$ are zero at equilibrium.
The IFCs can be evaluated by finite difference, which consists in calculating the difference between the forces acting on the system when one of the atoms is displaced by a small finite shift along a Cartesian direction. The second and third order IFCs need respectively, $2N_{atoms}$, and $4N_{atoms}^2$ forces calculations. In crystals, this amount can be reduced exploiting the spatial symmetries of the system, or adopting a compressed sensing approach. In the framework of DFT, it is also possible and often convenient to compute IFCs using perturbation theory.
The dynamical matrix is the second order IFC rescaled by the masses, $D_{i\alpha i'\alpha'}=\phi^{\prime\prime}_{i\alpha i'\alpha'}/\sqrt{m_im_{i'}}$. It is diagonal in the phonon basis
$$
\sum_{i'\alpha'} D_{i\alpha i'\alpha'}\eta_{i'\alpha'\mu} =\eta_{i\alpha\mu} \omega_\mu^2
$$
and $\omega_\mu/(2\pi)$ are the frequencies of the normal modes of the system.
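For a small system, this diagonalization is just a symmetric eigenvalue problem. A minimal NumPy sketch with placeholder force constants (illustrative arrays, not kALDo's API):
```python
import numpy as np

n_atoms = 4
# Second-order IFCs phi''_{i alpha, i' alpha'}; a random symmetric stand-in for illustration
phi2 = np.random.rand(n_atoms, 3, n_atoms, 3)
phi2 = 0.5 * (phi2 + phi2.transpose(2, 3, 0, 1))
masses = np.ones(n_atoms)

# Mass-rescaled dynamical matrix D = phi'' / sqrt(m_i m_i')
D = phi2 / np.sqrt(masses[:, None, None, None] * masses[None, None, :, None])
D = D.reshape(3 * n_atoms, 3 * n_atoms)

omega2, eta = np.linalg.eigh(D)   # eigenvalues omega_mu^2, eigenvector columns eta
```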
For crystals, where there is long range order due to the periodicity, the dimensionality of the problem can be reduced. The Fourier transfom maps the large direct space onto a compact volume in the reciprocal space: the Brillouin zone. More precisely we adopt a supercell approach, where we calculate the dynamical matrix on $N_{\rm replicas}$ replicas of a unit cell of $N_{\rm unit}$ atoms, at positions $\mathbf R_l$, and calculate
$$
D_{i \alpha k i' \alpha'}=\sum_l \chi_{kl} D_{i \alpha l i' \alpha'},\quad \chi_{kl} = \mathrm{e}^{-i \mathbf{q_k}\cdot \mathbf{R}_{l} },
$$
where $\mathbf q_k$ is a grid of size $N_k$ indexed by $k$ and the eigenvalue equation becomes
$$
\sum_{i'\alpha'} D_{i \alpha k i' \alpha'} \eta_{i' \alpha'k s}=\omega_{k s}^{2} \eta_{i \alpha k s }.
$$
which now depends on the quasi-momentum index, $k$, and the phonon mode $s$.
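In code, the transform over replicas is a single contraction with the phase factors $\chi_{kl}$; a minimal sketch with placeholder shapes (not kALDo's implementation):
```python
import numpy as np

n_unit, n_replicas = 2, 27
D_real = np.random.rand(n_unit, 3, n_replicas, n_unit, 3)  # D_{i alpha, l i' alpha'}
R = np.random.rand(n_replicas, 3)                          # replica lattice vectors R_l
q = np.array([0.1, 0.0, 0.0])                              # one q-point of the grid

chi = np.exp(-1j * (R @ q))                                # chi_{kl} for this q-point
D_q = np.einsum('l,ialjb->iajb', chi, D_real).reshape(3 * n_unit, 3 * n_unit)
# For a physical (Hermitian) D_q, np.linalg.eigh(D_q) returns omega_{ks}^2 and eta_{ks}
```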
### Boltzmann Transport Equation
At finite temperature $T$, the Bose Einstein statistic is the quantum distribution for atomic vibrations
$$
n_{\mu}=n(\omega_{\mu})=\frac{1}{e^{\frac{\hbar\omega_{\mu}}{k_B T}}-1}
$$
where $k_B$ is the Boltzmann constant and we use $\mu =(k,s)$.
We consider a small temperature gradient applied along the $\alpha$-axis of a crystalline material. If the phonons population depends on the position only through the temperature, $\frac{\partial n_{\mu\alpha}}{\partial x_\alpha} = \frac{\partial n_{\mu\alpha}}{\partial T}\nabla_\alpha T$, we can Taylor expand it
$$
\tilde n_{\mu\alpha} \simeq n_\mu + \lambda_{\mu\alpha} \frac{\partial n_\mu}{\partial x_\alpha} \simeq n_\mu + \psi_{\mu\alpha}\nabla_\alpha T
$$
with $\psi_{\mu\alpha}=\lambda_{\mu\alpha} \frac{\partial n_\mu}{\partial T}$, where $\lambda_{\mu\alpha}$ is the phonons mean free path.
Being quantum quasi-particles, phonons have a well-defined group velocity, which, for the acoustic modes in the long wavelength limit, corresponds to the speed of sound in the material,
$$
v_{ ks\alpha}=\frac{\partial \omega_{k s}}{\partial {q_{k\alpha}}} = \frac{1}{2\omega_{ks}}\sum_{i\beta l i'\beta'}
i R_{l \alpha} D_{i\beta li'\beta'}\chi_{kl}
\eta_{ks i\beta}\eta_{ksi'\beta'}
$$
and the last equality is obtained by applying the derivative with respect to $\mathbf{q}_k$ directly to the eigenvalue equation above.
The heat current per mode is written in terms of the phonon energy $\hbar \omega$, velocity $v$, and out-of-equilibrium phonons population, $\tilde n$:
$$
j_{\mu\alpha'} =\sum_\alpha \hbar \omega_\mu v_{\mu\alpha'} (\tilde n_{\mu\alpha} - n_{\mu})\simeq- \sum_\alpha c_\mu v_{\mu\alpha'} \mathbf{\lambda}_{\mu\alpha} \nabla_\alpha T .
$$
As we deal with extended systems, we can assume heat transport in the diffusive regime, and we can use Fourier's law
$$
J_{\alpha}=-\sum_{\alpha'}\kappa_{\alpha\alpha'} \nabla_{\alpha'} T,
$$
where the heat current is the sum of the contribution from each phonon mode: $J_\alpha = 1/(N_k V)\sum_\mu j_{\mu\alpha}$.
The thermal conductivity then results:
$$
\kappa_{\alpha \alpha'}=\frac{1}{ V N_k} \sum_{\mu} c_\mu v_{\mu\alpha} \lambda_{\mu\alpha'},
$$
where we defined the heat capacity per mode
$$
c_\mu=\hbar \omega_\mu \frac{\partial n_\mu}{\partial T},
$$
which is connected to the total heat capacity through $C = \sum_\mu c_\mu /(N_k V)$.
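The Bose-Einstein occupation and the modal heat capacity translate directly into a few lines of NumPy; a minimal sketch in SI units with angular frequencies (not kALDo's internal implementation):
```python
import numpy as np

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K

def bose_einstein(omega, temperature):
    """Occupation n(omega) for angular frequencies omega (rad/s) at temperature T (K)."""
    return 1.0 / np.expm1(hbar * omega / (k_B * temperature))

def modal_heat_capacity(omega, temperature):
    """c_mu = hbar * omega * dn/dT, with dn/dT = (x / T) exp(x) n^2 and x = hbar*omega/(k_B*T)."""
    x = hbar * omega / (k_B * temperature)
    n = bose_einstein(omega, temperature)
    return hbar * omega * (x / temperature) * np.exp(x) * n**2
```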
We can now introduce the BTE, which combines the kinetic theory of gases with collective phonons vibrations:
$$
{\mathbf{v}}_{\mu} \cdot {\boldsymbol{\nabla}} T \frac{\partial n_{\mu}}{\partial T}=\left.\frac{\partial n_{\mu}}{\partial t}\right|_{\text {scatt}},
$$
where the scattering term, in the linearized form is
$$
\left.\frac{\partial n_{\mu}}{\partial t}\right|_{\text {scatt}}=
$$
$$
\frac{\nabla_\alpha T}{\omega_\mu N_k}\sum_{\mu^{\prime} \mu^{\prime \prime}}^{+} \Gamma_{\mu \mu^{\prime} \mu^{\prime \prime}}^{+}
\left(\omega_\mu\mathbf{\psi}_{\mu\alpha}
+\omega_{\mu^{\prime}}\mathbf{\psi}_{\mu^{\prime}\alpha}
-\omega_{\mu^{\prime \prime}} \mathbf{\psi}_{\mu^{\prime \prime}\alpha}\right)
+
$$
$$
+\frac{\nabla_\alpha T}{\omega_\mu N_k}\sum_{\mu^{\prime} \mu^{\prime \prime}}^{-} \frac{1}{2} \Gamma_{\mu \mu^{\prime} \mu^{\prime \prime}}^{-}
\left(\omega_\mu\mathbf{\psi}_{\mu\alpha}
-\omega_{\mu^{\prime}} \mathbf{\psi}_{\mu^{\prime}\alpha}
-\omega_{\mu^{\prime \prime}} \mathbf{\psi}_{\mu^{\prime \prime}\alpha}\right) .
$$
$\Gamma^{+}_{\mu\mu'\mu''}$ and $\Gamma^{-}_{\mu\mu'\mu''}$ are the scattering rates for three-phonon scattering processes, and they correspond to the events of phonon annihilation, $\mu, \mu'\rightarrow\mu''$, and phonon creation, $\mu \rightarrow\mu',\mu''$:
$$
\Gamma_{\mu \mu^{\prime} \mu^{\prime \prime}}^{\pm} =\frac{\hbar \pi}{8} \frac{g_{\mu\mu'\mu''}^{\pm}}{\omega_{\mu} \omega_{\mu'} \omega_{\mu''}}\left|\phi_{\mu \mu^{\prime} \mu^{\prime \prime}}^{\pm}\right|^{2},
$$
and the projection of the potentials on the phonon modes are given by
$$
\phi^\pm
_{ksk's'k'' s''}=
\sum_{il'i'l''i''}
\frac{
\phi_{il'i'l''i''}}
{\sqrt{m_{i}m_{i'}m_{i''}}}
\eta_{i ks}\eta^{\pm}_{i'k' s'}
\eta^*_{i''k''s''}\chi^\pm_{k'l'}\chi^*_{k''l''}
$$
with $\eta^+=\eta$, $\chi^+=\chi$ and $\eta^-=\eta^*$, $\chi^-=\chi^*$.
The phase space volume $g^\pm_{\mu\mu^\prime\mu^{\prime\prime}}$ in the previous equation are defined as
$$
g^+_{\mu\mu^\prime\mu^{\prime\prime}} = (n_{\mu'}-n_{\mu''})
\delta^+_{\mu\mu^\prime\mu^{\prime\prime}}
$$
$$
g^-_{\mu\mu^\prime\mu^{\prime\prime}} = (1 + n_{\mu'}+n_{\mu''})
\delta^-_{\mu\mu^\prime\mu^{\prime\prime}},
$$
and include the $\delta$ for the conservation of the energy and momentum in three-phonons scattering processes,
$$
\delta_{ks k's' k''s''}^{\pm}=
\delta_{\mathbf q_{k}\pm\mathbf q_{k'}-\mathbf q_{k''}, \mathbf Q}
\delta\left(\omega_{ks}\pm\omega_{k's'}-\omega_{k''s''}\right),
$$
with $\mathbf Q$ a reciprocal lattice vector. Finally, the normalized phase-space per mode, $g_\mu=\frac{1}{N}\sum_{\mu'\mu''}g_{\mu\mu'\mu''}$, provides useful information about the weight of a specific mode in the anharmonic scattering processes.
In order to calculate the conductivity, we express the mean free path in terms of the 3-phonon scattering rates
$$
v_{\mu\alpha} = \sum_{\mu'}\tilde \Gamma_{\mu\mu' }\lambda_{\mu'\alpha} = \sum_{\mu'}(\delta_{\mu\mu'}\Gamma^0_\mu + \Gamma^{1}_{\mu\mu'})\lambda_{\mu'\alpha},
$$
where we introduced
$$
\Gamma^{0}_\mu=\sum_{\mu'\mu''}(\Gamma^+_{\mu\mu'\mu''} + \Gamma^-_{\mu\mu'\mu''} ),
$$
and
$$
\Gamma^{1}_{\mu\mu'}=
\frac{\omega_{\mu'}}{\omega_\mu}
\sum_{\mu''}(\Gamma^+_{\mu\mu'\mu''}
-\Gamma^+_{\mu\mu''\mu'}
-\Gamma^-_{\mu\mu'\mu''}
-\Gamma^-_{\mu\mu''\mu'}
).
$$
In RTA, the off-diagonal terms are ignored, $\Gamma^{1}_{\mu\mu'}=0$, and the conductivity is
$$
\kappa_{\alpha\alpha'} =\frac{1}{N_kV} \sum_{\mu}c_\mu v_{\mu\alpha}{\lambda_{\mu\alpha'}}
=\frac{1}{N_kV} \sum_\mu c_\mu v_{\mu\alpha} {\tau_\mu}{v_{\mu\alpha'}},
$$
where $\tau_\mu=1/(2\Gamma_{\mu}^0)$ corresponds to the phonon lifetime calculated using the Fermi Golden Rule.
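Given arrays of per-mode heat capacities, group velocities, and lifetimes (however they were obtained), the RTA conductivity above is a single tensor contraction. A minimal NumPy sketch with placeholder arrays:
```python
import numpy as np

n_modes = 1000
c = np.random.rand(n_modes)        # modal heat capacities c_mu (placeholders)
v = np.random.rand(n_modes, 3)     # group velocities v_{mu alpha}
tau = np.random.rand(n_modes)      # lifetimes tau_mu
n_k, volume = 19**3, 40.0          # k-point count and unit-cell volume (placeholders)

# kappa_{alpha alpha'} = 1/(N_k V) sum_mu c_mu v_{mu alpha} tau_mu v_{mu alpha'}
kappa_rta = np.einsum('m,ma,m,mb->ab', c, v, tau, v) / (n_k * volume)
```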
It has been shown that, to correctly capture the physics of phonon transport, especially in highly conductive materials, the off diagonal terms of the scattering rates cannot be disregarded.
More generally, the mean free path is calculated inverting the scattering tensor
$$
\lambda_{\mu\alpha} = \sum_{\mu'}(\tilde \Gamma_{\mu\mu' })^{-1}v_{\mu'\alpha}.
$$
$$
\kappa_{\alpha\alpha'} =\frac{1}{N_kV} \sum_{\mu\mu'} c_\mu v_{\mu\alpha}(\tilde \Gamma_{\mu\mu' })^{-1}v_{\mu'\alpha'}.
$$
This inversion operation is computationally expensive; however, when the off-diagonal elements of the scattering rate matrix are much smaller than the diagonal, we can rewrite the mean free path obtained from the BTE as a series:
$$
\lambda_{\mu\alpha} = \sum_{\mu'}\left(\delta_{\mu\mu'} + \frac{1}{\Gamma^0_\mu}\Gamma^{1}_{\mu\mu'}\right)^{-1}\frac{1}{\Gamma^0_{\mu'}}v_{\mu'\alpha} =
$$
$$
=
\sum_{\mu'}
\left[
\sum^{\infty}_{n=0}\left(- \frac{1}{\Gamma^0_\mu}\Gamma^{1}_{\mu\mu'}\right)^n
\right]\frac{1}{\Gamma^0_{\mu'}}v_{\mu'\alpha} ,
$$
where in the last step we used the identity $\sum_{n=0}^{\infty} q^n = (1 - q)^{-1}$, valid when $|q|=|\Gamma^1/\Gamma^0|<1$.
This equation can then be written in an iterative form
$$
\lambda^0_{\mu\alpha} = \frac{1}{\Gamma^0_\mu}v_\mu
\qquad
\lambda^{n+1}_{\mu\alpha} = - \frac{1}{\Gamma^0_\mu}\sum_{\mu'}\Gamma^{1}_{\mu\mu'} \lambda^{n}_{\mu'\alpha}.
$$
Hence, the inversion of the scattering tensor is obtained by a recursive expression. Once the mean free path is calculated, the conductivity is straightforwardly computed.
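The recursion above maps directly onto a short NumPy loop; a minimal sketch (not kALDo's implementation), assuming `gamma0`, `gamma1`, and `v` have already been computed:
```python
import numpy as np

def mean_free_path_iterative(gamma0, gamma1, v, n_iterations=50):
    """Sum the series lambda = sum_n lambda^n with lambda^{n+1} = -(Gamma^1 lambda^n) / Gamma^0.

    gamma0 : (n_modes,) diagonal scattering rates Gamma^0_mu
    gamma1 : (n_modes, n_modes) off-diagonal scattering rates Gamma^1_{mu mu'}
    v      : (n_modes, 3) group velocities
    """
    term = v / gamma0[:, None]        # lambda^0
    lam = term.copy()
    for _ in range(n_iterations):
        term = -(gamma1 @ term) / gamma0[:, None]
        lam += term                   # accumulate the series (converges when |Gamma^1/Gamma^0| < 1)
    return lam
```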
### Quasi-Harmonic Green Kubo
In non-crystalline solids with no long range order, such as glasses, alloys, nano-crystalline, and partially disordered systems, the phonon picture is formally not well-defined. While vibrational modes are still the heat carriers, their mean-free-paths may be so short that the quasi-particle picture of heat carriers breaks down and the BTE is no longer applicable.
In glasses heat transport is dominated by diffusive processes in which delocalized modes with similar frequency transfer energy from one to another.
Whereas this mechanism is intrinsically distinct from the underlying hypothesis of the BTE approach, the two transport pictures have been recently reconciled in a unified theory, in which the thermal conductivity is written as:
$$
\kappa_{\alpha \alpha'}=\frac{1}{V} \sum_{\mu \mu'} c_{\mu \mu'} v_{\mu \mu' \alpha} v_{\mu \mu' \alpha'} \tau_{\mu \mu'}.
$$
This expression is analogous to the RTA one, where modal heat capacity, phonon group velocity and lifetimes are replaced by
the generalized heat capacity,
$$
c_{\mu \mu'}=\frac{\hbar \omega_{\mu} \omega_{\mu'}}{T} \frac{n_{\mu}-n_{\mu'}}{\omega_{\mu}-\omega_{\mu'}},
$$
the generalized velocities,
$$
v_{\mu\mu'\alpha}=\frac{1}{2\sqrt{\omega_\mu\omega_{\mu'}}}
\sum_{ii'\beta'\beta''}(x_{i\alpha}-x_{i'\alpha })D_{i\beta i'\beta'}\eta_{\mu i\beta}\eta_{\mu'i'\beta'},
$$
and the generalized lifetime $\tau_{\mu\mu'}$.
The latter is expressed as a Lorentzian, which weighs diffusive processes between phonons with nearly-resonant frequencies:
$$
\tau_{\mu\mu'} =
\frac{\gamma_{\mu}+\gamma_{\mu'}}{\left(\omega_{\mu}-\omega_{\mu'}\right)^{2}+\left(\gamma_{\mu}+\gamma_{\mu'}\right)^{2}}
$$
where $\gamma_\mu$ is the line width of mode $\mu$ that can be computed using Fermi Golden rule.
These equations have been derived from the Green-Kubo theory of linear response applied to thermal conductivity, by taking a quasi-harmonic approximation of the heat current, from which this approach is named quasi-harmonic Green-Kubo (QHGK).
It has been proven that for crystalline materials QHGK is formally equivalent to the BTE in the relaxation time approximation, and that its classical limit reproduces correctly molecular dynamics simulations for amorphous silicon up to relatively high temperature (600 K).
Finally, we provide a microscopic definition of the mode diffusivity,
$$
D_{\mu} =\frac{1}{N_k V} \sum_{\mu'}v_{\mu\mu'} \tau_{\mu\mu'}v_{\mu\mu'},
$$
which conveniently provides a measure of the temperature-independent contribution of each mode to thermal transport.
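The generalized lifetimes and the mode diffusivity above can be evaluated with simple array broadcasting; a minimal sketch with placeholder arrays (not kALDo's internals):
```python
import numpy as np

n_modes = 500
omega = np.sort(np.random.rand(n_modes)) * 1e13   # angular frequencies (placeholders)
gamma = np.random.rand(n_modes) * 1e10            # line widths gamma_mu (placeholders)
v_gen = np.random.rand(n_modes, n_modes)          # generalized velocities v_{mu mu'}
n_k, volume = 1, 40.0                             # placeholders

# tau_{mu mu'}: Lorentzian weighting of nearly-resonant mode pairs
gamma_sum = gamma[:, None] + gamma[None, :]
delta_omega = omega[:, None] - omega[None, :]
tau_gen = gamma_sum / (delta_omega**2 + gamma_sum**2)

# Mode diffusivity D_mu = 1/(N_k V) sum_mu' v_{mu mu'} tau_{mu mu'} v_{mu mu'}
diffusivity = (v_gen**2 * tau_gen).sum(axis=1) / (n_k * volume)
```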
## Benchmarks applications
The workflow for ALD calculations is illustrated below
<img src="_resources/timeline.png" width="650">
Here, we present two example simulations: one of a periodic structure and one of an amorphous structure.
### *Ab initio* silicon diamond
In this example we calculate the second order IFC using Density Functional Perturbation Theory as implemented in the Quantum-Espresso package. The phonon lifetimes and thermal conductivity calculations are performed using a (19, 19, 19) q-point grid.
The minimal kALDo input file looks like:
```python
# Imports (kALDo and ASE)
from ase.build import bulk
from ase.calculators.espresso import Espresso
from kaldo.forceconstants import ForceConstants
from kaldo.phonons import Phonons
from kaldo.conductivity import Conductivity

# IFCs object creation using ase.build.bulk
fc = ForceConstants(atoms=bulk('Si', 'diamond', a=2.699),
                    supercell=(5, 5, 5))
# input is the ASE input for QE
fc.second.calculate(calculator=Espresso(**input))
fc.third.calculate(calculator=Espresso(**input))

# Phonons object creation
phonons = Phonons(force_constants=fc,
                  kpts=[19, 19, 19],
                  temperature=300)

# Conductivity calculations
cond = Conductivity(phonons=phonons)
print('Thermal conductivity matrix, in (W/m/K):')
print(cond.conductivity(method='inverse').sum(axis=0))
```
We performed the simulation using the local density approximation for the exchange and correlation functional and a Bachelet-Hamann-Schluter norm-conserving pseudopotential. Kohn-Sham orbitals are represented on a plane-wave basis set with a cutoff of 20 Ry and an (8, 8, 8) k-point mesh. The minimized lattice parameter is 5.398 Å. The third-order IFC is calculated using finite difference displacements on (5, 5, 5) replicas of the irreducible fcc unit cell, including up to the 5th nearest neighbor.
We obtained the following thermal properties
<img src="_resources/si-diamond-observables.png" width="650">
The silicon diamond modes analysis is shown above. Quantum (red) and classical (blue) results are compared. a) Normalized density of states, b) Normalized phase-space per mode $g$, c) lifetime per mode $\tau$, d) mean free path $\lambda$, and e) cumulative conductivity $\kappa_{cum}$.
### Amorphous silicon
Here we study a-Si generated by LAMMPS molecular dynamics simulations, quenching a 4096-atom crystalline silicon structure from the melt with the 1989 Tersoff interatomic potential, and analyzed with the QHGK method. The minimal input file looks like the following:
```python
from kaldo.forceconstants import ForceConstants
from kaldo.phonons import Phonons
from kaldo.conductivity import Conductivity

# IFCs object creation
fc = ForceConstants.from_folder(atoms,
                                folder='./input_data')
# Phonons object creation
phon = Phonons(force_constants=fc,
               temperature=300)
# Conductivity calculations
cond = Conductivity(phonons=phon)
print('Thermal conductivity matrix, in (W/m/K):')
print(cond.conductivity(method='qhgk').sum(axis=0))
```
In a similar treatment to the silicon crystal, a full battery of modal analyses can be calculated with both quantum and classical statistics on the amorphous system, returning the phonon DoS as well as the associated lifetimes, generalized diffusivities, normalized phase space, and cumulative conductivity.
<img src="_resources/amorphous.png" width="650">
Classical and quantum properties for the 4096-atom amorphous silicon system are shown above. a) density of states, b) lifetimes, c) diffusivities, and d) cumulative thermal conductivity. In spite of the increased quantum lifetimes, a decrease of 0.17 W/m/K is seen in the quantum conductivity. The difference in conductivity is primarily a result of the overestimation of classical high-frequency heat capacities.
### References
[1]: B. J. Alder, D. M. Gass, and T. E. Wainwright, “Studies in Molecular Dynamics. VIII. The Transport Coefficients for a Hard-Sphere Fluid,” Journal Chemical Physics 53, 3813–3826 (1970).
[2]: A. J. C. Ladd, B. Moran, and W. G. Hoover, “Lattice thermal conductivity: A comparison of molecular dynamics and anharmonic lattice dynamics,” Physical Review B 34, 5058–5064 (1986).
[3]: A. Marcolongo, P. Umari, and S. Baroni, “Microscopic theory and quantum simulation of atomic heat transport,” Nature Physics 12, 80–84 (2015).
[4]: R. Peierls, “Zur kinetischen Theorie der Wärmeleitung in Kristallen,” Annalen der Physik 395, 1055–1101 (1929).
[5]: J. M. Ziman, Electrons and Phonons: The Theory of Transport Phenomena in Solids, International series of monographs on physics (OUP Oxford, 2001).
[6]: A. J. H. McGaughey, A. Jain, and H.-Y. Kim, “Phonon properties and thermal conductivity from first principles, lattice dynamics, and the Boltzmann transport equation,” Journal of Applied Physics 125, 011101–20 (2019).
[7]: M. Omini and A. Sparavigna, “Beyond the isotropic-model approximation in the theory of thermal conductivity,” Physical Review B 53, 9064–9073 (1996).
[8]: A. Ward, D. A. Broido, D. A. Stewart, and G. Deinzer, “Ab initio theory of the lattice thermal conductivity in diamond,” Physical Review B 80, 125203 (2009).
[9]: L. Chaput, A. Togo, I. Tanaka, and G. Hug, “Phonon-phonon interactions in transition metals,” Physical Review B 84, 094302 (2011).
[10]: W. Li, J. Carrete, N. A. Katcho, and N. Mingo, “ShengBTE: A solver of the Boltzmann transport equation for phonons,” Computer Physics Communications 185, 1747–1758 (2014).
[11]: G. Fugallo, M. Lazzeri, L. Paulatto, and F. Mauri, “Ab initio variational approach for evaluating lattice thermal conductivity,” Physical Review B 88, 045430 (2013).
[12]: A. Cepellotti and N. Marzari, “Thermal Transport in Crystals as a Kinetic Theory of Relaxons,” Physical Review X 6, 041013–14 (2016).
[13]: A. Chernatynskiy and S. R. Phillpot, “Phonon Transport Simulator (PhonTS),” Computer Physics Communications 192, 196–204 (2015).
[14]: A. Togo, L. Chaput, and I. Tanaka, “Distributions of phonon lifetimes in brillouin zones,” Physical Review B 91, 094306 (2015).
[15]: J. Carrete, B. Vermeersch, A. Katre, A. van Roekeghem, T. Wang, G. K. H. Madsen, and N. Mingo, “almaBTE : A solver of the space–time dependent Boltzmann transport equation for phonons in structured materials,” Computer Physics Communications 220, 351–362 (2017).
[16]: T. Tadano, Y. Gohda, and S. Tsuneyuki, “Anharmonic force constants extracted from first-principles molecular dynamics: applications to heat transfer simulations,” Journal of Physics: Condensed Matter 26, 225402–13 (2014).
[17]: D. A. Broido, M. Malorny, G. Birner, N. Mingo, and D. A. Stewart, “Intrinsic lattice thermal conductivity of semiconductors from first principles,” Applied Physics Letters 91, 231922 (2007).
[18]: L. Lindsay, A. Katre, A. Cepellotti, and N. Mingo, “Perspective on ab initio phonon thermal transport,” Journal Applied Physics 126, 050902–21 (2019).
[19]: L. Lindsay, D. A. Broido, and T. L. Reinecke, “First-Principles Determination of Ultrahigh Thermal Conductivity of Boron Arsenide: A Competitor for Diamond?” Physical Review Letters 111, 025901–5 (2013).
[20]: G. Fugallo, A. Cepellotti, L. Paulatto, M. Lazzeri, N. Marzari, and F. Mauri, “Thermal Conductivity of Graphene and Graphite: Collective Excitations and Mean Free Paths,” Nano Letters 14, 6109–6114 (2014).
[21]: A. Cepellotti, G. Fugallo, L. Paulatto, M. Lazzeri, F. Mauri, and N. Marzari, “Phonon hydrodynamics in two-dimensional materials,” Nature Communications 6, 6400 (2015).
[22]: A. Jain and A. J. H. Mcgaughey, “Strongly anisotropic in-plane thermal transport in single-layer black phosphorene,” Scientific Reports 5, 8501–5 (2015).
[23]: M. Zeraati, S. M. Vaez Allaei, I. Abdolhosseini Sarsari, M. Pourfath, and D. Donadio, “Highly anisotropic thermal conductivity of arsenene: An ab initio study,” Physical Review B 93, 085424 (2016).
[24]: B. Ouyang, S. Chen, Y. Jing, T. Wei, S. Xiong, and D. Donadio, “Enhanced thermoelectric performance of two dimensional MS2 (M=Mo,W) through phase engineering,” Journal of Materiomics 4, 329–337 (2018).
[25]: S. Chen, A. Sood, E. Pop, K. E. Goodson, and D. Donadio, “Strongly tunable anisotropic thermal transport in MoS2 by strain and lithium intercalation: first-principles calculations,” 2D Materials 6, 025033–10 (2019).
[26]: A. Sood, F. Xiong, S. Chen, R. Cheaito, F. Lian, M. Asheghi, Y. Cui, D. Donadio, K. E. Goodson, and E. Pop, “Quasi-Ballistic Thermal Transport Across MoS2 Thin Films,” Nano Letters 19, 2434–2442 (2019).
[27]: C. Ott, F. Reiter, M. Baumgartner, M. Pielmeier, A. Vogel, P. Walke, S. Burger, M. Ehrenreich, G. Kieslich, D. Daisenberger, J. Armstrong, U. K. Thakur, P. Kumar, S. Chen, D. Donadio, L. S. Walter, R. T. Weitz, K. Shankar, and T. Nilges, “Flexible and Ultrasoft Inorganic 1D Semiconductor and Heterostructure Systems Based on SnIP,” Advanced Functional Materials 271, 1900233 (2019).
[28]: P. B. Allen and J. L. Feldman, “Thermal conductivity of disordered harmonic solids,” Physical Review B 48, 12581–12588 (1993).
[29]: L. Isaeva, G. Barbalinardo, D. Donadio, and S. Baroni, “Modeling heat transport in crystals and glasses from a unified lattice-dynamical approach,”Nature Communications 10, 3853 (2019).
[30]: M. Simoncelli, N. Marzari, and F. Mauri, “Unified theory of thermal transport in crystals and glasses,” Nature Physics 15, 809–813 (2019).
[31]: F. Eriksson, E. Fransson, and P. Erhart, “The Hiphive Package for the Extraction of High-Order Force Constants by Machine Learning,” Advanced Theory and Simulations 2, 1800184–11 (2019).
[32]: S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi, “Phonons and related crystal properties from density-functional perturbation theory,” Rev Mod Phys 73, 515–562 (2001).
[33]: L. Paulatto, F. Mauri, and M. Lazzeri, “Anharmonic properties from a generalized third-order ab initioapproach: Theory and applications to graphite and graphene,” Phys. Rev. B 87, 214303–18 (2013).
[34]: G. P. Srivastava, The Physics of Phonons (Adam Hilger, Bristol, 1990).
[35]: M. S. Green, “Markoff random processes and the statistical mechanics of time-dependent phenomena.” Journal Chemical Physics 20, 1281–1295 (1952).
[36]: M. Green, “Markoff random processes and the statistical mechanics of time-dependent phenomena. ii. irreversible processes in fluids,” Journal Chemical Physics 22, 398–413 (1954).
[37]: R. Kubo, “Statistical-Mechanical Theory of Irreversible Processes. I. General Theory and Simple Applications to Magnetic and Conduction Prob lems,” Journal of the Physical Society of Japan 12, 570–586 (1957).
[38]: R. Kubo, M. Yokota, and S. Nakajima, “Statistical-Mechanical Theory of Irreversible Processes. II. Response to Thermal Disturbance,” Journal of the Physical Society of Japan 12, 1203–1211 (1957).
[39]: Y. He, I. Savić, D. Donadio, and G. Galli, “Lattice thermal conductivity of semiconducting bulk materials: atomistic simulations,” Physical Chemistry Chemical Physics 14, 16209–14 (2012).
[40]: A. H. Larsen, J. J. Mortensen, J. Blomqvist, I. E. Castelli, R. Christensen, M. Dułak, J. Friis, M. N. Groves, B. Hammer, C. Hargus, E. D. Hermes, P. C. Jennings, P. B. Jensen, J. Kermode, J. R. Kitchin, E. L. Kolsbjerg, J. Kubal, K. Kaasbjerg, S. Lysgaard, J. B. Maronsson, T. Maxson, T. Olsen, L. Pastewka, A. Peterson, C. Rostgaard, J. Schiøtz, O. Schütt, M. Strange, K. S. Thygesen, T. Vegge, L. Vilhelmsen, M. Walter, Z. Zeng, and K. W. Jacobsen, “The atomic simulation environment—a python library for working with atoms,” Journal of Physics: Condensed Matter 29, 273002 (2017).
[41]: B. Aradi, B. Hourahine, and T. Frauenheim, “Dftb+, a sparse matrix-based implementation of the dftb method,” J Phys Chem A 111, 5678–5684 (2007).
[42]: D. G A Smith and J. Gray, “opt_einsum - A Python package for optimizing contraction order for einsum-like expressions,” Journal of Open Source Software 3, 753–3 (2018).
[43]: P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. D. Jr, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. O. de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, “Advanced capabilities for materials modelling with quantum espresso,” Journal of Physics: Condensed Matter 29, 465901 (2017).
[44]: G. B. Bachelet, D. R. Hamann, and M. Schluter, “Pseudopotentials That Work - From H to Pu,” Physical Review B 26, 4199–4228 (1982).
[45]: R. Kremer, K. Graf, M. Cardona, G. Devyatykh, A. Gusev, A. Gibin, A. In- yushkin, A. Taldenkov, and H. Pohl, “Thermal conductivity of isotopically enriched Si-28: revisited,” Solid State Communications 131, 499–503 (2004).
[46]: J. Tersoff, “Modeling solid-state chemistry: Interatomic potentials for multicomponent systems,” Physical Review B 39, 5566–5568 (1989).
[47]: A. Krylov, T. L. Windus, T. Barnes, E. Marin-Rimoldi, J. A. Nash, B. Pritchard, D. G. Smith, D. Altarawy, P. Saxe, C. Clementi, T. D. Crawford, R. J. Harrison, S. Jha, V. S. Pande, and T. Head-Gordon, “Perspective: Computational chemistry software and its advancement as illustrated through three grand challenge cases for molecular science,” Journal of Chemical Physics 149, 180901 (2018).
[48]: N. Wilkins-Diehr and T. D. Crawford, “NSF’s Inaugural Software Institutes: The Science Gateways Community Institute and the Molecular Sciences Software Institute,” Computing in Science and Engineering 20 (2018).
[49]: W. Li, N. Mingo, L. Lindsay, D. A. Broido, D. A. Stewart, and N. A. Katcho, “Thermal conductivity of diamond nanowires from first principles,” Physical Review B 85, 195436 (2012)._____no_output_____
| {
"repository": "kcbhamu/kaldo",
"path": "docs/docsource/theory.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": 49,
"size": 33861,
"hexsha": "d03dc47ae3565754701771556d3b5a3740f5b87d",
"max_line_length": 765,
"avg_line_length": 67.3180914513,
"alphanum_fraction": 0.6213047459
} |
# Notebook from dauparas/tensorflow_examples
Path: VAE_cell_cycle.ipynb
<a href="https://colab.research.google.com/github/dauparas/tensorflow_examples/blob/master/VAE_cell_cycle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____https://github.com/PMBio/scLVM/blob/master/tutorials/tcell_demo.ipynb_____no_output_____Variational Autoencoder Model (VAE) with latent subspaces based on:
https://arxiv.org/pdf/1812.06190.pdf_____no_output_____
<code>
#Step 1: import dependencies
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from keras import regularizers
import time
from __future__ import division
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
%matplotlib inline
plt.style.use('dark_background')
import pandas as pdUsing TensorFlow backend.
import os
from matplotlib import cm
import h5py
import scipy as SP
import pylab as PL_____no_output_____data = os.path.join('data_Tcells_normCounts.h5f')
f = h5py.File(data,'r')
Y = f['LogNcountsMmus'][:] # gene expression matrix
tech_noise = f['LogVar_techMmus'][:] # technical noise
genes_het_bool=f['genes_heterogen'][:] # index of heterogeneous genes
geneID = f['gene_names'][:] # gene names
cellcyclegenes_filter = SP.unique(f['cellcyclegenes_filter'][:].ravel() -1) # idx of cell cycle genes from GO
cellcyclegenes_filterCB = f['ccCBall_gene_indices'][:].ravel() -1 # idx of cell cycle genes from cycle base ..._____no_output_____# filter cell cycle genes
idx_cell_cycle = SP.union1d(cellcyclegenes_filter,cellcyclegenes_filterCB)
# determine non-zero counts
idx_nonzero = SP.nonzero((Y.mean(0)**2)>0)[0]
idx_cell_cycle_noise_filtered = SP.intersect1d(idx_cell_cycle,idx_nonzero)
# subset gene expression matrix
Ycc = Y[:,idx_cell_cycle_noise_filtered]_____no_output_____plt = PL.subplot(1,1,1);
PL.imshow(Ycc,cmap=cm.RdBu,vmin=-3,vmax=+3,interpolation='None');
#PL.colorbar();
plt.set_xticks([]);
plt.set_yticks([]);
PL.xlabel('genes');
PL.ylabel('cells');_____no_output_____X = np.delete(Y, idx_cell_cycle_noise_filtered, axis=1)
X = Y #base case
U = Y[:,idx_cell_cycle_noise_filtered]
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
indx_small_mean = np.argwhere(mean < 0.00001)
X = np.delete(X, indx_small_mean, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)_____no_output_____fano = variance/mean
print(fano.shape)(30233,)
indx_small_fano = np.argwhere(fano < 1.0)_____no_output_____X = np.delete(X, indx_small_fano, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
fano = variance/mean_____no_output_____print(fano.shape)(8892,)
#Reconstruction loss
def x_given_z(z, output_size):
with tf.variable_scope('M/x_given_w_z'):
act = tf.nn.leaky_relu
h = z
h = tf.layers.dense(h, 8, act)
h = tf.layers.dense(h, 16, act)
h = tf.layers.dense(h, 32, act)
h = tf.layers.dense(h, 64, act)
h = tf.layers.dense(h, 128, act)
h = tf.layers.dense(h, 256, act)
loc = tf.layers.dense(h, output_size)
#log_variance = tf.layers.dense(x, latent_size)
#scale = tf.nn.softplus(log_variance)
scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)
#KL term for z
def z_given_x(x, latent_size): #+
with tf.variable_scope('M/z_given_x'):
act = tf.nn.leaky_relu
h = x
h = tf.layers.dense(h, 256, act)
h = tf.layers.dense(h, 128, act)
h = tf.layers.dense(h, 64, act)
h = tf.layers.dense(h, 32, act)
h = tf.layers.dense(h, 16, act)
h = tf.layers.dense(h, 8, act)
loc = tf.layers.dense(h,latent_size)
log_variance = tf.layers.dense(h, latent_size)
scale = tf.nn.softplus(log_variance)
# scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)
def z_given(latent_size):
with tf.variable_scope('M/z_given'):
loc = tf.zeros(latent_size)
scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)_____no_output_____#Connect encoder and decoder and define the loss function
tf.reset_default_graph()
x_in = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_in')
x_out = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_out')
z_latent_size = 2
beta = 0.000001
#KL_z
zI = z_given(z_latent_size)
zIx = z_given_x(x_in, z_latent_size)
zIx_sample = zIx.sample()
zIx_mean = zIx.mean()
#kl_z = tf.reduce_mean(zIx.log_prob(zIx_sample)- zI.log_prob(zIx_sample))
kl_z = tf.reduce_mean(tfd.kl_divergence(zIx, zI)) #analytical
#Reconstruction
xIz = x_given_z(zIx_sample, X.shape[1])
rec_out = xIz.mean()
rec_loss = tf.losses.mean_squared_error(x_out, rec_out)
loss = rec_loss + beta*kl_z
optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)_____no_output_____#Helper function
def batch_generator(features, x, u, batch_size):
"""Function to create python generator to shuffle and split features into batches along the first dimension."""
idx = np.arange(features.shape[0])
np.random.shuffle(idx)
for start_idx in range(0, features.shape[0], batch_size):
end_idx = min(start_idx + batch_size, features.shape[0])
part = idx[start_idx:end_idx]
yield features[part,:], x[part,:] , u[part, :]_____no_output_____n_epochs = 5000
batch_size = X.shape[0]
start = time.time()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(n_epochs):
gen = batch_generator(X, X, U, batch_size) #create batch generator
rec_loss_ = 0
kl_z_ = 0
for j in range(np.int(X.shape[0]/batch_size)):
x_in_batch, x_out_batch, u_batch = gen.__next__()
_, rec_loss__, kl_z__= sess.run([optimizer, rec_loss, kl_z], feed_dict={x_in: x_in_batch, x_out: x_out_batch})
rec_loss_ += rec_loss__
kl_z_ += kl_z__
if (i+1)% 50 == 0 or i == 0:
zIx_mean_, rec_out_= sess.run([zIx_mean, rec_out], feed_dict ={x_in:X, x_out:X})
end = time.time()
print('epoch: {0}, rec_loss: {1:.3f}, kl_z: {2:.2f}'.format((i+1), rec_loss_/(1+np.int(X.shape[0]/batch_size)), kl_z_/(1+np.int(X.shape[0]/batch_size))))
start = time.time()epoch: 1, rec_loss: 0.488, kl_z: 5038.03
epoch: 50, rec_loss: 0.256, kl_z: 1169.19
epoch: 100, rec_loss: 0.254, kl_z: 1150.47
epoch: 150, rec_loss: 0.256, kl_z: 1138.06
epoch: 200, rec_loss: 0.253, kl_z: 891.83
epoch: 250, rec_loss: 0.250, kl_z: 493.30
epoch: 300, rec_loss: 0.248, kl_z: 730.26
epoch: 350, rec_loss: 0.243, kl_z: 559.81
epoch: 400, rec_loss: 0.238, kl_z: 564.55
epoch: 450, rec_loss: 0.234, kl_z: 710.93
epoch: 500, rec_loss: 0.228, kl_z: 784.23
epoch: 550, rec_loss: 0.222, kl_z: 896.33
epoch: 600, rec_loss: 0.217, kl_z: 1257.52
epoch: 650, rec_loss: 0.208, kl_z: 1527.80
epoch: 700, rec_loss: 0.195, kl_z: 1799.51
epoch: 750, rec_loss: 0.185, kl_z: 2238.79
epoch: 800, rec_loss: 0.169, kl_z: 2253.14
epoch: 850, rec_loss: 0.163, kl_z: 2524.81
epoch: 900, rec_loss: 0.153, kl_z: 2371.30
epoch: 950, rec_loss: 0.138, kl_z: 2567.25
epoch: 1000, rec_loss: 0.129, kl_z: 2703.91
epoch: 1050, rec_loss: 0.119, kl_z: 2730.01
epoch: 1100, rec_loss: 0.110, kl_z: 2854.48
epoch: 1150, rec_loss: 0.104, kl_z: 2838.82
epoch: 1200, rec_loss: 0.099, kl_z: 2848.78
epoch: 1250, rec_loss: 0.092, kl_z: 2902.38
epoch: 1300, rec_loss: 0.094, kl_z: 2743.52
epoch: 1350, rec_loss: 0.086, kl_z: 2913.92
epoch: 1400, rec_loss: 0.079, kl_z: 2738.15
epoch: 1450, rec_loss: 0.075, kl_z: 2695.68
epoch: 1500, rec_loss: 0.072, kl_z: 2631.96
epoch: 1550, rec_loss: 0.066, kl_z: 2619.38
epoch: 1600, rec_loss: 0.276, kl_z: 5195.00
epoch: 1650, rec_loss: 0.247, kl_z: 2940.30
epoch: 1700, rec_loss: 0.241, kl_z: 1825.31
epoch: 1750, rec_loss: 0.236, kl_z: 1415.02
epoch: 1800, rec_loss: 0.230, kl_z: 1225.71
epoch: 1850, rec_loss: 0.226, kl_z: 1231.15
epoch: 1900, rec_loss: 0.220, kl_z: 1264.15
epoch: 1950, rec_loss: 0.212, kl_z: 1333.83
epoch: 2000, rec_loss: 0.205, kl_z: 1449.89
epoch: 2050, rec_loss: 0.198, kl_z: 1722.17
epoch: 2100, rec_loss: 0.192, kl_z: 1952.83
epoch: 2150, rec_loss: 0.186, kl_z: 2188.21
epoch: 2200, rec_loss: 0.180, kl_z: 2312.91
epoch: 2250, rec_loss: 0.175, kl_z: 2320.98
epoch: 2300, rec_loss: 0.167, kl_z: 2494.97
epoch: 2350, rec_loss: 0.162, kl_z: 2534.96
epoch: 2400, rec_loss: 0.159, kl_z: 2485.46
epoch: 2450, rec_loss: 0.152, kl_z: 2470.98
epoch: 2500, rec_loss: 0.150, kl_z: 2546.58
epoch: 2550, rec_loss: 0.143, kl_z: 2635.48
epoch: 2600, rec_loss: 0.137, kl_z: 2575.49
epoch: 2650, rec_loss: 0.132, kl_z: 2556.10
epoch: 2700, rec_loss: 0.128, kl_z: 2502.45
epoch: 2750, rec_loss: 0.126, kl_z: 2708.18
epoch: 2800, rec_loss: 0.120, kl_z: 2577.95
epoch: 2850, rec_loss: 0.113, kl_z: 2625.05
epoch: 2900, rec_loss: 0.113, kl_z: 2510.52
epoch: 2950, rec_loss: 0.107, kl_z: 2573.50
epoch: 3000, rec_loss: 0.105, kl_z: 2498.15
epoch: 3050, rec_loss: 0.103, kl_z: 2485.70
epoch: 3100, rec_loss: 0.101, kl_z: 2390.64
epoch: 3150, rec_loss: 0.101, kl_z: 2433.73
epoch: 3200, rec_loss: 0.097, kl_z: 2279.95
epoch: 3250, rec_loss: 0.090, kl_z: 2303.65
epoch: 3300, rec_loss: 0.088, kl_z: 2240.98
epoch: 3350, rec_loss: 0.087, kl_z: 2256.19
epoch: 3400, rec_loss: 0.085, kl_z: 2204.10
epoch: 3450, rec_loss: 0.082, kl_z: 2199.10
epoch: 3500, rec_loss: 0.082, kl_z: 2147.36
epoch: 3550, rec_loss: 0.080, kl_z: 2091.81
epoch: 3600, rec_loss: 0.078, kl_z: 2094.17
epoch: 3650, rec_loss: 0.075, kl_z: 2095.87
epoch: 3700, rec_loss: 0.076, kl_z: 2082.63
epoch: 3750, rec_loss: 0.074, kl_z: 2055.43
epoch: 3800, rec_loss: 0.070, kl_z: 2018.21
epoch: 3850, rec_loss: 0.070, kl_z: 2024.04
epoch: 3900, rec_loss: 0.067, kl_z: 1994.23
epoch: 3950, rec_loss: 0.071, kl_z: 1973.09
epoch: 4000, rec_loss: 0.066, kl_z: 1976.65
epoch: 4050, rec_loss: 0.062, kl_z: 1942.60
epoch: 4100, rec_loss: 0.061, kl_z: 1930.08
epoch: 4150, rec_loss: 0.058, kl_z: 1919.03
epoch: 4200, rec_loss: 0.060, kl_z: 1893.53
epoch: 4250, rec_loss: 0.056, kl_z: 1897.14
epoch: 4300, rec_loss: 0.060, kl_z: 1889.32
epoch: 4350, rec_loss: 0.057, kl_z: 1847.00
epoch: 4400, rec_loss: 0.056, kl_z: 1810.34
epoch: 4450, rec_loss: 0.055, kl_z: 1853.19
epoch: 4500, rec_loss: 0.056, kl_z: 1782.88
epoch: 4550, rec_loss: 0.051, kl_z: 1817.94
epoch: 4600, rec_loss: 0.050, kl_z: 1781.25
epoch: 4650, rec_loss: 0.056, kl_z: 1756.28
epoch: 4700, rec_loss: 0.052, kl_z: 1764.26
epoch: 4750, rec_loss: 0.049, kl_z: 1793.30
epoch: 4800, rec_loss: 0.047, kl_z: 1734.55
epoch: 4850, rec_loss: 0.045, kl_z: 1763.27
epoch: 4900, rec_loss: 0.043, kl_z: 1716.00
epoch: 4950, rec_loss: 0.045, kl_z: 1727.20
epoch: 5000, rec_loss: 0.042, kl_z: 1735.31
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42)
svd.fit(U.T)
print(svd.explained_variance_ratio_)
print(svd.explained_variance_ratio_.sum())
print(svd.singular_values_)
U_ = svd.components_
U_ = U_.T[0.55983905 0.04389005]
0.6037290921647739
[482.32958745 71.86901165]
import matplotlib.pyplot as plt_____no_output_____fig, axs = plt.subplots(1, 2, figsize=(14,5))
axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0);
axs[0].set_xlabel('z1')
axs[0].set_ylabel('z2')
fig.suptitle('X1')
plt.show()_____no_output_____fig, axs = plt.subplots(1, 2, figsize=(14,5))
# NOTE: `wIxy_mean_` appears to come from a variant of this model with a second latent
# subspace (w); it is not defined in this notebook, so the left panel is commented out
# to keep the cell runnable.
# axs[0].scatter(wIxy_mean_[:,0],wIxy_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0);
# axs[0].set_xlabel('w1')
# axs[0].set_ylabel('w2')
axs[1].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0);
axs[1].set_xlabel('z1')
axs[1].set_ylabel('z2')
fig.suptitle('X1')
plt.show()_____no_output_____error = np.abs(X-rec_out_)_____no_output_____plt.plot(np.reshape(error, -1), '*', markersize=0.1);_____no_output_____plt.hist(np.reshape(error, -1), bins=50);_____no_output_____
</code>
| {
"repository": "dauparas/tensorflow_examples",
"path": "VAE_cell_cycle.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": 4,
"size": 157095,
"hexsha": "d03ff007d79767fb25651a9fe35e0b24a2c6c1d1",
"max_line_length": 39106,
"avg_line_length": 208.0728476821,
"alphanum_fraction": 0.8768834145
} |
# Notebook from carpenterlab/2021_Haghighi_submitted
Path: 0-preprocess_datasets.ipynb
### Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets:
- **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4
* $\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3
* 20,131 compounds are present in both datasets.
- **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2
* 1916 compounds are present in both datasets.
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8
* 525 alleles are present in both datasets.
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2
* 150 alleles are present in both datasets.
- **LINCS**-Pilot1-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3
  * $N_{p/d}$: 6,984 compound-dose pairs are present in both datasets.
--------------------------------------------
#### Link to the processed profiles:
https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP_____no_output_____
<code>
%matplotlib notebook
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy.spatial
import pandas as pd
import sklearn.decomposition
import matplotlib.pyplot as plt
import seaborn as sns
import os
from cmapPy.pandasGEXpress.parse import parse
from utils.replicateCorrs import replicateCorrs
from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp
from importlib import reload
from utils.normalize_funcs import standardize_per_catX
# sns.set_style("whitegrid")
# np.__version__
pd.__version_______no_output_____
</code>
### Input / output files:_____no_output_____
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas
* Output:
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input:
* Output:
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/
* Output:
### Reformat Cell-Painting Data Sets
- CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/
- LUAD is already processed by Juan; the source of the files is at /storage/luad/profiles_cp,
in case you want to reformat_____no_output_____
<code>
fileName='RepCorrDF'
### dirs on gpu cluster
# rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/'
# procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/'
### dirs on ec2
rawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/'
# procProf_dir='./'
procProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/'
# s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data
# aws s3 sync preprocessed_data s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser
filename='../../results/RepCor/'+fileName+'.xlsx'
_____no_output_____# ls ../../
# https://cellpainting-datasets.s3.us-east-1.amazonaws.com/_____no_output_____
</code>
# CDRP-BBBC047-Bray_____no_output_____### GE - L1000 - CDRP_____no_output_____
<code>
os.listdir(rawProf_dir+'/l1000_CDRP/')_____no_output_____cdrp_dataDir=rawProf_dir+'/l1000_CDRP/'
cpd_info = pd.read_csv(cdrp_dataDir+"/compounds.txt", sep="\t", dtype=str)
cpd_info.columns_____no_output_____from scipy.io import loadmat
x = loadmat(cdrp_dataDir+'cdrp.all.prof.mat')
k1=x['metaWell']['pert_id'][0][0]
k2=x['metaGen']['AFFX_PROBE_ID'][0][0]
k3=x['metaWell']['pert_dose'][0][0]
k4=x['metaWell']['det_plate'][0][0]
# pert_dose
# x['metaWell']['pert_id'][0][0][0][0][0]
pertID = []
probID=[]
for r in range(len(k1)):
v = k1[r][0][0]
pertID.append(v)
# probID.append(k2[r][0][0])
for r in range(len(k2)):
probID.append(k2[r][0][0])
pert_dose=[]
det_plate=[]
for r in range(len(k3)):
pert_dose.append(k3[r][0])
det_plate.append(k4[r][0][0])
dataArray=x['pclfc'];
cdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID)
cdrp_l1k_rep['pert_id']=pertID
cdrp_l1k_rep['pert_dose']=pert_dose
cdrp_l1k_rep['det_plate']=det_plate
cdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13]
cdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID'])
l1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains("_at")]
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
# cdrp_l1k_df.head()
print(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape)
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO')
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz');
# cdrp_l1k_rep2.head()(32324, 4) (68120, 981) (68120, 986)
# cpd_info_____no_output_____
</code>
### CP - CDRP_____no_output_____
<code>
profileType=['_augmented','_normalized']
bioactiveFlag="";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType[1:2]:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
# sgfsgf
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)_____no_output_____dataFolderName='CDRP-BBBC047-Bray'
cp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
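# NOTE (assumption): find_correlation is not imported in this notebook; it is presumed to be a
# project helper that returns the list of features to drop so that no remaining pair of
# features has pairwise correlation above `threshold`.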
features_to_remove =find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False)
repLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove)
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz')/home/ubuntu/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py:3167: RuntimeWarning: compression has no effect when passing file-like object as input.
formatter.save()
# features_to_remove
# features_to_remove
# features_to_remove_____no_output_____repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0']_____no_output_____# repLevelCDRP2.shape
# cp_scaled.columns[cp_scaled.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()_____no_output_____
</code>
# CDRP-bio-BBBC036-Bray_____no_output_____### GE - L1000 - CDRPBIO_____no_output_____
<code>
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')_____no_output_____# plates_____no_output_____cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2["pert_sample_dose"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())]
_____no_output_____cdrp_l1k_rep.det_plate_____no_output_____
</code>
### CP - CDRPBIO_____no_output_____
<code>
profileType=['_augmented','_normalized','_normalized_variable_selected']
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)_____no_output_____
</code>
# LUAD-BBBC041-Caicedo_____no_output_____### GE - L1000 - LUAD_____no_output_____
<code>
os.listdir(rawProf_dir+'/l1000_LUAD/input/')_____no_output_____os.listdir(rawProf_dir+'/l1000_LUAD/output/')_____no_output_____luad_dataDir=rawProf_dir+'/l1000_LUAD/'
luad_info1 = pd.read_csv(luad_dataDir+"/input/TA.OE014_A549_96H.map", sep="\t", dtype=str)
luad_info2 = pd.read_csv(luad_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
luad_info=pd.concat([luad_info1, luad_info2], ignore_index=True)
luad_info.head()_____no_output_____luad_l1k_df = parse(luad_dataDir+"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx").data_df.T.reset_index()
luad_l1k_df=luad_l1k_df.rename(columns={"cid":"id"})
# cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0]
# cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15]
luad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id'])
luad_l1k_df2=luad_l1k_df2.rename(columns={"x_mutation_status":"allele"})
l1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains("_at")]
luad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO')
print(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape)
saveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz')(5945, 54) (4232, 979) (4232, 1032)
luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist());
x_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1)
# x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1)
# saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad')here3
</code>
### CP - LUAD_____no_output_____
<code>
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/')
for pt in profileType[1:2]:
repLevelLuad0=[]
for p in plates:
repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv'))
repLevelLuad = pd.concat(repLevelLuad0)
metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv')
metaLuad1=metaLuad1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower()
# metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv')
# Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0']
repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well'])
repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO')
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape) _____no_output_____pt=['_normalized']
# Read save data
repLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
# repLevelTA.head()
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05]
print(cols2remove0)
repLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1);
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLuad2 = repLevelLuad2.interpolate()
repLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist());
df1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True)
x_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1)
saveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad')_____no_output_____
</code>
# TA-ORF-BBBC037-Rohban_____no_output_____### GE - L1000 _____no_output_____
<code>
taorf_datadir=rawProf_dir+'/l1000_TA_ORF/'
gene_info = pd.read_csv(taorf_datadir+"TA.OE005_U2OS_72H.map.txt", sep="\t", dtype=str)
# gene_info.columns
# TA.OE005_U2OS_72H_INF_n729x22268.gctx
# TA.OE005_U2OS_72H_QNORM_n729x978.gctx
# TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx
# TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx
taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx")
# taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_QNORM_n729x978.gctx")
taorf_l1k_df0=taorf_l1k0.data_df
taorf_l1k_df=taorf_l1k_df0.T.reset_index()
l1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains("_at")]
taorf_l1k_df=taorf_l1k_df.rename(columns={"cid":"id"})
taorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id'])
# print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape)
taorf_l1k_df2.head()
# x_genesymbol_mutation
taorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO')
# compression_opts = dict(method='zip',archive_name='out.csv')
# taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts)
saveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz')
print(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape)
# gene_info.head()/home/ubuntu/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py:3167: RuntimeWarning: compression has no effect when passing file-like object as input.
formatter.save()
taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe()_____no_output_____taorf_l1k_df2.groupby(['pert_id']).size().describe()_____no_output_____
</code>
#### Check Replicate Correlation_____no_output_____
<code>
# df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000']
df1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist());
df1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO']
x=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1)here3
</code>
### CP - TAORF_____no_output_____
<code>
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/')
for pt in profileType[0:1]:
repLevelTA0=[]
for p in plates:
repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv'))
repLevelTA = pd.concat(repLevelTA0)
metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv')
metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv')
# metaTA2=metaTA2.rename(columns={"Metadata_broad_sample":"Metadata_broad_sample_2",'Metadata_Treatment':'Gene Allele Name'})
metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample'])
# metaTA2=metaTA2.rename(columns={"Metadata_Treatment":"Metadata_pert_name"})
# repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name'])
repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample'])
# repLevelTA2=repLevelTA2.rename(columns={"Gene Allele Name":"Allele"})
repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape)
(323, 4) (1920, 1801) (1920, 1804)
# repLevelTA.head()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05]
print(cols2remove0)
repLevelTA2=repLevelTA2.drop(cols2remove0, axis=1);
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
repLevelTA2 = repLevelTA2.interpolate()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist());
df1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True)
x_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf')[]
# plates_____no_output_____
</code>
# LINCS-Pilot1_____no_output_____### GE - L1000 - LINCS_____no_output_____
<code>
os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/')_____no_output_____os.listdir(rawProf_dir+'/l1000_LINCS/metadata/')_____no_output_____data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'],
['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']]_____no_output_____lincs_dataDir=rawProf_dir+'/l1000_LINCS/'
lincs_pert_info = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
lincs_meta_level3 = pd.read_csv(lincs_dataDir+"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt", sep="\t", dtype=str)
# lincs_info1 = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
print(lincs_meta_level3.shape)
lincs_meta_level3.head()
# lincs_info2 = pd.read_csv(lincs_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
# lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True)
# lincs_info.head()(27837, 45)
# lincs_meta_level3.groupby('distil_id').size()
lincs_meta_level3['distil_id'].unique().shape_____no_output_____# lincs_meta_level3.columns.tolist()
# lincs_meta_level3.pert_id_____no_output_____ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting[0m[01;34mCellPainting[0m/ [01;34mL1000[0m/
# procProf_dir+'preprocessed_data/LINCS-Pilot1/'
procProf_dir_____no_output_____for el in data_meta_match_ls:
lincs_l1k_df=parse(lincs_dataDir+"/2016_04_01_a549_48hr_batch1_L1000/"+el[1]).data_df.T.reset_index()
lincs_meta0 = pd.read_csv(lincs_dataDir+"/metadata/"+el[2], sep="\t", dtype=str)
lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id'])
lincs_meta=lincs_meta.rename(columns={"distil_id":"cid"})
lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid'])
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str)
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO')
# lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz')_____no_output_____# lincs_l1k_df2_____no_output_____lincs_l1k_rep['pert_id_dose'].unique()_____no_output_____lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz')
# l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
# x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
# # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
# # lincs_l1k_rep.head()_____no_output_____lincs_l1k_rep.pert_id.unique().shape_____no_output_____lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')]_____no_output_____lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']]_____no_output_____lincs_l1k_rep['nearest_dose'].unique()_____no_output_____# lincs_l1k_rep.rna_plate.unique()_____no_output_____lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)_____no_output_____lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')/home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/utils/replicateCorrs.py:44: RuntimeWarning: Mean of empty slice
repCorrDf.loc[u,'RepCor']=np.nanmean(repCorr)
/home/ubuntu/anaconda3/lib/python3.6/site-packages/numpy/lib/nanfunctions.py:1113: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis, out=out, keepdims=keepdims)
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')_____no_output_____saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')_____no_output_____
</code>
raw data
_____no_output_____
<code>
# set(repLevelLuad2)-set(Y1.columns)_____no_output_____# Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head()_____no_output_____# repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head()_____no_output_____
</code>
#### Check Replicate Correlation_____no_output_____### CP - LINCS_____no_output_____
<code>
# Ran the following on:
# https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb
# Metadata
def recode_dose(x, doses, return_level=False):
closest_index = np.argmin([np.abs(dose - x) for dose in doses])
if np.isnan(x):
return 0
if return_level:
return closest_index + 1
else:
return doses[closest_index]
primary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20]
metadata=pd.read_csv("/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv")
metadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2)
metadata=metadata.rename(columns={"Assay_Plate_Barcode": "Metadata_Plate",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'})
lincs_submod_root_dir="/home/ubuntu/datasetsbucket/lincs-cell-painting/"
profileType=['_augmented','_normalized','_normalized_dmso',\
'_normalized_feature_select','_normalized_feature_select_dmso']
# profileType=['_normalized']
# plates=metadata.Assay_Plate_Barcode.unique().tolist()
plates=metadata.Metadata_Plate.unique().tolist()
for pt in profileType[4:5]:
repLevelLINCS0=[]
for p in plates:
profile_add=lincs_submod_root_dir+"/profiles/2016_04_01_a549_48hr_batch1/"+p+"/"+p+pt+".csv.gz"
if os.path.exists(profile_add):
repLevelLINCS0.append(pd.read_csv(profile_add))
repLevelLINCS = pd.concat(repLevelLINCS0)
meta_lincs1=metadata.rename(columns={"broad_sample": "Metadata_broad_sample"})
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample","Metadata_Well","Metadata_Plate",'Metadata_mmoles_per_liter'])
repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply(
lambda x: recode_dose(x, primary_dose_mapping, return_level=False))))
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
# repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO')
# saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape)(53760, 17) (52223, 1243) (52223, 1257)
# (8120, 15) (52223, 1810) (688699, 1825)
# repLevelLINCS_____no_output_____# pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample"]).shape
repLevelLINCS.shape,meta_lincs1.shape_____no_output_____(8120, 15) (52223, 1238) (52223, 1253)_____no_output_____csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz')
csv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')_____no_output_____csv_l1k_lincs.head()_____no_output_____csv_l1k_lincs.pert_id_dose.unique()_____no_output_____csv_pddf.Metadata_pert_id_dose.unique()_____no_output_____
</code>
#### Read saved data_____no_output_____
<code>
repLevelLINCS2.groupby(['Metadata_pert_id']).size()_____no_output_____repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe()_____no_output_____repLevelLINCS2.Metadata_Plate.unique().shape_____no_output_____repLevelLINCS2['Metadata_pert_id_dose'].unique().shape_____no_output_____# csv_pddf['Metadata_mmoles_per_liter'].round(0).unique()
# np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique())_____no_output_____csv_pddf.groupby(['Metadata_dose_recode']).size()#.median()_____no_output_____# repLevelLincs2=csv_pddf.copy()
import gc
cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
print(cols2remove0)
repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
print('here0')
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
del repLevelLincs2
gc.collect()
print('here0')
cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate()
print('here1')
repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
print('here1')
# df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist()
highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')['Cells_RadialDistribution_FracAtD_DNA_1of4', 'Cells_RadialDistribution_FracAtD_DNA_2of4', 'Cells_RadialDistribution_FracAtD_DNA_3of4', 'Cells_RadialDistribution_FracAtD_DNA_4of4', 'Cells_RadialDistribution_MeanFrac_DNA_1of4', 'Cells_RadialDistribution_MeanFrac_DNA_2of4', 'Cells_RadialDistribution_MeanFrac_DNA_3of4', 'Cells_RadialDistribution_MeanFrac_DNA_4of4', 'Cells_RadialDistribution_RadialCV_DNA_1of4', 'Cells_RadialDistribution_RadialCV_DNA_2of4', 'Cells_RadialDistribution_RadialCV_DNA_3of4', 'Cells_RadialDistribution_RadialCV_DNA_4of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_1of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_2of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_3of4', 'Cytoplasm_RadialDistribution_FracAtD_DNA_4of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_1of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_2of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_3of4', 'Cytoplasm_RadialDistribution_MeanFrac_DNA_4of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_1of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_2of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_3of4', 'Cytoplasm_RadialDistribution_RadialCV_DNA_4of4', 'Nuclei_RadialDistribution_FracAtD_DNA_1of4', 'Nuclei_RadialDistribution_FracAtD_DNA_2of4', 'Nuclei_RadialDistribution_FracAtD_DNA_3of4', 'Nuclei_RadialDistribution_FracAtD_DNA_4of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_1of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_2of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_3of4', 'Nuclei_RadialDistribution_MeanFrac_DNA_4of4', 'Nuclei_RadialDistribution_RadialCV_DNA_1of4', 'Nuclei_RadialDistribution_RadialCV_DNA_2of4', 'Nuclei_RadialDistribution_RadialCV_DNA_3of4', 'Nuclei_RadialDistribution_RadialCV_DNA_4of4']
here0
here0
here1
here1
here2
repSizeDF_____no_output_____# repLevelLincs2=csv_pddf.copy()
# cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
# print(cols2remove0)
# repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
# # cp_features=list(set(cp_features)-set(cols2remove0))
# # repLevelTA2=repLevelTA2.replace('nan', np.nan)
# repLevelLincs3 = repLevelLincs3.interpolate()
# repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
# cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist()
# highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')here2
# x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
# highRepComp[-1]
_____no_output_____saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs')_____no_output_____# repLevelLincs3.Metadata_Plate
repLevelLincs3.head()_____no_output_____# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595")][['Metadata_Plate','Metadata_Well']].drop_duplicates()_____no_output_____# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595") &
# (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']=="B12")][csv_pddf.columns[1820:]].drop_duplicates()_____no_output_____# def standardize_per_catX(df,column_name):
column_name='Metadata_Plate'
repLevelLincs_scaled_perPlate=repLevelLincs3.copy()
repLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values_____no_output_____# def standardize_per_catX(df,column_name):
# # column_name='Metadata_Plate'
# cp_features=df.columns[df.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# df_scaled_perPlate=df.copy()
# df_scaled_perPlate[cp_features.tolist()]=\
# df[cp_features.tolist()+[column_name]].groupby(column_name)\
# .transform(lambda x: (x - x.mean()) / x.std()).values
# return df_scaled_perPlate_____no_output_____df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))]
x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)_____no_output_____
</code>
| {
"repository": "carpenterlab/2021_Haghighi_submitted",
"path": "0-preprocess_datasets.ipynb",
"matched_keywords": [
"gene expression"
],
"stars": 6,
"size": 983203,
"hexsha": "d04075594dd3c3d31c326b1add11c238c50f42bf",
"max_line_length": 66351,
"avg_line_length": 93.3361496108,
"alphanum_fraction": 0.754751562
} |
# Notebook from EdTonatto/UFFS-2020.2-Inteligencia_Artificial
Path: T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb
# Simple Neural Network
### Implementing a Simple ANN (RNA)
The diagram below shows a simple network. The linear combination of the weights, inputs and bias forms the input h, which is then passed through the activation function f(h), producing the perceptron's final output, labeled y.
<img src='RNA-simples.png' /><br>
<p style="text-align:center"> <i> Diagram of a simple neural network</i> </p>
Circles are units, boxes are operations. What makes neural networks possible is that the activation function f(h) can be any function, not only the step function.
<p> For example, if f(h)=h, the output will be the same as the input. The output of the network is then </p>
<p style="text-align:center"> $$h = \sum_{i=1}^{n} w_i x_i + b$$ </p>
<p> This equation should look familiar to you, since it is the same as in the linear regression model!
Other common activation functions are the logistic function (also called the sigmoid), tanh, and the softmax function. We will work mainly with the sigmoid function for the rest of this lesson:</p>
$$f(h) = \mathrm{sigmoid}(h) = \frac{1}{1+e^{-h}}$$
_____no_output_____## Let's implement an ANN with just one neuron!
#### Importing the library_____no_output_____
<code>
import numpy as np_____no_output_____
</code>
#### Sigmoid calculation function_____no_output_____
<code>
def sigmoid(x):
return 1/(1+np.exp(-x))
_____no_output_____
</code>
#### Input values vector_____no_output_____
<code>
x = np.array([1.66, -0.22])
b = 0.1
_____no_output_____
</code>
#### Synaptic connection weights_____no_output_____
<code>
w = np.array([0.5, -0.3])_____no_output_____
</code>
#### Compute the linear combination of inputs and synaptic weights_____no_output_____
<code>
h = np.dot(x, w) + b_____no_output_____
</code>
#### Applying the neuron's activation function_____no_output_____
<code>
y = sigmoid(h)
print('A Saida da rede eh: ', y)A Saida da rede eh: 0.7302714044131816
</code>
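As mentioned above, tanh and softmax are other common activation functions. A minimal NumPy sketch of both, added here for illustration only (it is not part of the original exercise; the max-subtraction in softmax is just the usual numerical-stability trick):
<code>
def tanh(x):
    # Hyperbolic tangent activation: squashes values into (-1, 1)
    return np.tanh(x)

def softmax(z):
    # Softmax over a vector of scores; subtracting the max avoids overflow
    e = np.exp(z - np.max(z))
    return e / e.sum()

print('tanh(h) =', tanh(h))
print('softmax of [h, 0.5] =', softmax(np.array([h, 0.5])))
</code>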
| {
"repository": "EdTonatto/UFFS-2020.2-Inteligencia_Artificial",
"path": "T-RNA/Tarefa-1/Solucao-Tarefa1-RNA-simples.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 4924,
"hexsha": "d0432af59a5b44827061fda1382e7ae7564246e4",
"max_line_length": 225,
"avg_line_length": 24.8686868687,
"alphanum_fraction": 0.4597887896
} |
# Notebook from scw-ss/-2018-06-27-cfmehu-python-ecology-lesson
Path: _episodes_pynb/04-merging-data_clean.ipynb
# Combining DataFrames with pandas
In many "real world" situations, the data that we want to use come in multiple
files. We often need to combine these files into a single DataFrame to analyze
the data. The pandas package provides [various methods for combining
DataFrames](http://pandas.pydata.org/pandas-docs/stable/merging.html) including
`merge` and `concat`.
To work through the examples below, we first need to load the species and
surveys files into pandas DataFrames. In iPython:
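One possible way to do this (a sketch: it assumes the lesson's `surveys.csv` and `species.csv` files live in a `data/` folder next to the notebook):
<code>
import pandas as pd

surveys_df = pd.read_csv("data/surveys.csv",
                         keep_default_na=False, na_values=[""])
species_df = pd.read_csv("data/species.csv",
                         keep_default_na=False, na_values=[""])
</code>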
_____no_output_____Take note that the `read_csv` method we used can take some additional options which
we didn't use previously. Many functions in python have a set of options that
can be set by the user if needed. In this case, we have told Pandas to assign
empty values in our CSV to NaN `keep_default_na=False, na_values=[""]`.
[More about all of the read_csv options here.](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html)
# Concatenating DataFrames
We can use the `concat` function in Pandas to append either columns or rows from
one DataFrame to another. Let's grab two subsets of our data to see how this
works._____no_output_____
<code>
# read in first 10 lines of surveys table
# grab the last 10 rows
# reset the index values to the second dataframe appends properly
# drop=True option avoids adding new index column with old index values_____no_output_____
</code>
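A sketch of one way to build those two subsets (assumes `surveys_df` loaded as above):
<code>
# read in first 10 lines of surveys table
survey_sub = surveys_df.head(10)
# grab the last 10 rows
survey_sub_last10 = surveys_df.tail(10)
# reset the index values so the second dataframe appends properly;
# drop=True avoids adding a new index column with the old index values
survey_sub_last10 = survey_sub_last10.reset_index(drop=True)
</code>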
When we concatenate DataFrames, we need to specify the axis. `axis=0` tells
Pandas to stack the second DataFrame under the first one. It will automatically
detect whether the column names are the same and will stack accordingly.
`axis=1` will stack the columns in the second DataFrame to the RIGHT of the
first DataFrame. To stack the data vertically, we need to make sure we have the
same columns and associated column format in both datasets. When we stack
horizontally, we want to make sure what we are doing makes sense (i.e. the data are
related in some way)._____no_output_____
<code>
# stack the DataFrames on top of each other
_____no_output_____# place the DataFrames side by side
_____no_output_____
</code>
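For example (a sketch using the subsets defined above):
<code>
# stack the DataFrames on top of each other
vertical_stack = pd.concat([survey_sub, survey_sub_last10], axis=0)
# place the DataFrames side by side
horizontal_stack = pd.concat([survey_sub, survey_sub_last10], axis=1)
</code>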
### Row Index Values and Concat
Have a look at the `vertical_stack` dataframe. Notice anything unusual?
The row indexes for the two data frames `survey_sub` and `survey_sub_last10`
have been repeated. We can reindex the new dataframe using the `reset_index()` method.
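For example (a sketch):
<code>
vertical_stack = vertical_stack.reset_index(drop=True)
</code>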
## Writing Out Data to CSV
We can use the `to_csv` command to export a DataFrame in CSV format. Note that the code
below will by default save the data into the current working directory. We can
save it to a different folder by adding the foldername and a slash to the file
`vertical_stack.to_csv('foldername/out.csv')`. We use `index=False` so that
pandas doesn't include the index number for each line.
_____no_output_____
<code>
# Write DataFrame to CSV
_____no_output_____
</code>
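One possible solution (a sketch; the output path is just an example):
<code>
# Write DataFrame to CSV
vertical_stack.to_csv('data/out.csv', index=False)
</code>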
Check out your working directory to make sure the CSV wrote out properly, and
that you can open it! If you want, try to bring it back into python to make sure
it imports properly._____no_output_____
<code>
# for kicks read our output back into python and make sure all looks good
_____no_output_____
</code>
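A sketch of reading it back in (same example path as above):
<code>
new_output = pd.read_csv('data/out.csv', keep_default_na=False, na_values=[""])
new_output.head()
</code>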
> ## Challenge - Combine Data
>
> In the data folder, there are two survey data files: `survey2001.csv` and
> `survey2002.csv`. Read the data into python and combine the files to make one
> new data frame. Create a plot of average plot weight by year grouped by sex.
> Export your results as a CSV and make sure it reads back into python properly._____no_output_____# Joining DataFrames
When we concatenated our DataFrames we simply added them to each other -
stacking them either vertically or side by side. Another way to combine
DataFrames is to use columns in each dataset that contain common values (a
common unique id). Combining DataFrames using a common field is called
"joining". The columns containing the common values are called "join key(s)".
Joining DataFrames in this way is often useful when one DataFrame is a "lookup
table" containing additional data that we want to include in the other.
NOTE: This process of joining tables is similar to what we do with tables in an
SQL database.
For example, the `species.csv` file that we've been working with is a lookup
table. This table contains the genus, species and taxa code for 55 species. The
species code is unique for each line. These species are identified in our survey
data as well using the unique species code. Rather than adding 3 more columns
for the genus, species and taxa to each of the 35,549 lines of the survey data table, we
can maintain the shorter table with the species information. When we want to
access that information, we can create a query that joins the additional columns
of information to the Survey data.
Storing data in this way has many benefits including:
1. It ensures consistency in the spelling of species attributes (genus, species
and taxa) given each species is only entered once. Imagine the possibilities
for spelling errors when entering the genus and species thousands of times!
2. It also makes it easy for us to make changes to the species information once
without having to find each instance of it in the larger survey data.
3. It optimizes the size of our data.
## Joining Two DataFrames
To better understand joins, let's grab the first 10 lines of our data as a
subset to work with. We'll use the `.head` method to do this. We'll also read
in a subset of the species table.
_____no_output_____
<code>
# read in first 10 lines of surveys table
# import a small subset of the species data designed for this part of the lesson.
# It is stored in the data folder.
_____no_output_____
</code>
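A sketch of one way to do this (the species-subset filename is an assumption; use whatever subset file your copy of the lesson data provides):
<code>
# read in first 10 lines of surveys table
survey_sub = surveys_df.head(10)
# import a small subset of the species data designed for this part of the lesson
species_sub = pd.read_csv('data/speciesSubset.csv',
                          keep_default_na=False, na_values=[""])
</code>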
In this example, `species_sub` is the lookup table containing genus, species, and
taxa names that we want to join with the data in `survey_sub` to produce a new
DataFrame that contains all of the columns from both `species_df` *and*
`survey_df`.
## Identifying join keys
To identify appropriate join keys we first need to know which field(s) are
shared between the files (DataFrames). We might inspect both DataFrames to
identify these columns. If we are lucky, both DataFrames will have columns with
the same name that also contain the same data. If we are less lucky, we need to
identify a (differently-named) column in each DataFrame that contains the same
information._____no_output_____In our example, the join key is the column containing the two-letter species
identifier, which is called `species_id`.
Now that we know the fields with the common species ID attributes in each
DataFrame, we are almost ready to join our data. However, since there are
[different types of joins](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/), we
also need to decide which type of join makes sense for our analysis.
## Inner joins
The most common type of join is called an _inner join_. An inner join combines
two DataFrames based on a join key and returns a new DataFrame that contains
**only** those rows that have matching values in *both* of the original
DataFrames.
Inner joins yield a DataFrame that contains only rows where the value being
joined exists in BOTH tables. An example of an inner join, adapted from [this
page](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/) is below:

The pandas function for performing joins is called `merge` and an Inner join is
the default option: _____no_output_____The result of an inner join of `survey_sub` and `species_sub` is a new DataFrame
that contains the combined set of columns from `survey_sub` and `species_sub`. It
*only* contains rows that have two-letter species codes that are the same in
both the `survey_sub` and `species_sub` DataFrames. In other words, if a row in
`survey_sub` has a value of `species_id` that does *not* appear in the `species_id`
column of `species_sub`, it will not be included in the DataFrame returned by an
inner join. Similarly, if a row in `species_sub` has a value of `species_id`
that does *not* appear in the `species_id` column of `survey_sub`, that row will not
be included in the DataFrame returned by an inner join.
The two DataFrames that we want to join are passed to the `merge` function using
the `left` and `right` arguments. The `left_on='species_id'` argument tells `merge`
to use the `species_id` column as the join key from `survey_sub` (the `left`
DataFrame). Similarly , the `right_on='species_id'` argument tells `merge` to
use the `species_id` column as the join key from `species_sub` (the `right`
DataFrame). For inner joins, the order of the `left` and `right` arguments does
not matter.
The result `merged_inner` DataFrame contains all of the columns from `survey_sub`
(record id, month, day, etc.) as well as all the columns from `species_sub`
(species_id, genus, species, and taxa).
Notice that `merged_inner` has fewer rows than `survey_sub`. This is an
indication that there were rows in `survey_sub` with value(s) for `species_id` that
do not exist as value(s) for `species_id` in `species_sub`.
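For reference, the inner join discussed above can be written like this (a sketch using the subsets defined earlier):
<code>
merged_inner = pd.merge(left=survey_sub, right=species_sub,
                        left_on='species_id', right_on='species_id')
# What's the size of the output data?
merged_inner.shape
</code>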
## Left joins
What if we want to add information from `species_sub` to `survey_sub` without
losing any of the information from `survey_sub`? In this case, we use a different
type of join called a "left outer join", or a "left join".
Like an inner join, a left join uses join keys to combine two DataFrames. Unlike
an inner join, a left join will return *all* of the rows from the `left`
DataFrame, even those rows whose join key(s) do not have values in the `right`
DataFrame. Rows in the `left` DataFrame that are missing values for the join
key(s) in the `right` DataFrame will simply have null (i.e., NaN or None) values
for those columns in the resulting joined DataFrame.
Note: a left join will still discard rows from the `right` DataFrame that do not
have values for the join key(s) in the `left` DataFrame.

A left join is performed in pandas by calling the same `merge` function used for
inner join, but using the `how='left'` argument:_____no_output_____The result DataFrame from a left join (`merged_left`) looks very much like the
result DataFrame from an inner join (`merged_inner`) in terms of the columns it
contains. However, unlike `merged_inner`, `merged_left` contains the **same
number of rows** as the original `survey_sub` DataFrame. When we inspect
`merged_left`, we find there are rows where the information that should have
come from `species_sub` (i.e., `species_id`, `genus`, and `taxa`) is
missing (they contain NaN values):_____no_output_____These rows are the ones where the value of `species_id` from `survey_sub` (in this
case, `PF`) does not occur in `species_sub`.
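The left join described above could be performed like this (a sketch):
<code>
merged_left = pd.merge(left=survey_sub, right=species_sub,
                       how='left', left_on='species_id', right_on='species_id')
# rows where the species information is missing
merged_left[pd.isnull(merged_left.genus)]
</code>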
## Other join types
The pandas `merge` function supports two other join types:
* Right (outer) join: Invoked by passing `how='right'` as an argument. Similar
to a left join, except *all* rows from the `right` DataFrame are kept, while
rows from the `left` DataFrame without matching join key(s) values are
discarded.
* Full (outer) join: Invoked by passing `how='outer'` as an argument. This join
type returns all rows from both DataFrames, matched where possible; i.e.,
the result DataFrame will contain `NaN` where data is missing in one of the dataframes. This join type is
very rarely used.
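For completeness, a sketch of these two variants:
<code>
merged_right = pd.merge(left=survey_sub, right=species_sub,
                        how='right', on='species_id')
merged_outer = pd.merge(left=survey_sub, right=species_sub,
                        how='outer', on='species_id')
</code>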
# Final Challenges
> ## Challenge - Distributions
> Create a new DataFrame by joining the contents of the `surveys.csv` and
> `species.csv` tables. Then calculate and plot the distribution of:
>
> 1. taxa by plot
> 2. taxa by sex by plot
> ## Challenge - Diversity Index
>
> 1. In the data folder, there is a plot `CSV` that contains information about the
> type associated with each plot. Use that data to summarize the number of
> plots by plot type.
> 2. Calculate a diversity index of your choice for control vs rodent exclosure
> plots. The index should consider both species abundance and number of
> species. You might choose to use the simple [biodiversity index described
> here](http://www.amnh.org/explore/curriculum-collections/biodiversity-counts/plant-ecology/how-to-calculate-a-biodiversity-index)
> which calculates diversity as:
>
> the number of species in the plot / the total number of individuals in the plot = Biodiversity index._____no_output_____
| {
"repository": "scw-ss/-2018-06-27-cfmehu-python-ecology-lesson",
"path": "_episodes_pynb/04-merging-data_clean.ipynb",
"matched_keywords": [
"ecology"
],
"stars": null,
"size": 17271,
"hexsha": "d0479022860db927a6e1ab400befe82eacf791bc",
"max_line_length": 142,
"avg_line_length": 37.7921225383,
"alphanum_fraction": 0.6336633663
} |
# Notebook from mukamel-lab/ALLCools
Path: docs/allcools/cell_level/step_by_step/100kb/04a-PreclusteringAndClusterEnrichedFeatures-mCH.ipynb
# Preclustering and Cluster Enriched Features
## Purpose
The purpose of this step is to perform a simple pre-clustering using the highly variable features to get a pre-cluster labeling. We then select the top cluster-enriched features (CEFs) for further analysis.
## Input
- HVF adata file.
## Output
- HVF adata file with pre-clusters and CEF annotated._____no_output_____## Import_____no_output_____
<code>
import seaborn as sns
import anndata
import scanpy as sc
from ALLCools.clustering import cluster_enriched_features, significant_pc_test, log_scale_____no_output_____sns.set_context(context='notebook', font_scale=1.3)_____no_output_____
</code>
## Parameters_____no_output_____
<code>
adata_path = 'mCH.HVF.h5ad'
# Cluster Enriched Features analysis
top_n=200
alpha=0.05
stat_plot=True
# you may provide a pre calculated cluster version.
# If None, will perform basic clustering using parameters below.
cluster_col = None
# These parameters only used when cluster_col is None
k=25
resolution=1
cluster_plot=True_____no_output_____
</code>
## Load Data_____no_output_____
<code>
adata = anndata.read_h5ad(adata_path)_____no_output_____
</code>
## Pre-Clustering
If a cluster label is not provided, basic clustering will be performed here_____no_output_____
<code>
if cluster_col is None:
# IMPORTANT
# put the unscaled matrix in adata.raw
adata.raw = adata
log_scale(adata)
sc.tl.pca(adata, n_comps=100)
significant_pc_test(adata, p_cutoff=0.1, update=True)
sc.pp.neighbors(adata, n_neighbors=k)
sc.tl.leiden(adata, resolution=resolution)
if cluster_plot:
sc.tl.umap(adata)
sc.pl.umap(adata, color='leiden')
# return to unscaled X, CEF need to use the unscaled matrix
adata = adata.raw.to_adata()
cluster_col = 'leiden'32 components passed P cutoff of 0.1.
Changing adata.obsm['X_pca'] from shape (16985, 100) to (16985, 32)
</code>
## Cluster Enriched Features (CEF)_____no_output_____
<code>
cluster_enriched_features(adata,
cluster_col=cluster_col,
top_n=top_n,
alpha=alpha,
stat_plot=True)Found 31 clusters to compute feature enrichment score
Computing enrichment score
Computing enrichment score FDR-corrected P values
Selected 3102 unique features
</code>
## Save AnnData_____no_output_____
<code>
# save adata
adata.write_h5ad(adata_path)
adata_____no_output_____
</code>
| {
"repository": "mukamel-lab/ALLCools",
"path": "docs/allcools/cell_level/step_by_step/100kb/04a-PreclusteringAndClusterEnrichedFeatures-mCH.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": 5,
"size": 148170,
"hexsha": "d0483d0b5db7536f1f0f5b727c2bee7ab5f25c1f",
"max_line_length": 108588,
"avg_line_length": 482.6384364821,
"alphanum_fraction": 0.9466221232
} |
# Notebook from majkelx/astwro
Path: examples/deriving_psf_stenson.ipynb
# Deriving a Point-Spread Function in a Crowded Field
### following Appendix III of Peter Stetson's *User's Manual for DAOPHOT II*
### Using `pydaophot` from the `astwro` python package_____no_output_____All *italic* text here has been taken from Stetson's manual._____no_output_____The only input file for this procedure is a FITS file containing the reference frame image. Here we use a sample FITS from the astwro package (NGC6871, I filter, 20s frame). Below we get the filepath for this image and create instances of the `Daophot` and `Allstar` classes - wrappers around `daophot` and `allstar` respectively.
One should also provide `daophot.opt`, `photo.opt` and `allstar.opt` to the appropriate constructors. Here the default, built-in sample `opt` files are used._____no_output_____
<code>
from astwro.sampledata import fits_image
frame = fits_image()_____no_output_____
</code>
The `Daophot` object creates a temporary working directory (*runner directory*), which is passed to the `Allstar` constructor so that both share it._____no_output_____
<code>
from astwro.pydaophot import Daophot, Allstar
dp = Daophot(image=frame)
al = Allstar(dir=dp.dir)_____no_output_____
</code>
Daophot was given the FITS file at construction, so it will be automatically **ATTACH**ed. _____no_output_____#### *(1) Run FIND on your frame*_____no_output_____The Daophot `FIND` parameters `Number of frames averaged, summed` default to `1,1`; below they are provided explicitly for clarity._____no_output_____
<code>
res = dp.FInd(frames_av=1, frames_sum=1)_____no_output_____
</code>
Check some of the results returned by `FIND`; every `daophot` command method returns a results object._____no_output_____
<code>
print ("{} pixels analysed, sky estimate {}, {} stars found.".format(res.pixels, res.sky, res.stars))9640 pixels analysed, sky estimate 12.665, 4166 stars found.
</code>
Also, take a look into *runner directory*_____no_output_____
<code>
!ls -lt $dp.dirtotal 536
lrwxr-xr-x 1 michal staff 60 Jun 26 18:25 63d38b_NGC6871.fits -> /Users/michal/projects/astwro/astwro/sampledata/NGC6871.fits
lrwxr-xr-x 1 michal staff 65 Jun 26 18:25 allstar.opt -> /Users/michal/projects/astwro/astwro/pydaophot/config/allstar.opt
lrwxr-xr-x 1 michal staff 65 Jun 26 18:25 daophot.opt -> /Users/michal/projects/astwro/astwro/pydaophot/config/daophot.opt
-rw-r--r-- 1 michal staff 258438 Jun 26 18:25 i.coo
</code>
We see symlinks to the input image and `opt` files, and `i.coo` - the result of `FIND`._____no_output_____
#### *(2) Run PHOTOMETRY on your frame*_____no_output_____Below we run photometry, explicitly providing the aperture radius `A1` and the `IS`, `OS` sky radii._____no_output_____
<code>
res = dp.PHotometry(apertures=[8], IS=35, OS=50)_____no_output_____
</code>
Lists of stars generated by daophot commands can easily be obtained as `astwro.starlist.Starlist` objects, which are essentially `pandas.DataFrame`s:_____no_output_____
<code>
stars = res.photometry_starlist_____no_output_____
</code>
Let's check the 10 stars with the smallest A1 error (``mag_err`` column), [pandas](https://pandas.pydata.org) style._____no_output_____
<code>
stars.sort_values('mag_err').iloc[:10]_____no_output_____
</code>
#### *(3) SORT the output from PHOTOMETRY*
*in order of increasing apparent magnitude (decreasing
stellar brightness) with the renumbering feature. This step is optional but it can be more convenient than not.*_____no_output_____The `SORT` command of `daophot` is not implemented (yet) in `pydaophot`, but we can do the sorting ourselves._____no_output_____
<code>
sorted_stars = stars.sort_values('mag')
sorted_stars.renumber()_____no_output_____
</code>
Here we write the sorted list back into the photometry file under the default name (overwriting the existing one), because it's convenient to use the default files in the next commands._____no_output_____
<code>
dp.write_starlist(sorted_stars, 'i.ap')_____no_output_____!head -n20 $dp.PHotometry_result.photometry_file NL NX NY LOWBAD HIGHBAD THRESH AP1 PH/ADU RNOISE FRAD
2 1250 1150 -3.9 31000.0 5.81 8.00 9.00 1.70 6.00
1 577.370 666.480 12.118
15.649 6.55 0.52 0.0012
2 982.570 733.500 12.430
12.626 2.27 0.08 0.0012
3 702.670 102.050 12.533
12.755 2.45 0.08 0.0012
4 603.270 675.390 12.727
16.515 7.82 0.58 0.0020
5 502.640 177.660 12.741
12.794 2.41 0.09 0.0014
6 1165.500 636.910 12.742
dp.PHotometry_result.photometry_file_____no_output_____
</code>
#### *(4) PICK to generate a set of likely PSF stars*
*How many stars you want to use is a function of the degree of variation you expect and the frequency with which stars are contaminated by cosmic rays or neighbor stars. [...]*_____no_output_____
<code>
pick_res = dp.PIck(faintest_mag=20, number_of_stars_to_pick=40)_____no_output_____
</code>
If no error is reported, the symlink to the image file and all daophot output files (`i.*`) are in the working directory of the runner:_____no_output_____
<code>
ls $dp.dir63d38b_NGC6871.fits@ daophot.opt@ i.coo
allstar.opt@ i.ap i.lst
</code>
One may examine and improve the `i.lst` list of PSF stars, or use `astwro.tools.gapick.py` to obtain a list of PSF stars optimised by a genetic algorithm._____no_output_____#### *(5) Run PSF *
*tell it the name of your complete (sorted renumbered) aperture photometry file, the name of the file with the list of PSF stars, and the name of the disk file you want the point spread function stored in (the default should be fine) [...]*
*If the frame is crowded it is probably worth your while to generate the first PSF with the "VARIABLE PSF" option set to -1 --- pure analytic PSF. That way, the companions will not generate ghosts in the model PSF that will come back to haunt you later. You should also have specified a reasonably generous fitting radius --- these stars have been preselected to be as isolated as possible and you want the best fits you can get. But remember to avoid letting neighbor stars intrude within one fitting radius of the center of any PSF star.*
_____no_output_____For illustration, we will set the `VARIABLE PSF` option before calling `PSf()`._____no_output_____
<code>
dp.set_options('VARIABLE PSF', 2)
psf_res = dp.PSf()_____no_output_____
</code>
#### *(6) Run GROUP and NSTAR or ALLSTAR on your NEI file*
*If your PSF stars have many neighbors this may take some minutes of real time. Please be patient or submit it as a batch job and perform steps on your next frame while you wait.*_____no_output_____We use `allstar` (the `GROUP` and `NSTAR` commands are not implemented in the current version of `pydaophot`). We use the `Allstar` object `al` prepared above, operating on the same runner dir as `dp`._____no_output_____As parameters we set the input image (we didn't do that in the constructor) and the `nei` file produced by `PSf()`. Rather than hard-coding the file name, we use the `psf_res.nei_file` property.
Finally we ask `allstar` to produce a subtracted FITS image._____no_output_____
<code>
alls_res = al.ALlstar(image_file=frame, stars=psf_res.nei_file, subtracted_image_file='is.fits')_____no_output_____
</code>
All result objects have a `get_buffer()` method, useful for looking up the unparsed `daophot` or `allstar` output:_____no_output_____
<code>
print (alls_res.get_buffer()) 63d38b_NGC6871...
Picture size: 1250 1150
File with the PSF (default 63d38b_NGC6871.psf): Input file (default 63d38b_NGC6871.ap): File for results (default i.als): Name for subtracted image (default is):
915 stars. <<
I = iteration number
R = number of stars that remain
D = number of stars that disappeared
C = number of stars that converged
I R D C
1 915 0 0 <<
2 915 0 0 <<
3 915 0 0 <<
4 724 0 191 <<
5 385 0 530 <<
6 211 0 704 <<
7 110 0 805 <<
8 67 0 848 <<
9 40 0 875 <<
10 0 0 915
Finished i
Good bye.
</code>
#### *(8) EXIT from DAOPHOT and send this new picture to the image display*
*Examine each of the PSF stars and its environs. Have all of the PSF stars subtracted out more or less cleanly, or should some of them be rejected from further use as PSF stars? (If so use a text editor to delete these stars from the LST file.) Have the neighbors mostly disappeared, or have they left behind big zits? Have you uncovered any faint companions that FIND missed?[...]* _____no_output_____The absolute path to the subtracted file (as for most output files) is available as a property of the result:_____no_output_____
<code>
sub_img = alls_res.subtracted_image_file_____no_output_____
</code>
We can also generate a DS9 region file for the PSF stars:_____no_output_____
<code>
from astwro.starlist.ds9 import write_ds9_regions
reg_file_path = dp.file_from_runner_dir('lst.reg')
write_ds9_regions(pick_res.picked_starlist, reg_file_path)_____no_output_____# One can run ds9 directly from the notebook:
!ds9 $sub_img -regions $reg_file_path _____no_output_____
</code>
#### *(9) Back in DAOPHOT II ATTACH the original picture and run SUBSTAR*
*specifying the file created in step (6) or in step (8f) as the stars to subtract, and the stars in the LST file as the stars to keep.*_____no_output_____A look into the runner directory:_____no_output_____
<code>
ls $al.dir63d38b_NGC6871.fits@  i.ap   i.nei
allstar.opt@           i.coo  i.psf
daophot.opt@           i.err  is.fits
i.als i.lst lst.reg
sub_res = dp.SUbstar(subtract=alls_res.profile_photometry_file, leave_in=pick_res.picked_stars_file)_____no_output_____
</code>
*You have now created a new picture which has the PSF stars still in it but from which the known neighbors of these PSF stars have been mostly removed*_____no_output_____#### (10) ATTACH the new star subtracted frame and repeat step (5) to derive a new point spread function
#### (11+...) Run GROUP NSTAR or ALLSTAR _____no_output_____
<code>
for i in range(3):
print ("Iteration {}: Allstar chi: {}".format(i, alls_res.als_stars.chi.mean()))
dp.image = 'is.fits'
respsf = dp.PSf()
print ("Iteration {}: PSF chi: {}".format(i, respsf.chi))
alls_res = al.ALlstar(image_file=frame, stars='i.nei')
dp.image = frame
dp.SUbstar(subtract='i.als', leave_in='i.lst')
print ("Final: Allstar chi: {}".format(alls_res.als_stars.chi.mean()))Iteration 0: Allstar chi: 1.14670601093
Iteration 0: PSF chi: 0.0249
Iteration 1: Allstar chi: 1.13409726776
Iteration 1: PSF chi: 0.0249
Iteration 2: Allstar chi: 1.1332852459
Iteration 2: PSF chi: 0.0249
Final: Allstar chi: 1.13326229508
alls_res.als_stars_____no_output_____
</code>
Check the last image, in which the neighbours of the PSF stars have been subtracted._____no_output_____
<code>
!ds9 $dp.SUbstar_result.subtracted_image_file -regions $reg_file_path _____no_output_____
</code>
*Once you have produced a frame in which the PSF stars and their neighbors all subtract out cleanly, one more time through PSF should produce a point-spread function you can be proud of.*_____no_output_____
<code>
dp.image = 'is.fits'
psf_res = dp.PSf()
print ("PSF file: {}".format(psf_res.psf_file))PSF file: /var/folders/kt/1jqvm3s51jd4qbxns7dc43rw0000gq/T/pydaophot_tmpDu5p8c/i.psf
</code>
| {
"repository": "majkelx/astwro",
"path": "examples/deriving_psf_stenson.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 6,
"size": 55950,
"hexsha": "d048ac1a0a681fda2b0b4a5390cdc2744e6b2e10",
"max_line_length": 564,
"avg_line_length": 31.5566835871,
"alphanum_fraction": 0.3923503128
} |
# Notebook from lukassnoek/NI-edu
Path: NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
# Fmriprep
Today, many excellent general-purpose, open-source neuroimaging software packages exist: [SPM](https://www.fil.ion.ucl.ac.uk/spm/) (Matlab-based), [FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki), [AFNI](https://afni.nimh.nih.gov/), and [Freesurfer](https://surfer.nmr.mgh.harvard.edu/) (with a shell interface). We argue that there is not one single package that is always the best choice for every step in your preprocessing pipeline. Fortunately, people from the [Poldrack lab](https://poldracklab.stanford.edu/) created [fmriprep](https://fmriprep.readthedocs.io/en/stable/), a software package that offers a preprocessing pipeline which "glues together" functionality from different neuroimaging software packages (such as Freesurfer and FSL), such that each step in the pipeline is executed by the software package that (arguably) does it best.
We have been using *Fmriprep* for preprocessing of our own data and we strongly recommend it. It is relatively simple to use, requires minimal user intervention, and creates extensive visual reports for users to do visual quality control (to check whether each step in the pipeline worked as expected). The *only* requirement to use Fmriprep is that your data is formatted as specified in the Brain Imaging Data Structure (BIDS)._____no_output_____## The BIDS-format
[BIDS](https://bids.neuroimaging.io/) is a specification on how to format, name, and organize your MRI dataset. It specifies the file format of MRI files (i.e., compressed Nifti: `.nii.gz` files), lays out rules for how you should name your files (i.e., with "key-value" pairs, such as: `sub-01_ses-1_task-1back_run-1_bold.nii.gz`), and outlines the file/folder structure of your dataset (where each subject has its own directory with separate subdirectories for different MRI modalities, including fieldmaps, functional, diffusion, and anatomical MRI). Additionally, it specifies a way to include "metadata" about the (MRI) files in your dataset with [JSON](https://en.wikipedia.org/wiki/JSON) files: plain-text files with key-value pairs (in the form "parameter: value"). Given that your dataset is BIDS-formatted and contains the necessary metadata, you can use `fmriprep` on your dataset. (You can use the awesome [bids-validator](https://bids-standard.github.io/bids-validator/) to see whether your dataset is completely valid according to BIDS.)
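To make the metadata idea a bit more concrete, here is a minimal sketch of reading such a JSON sidecar with Python's standard `json` module. The file path below is hypothetical and the fields present depend on your dataset (`RepetitionTime` is a typical one for functional runs), so adapt both to your own data._____no_output_____
<code>
import json

# Hypothetical path to a BIDS JSON sidecar; adjust to your own dataset
sidecar_path = 'bids/sub-01/ses-1/func/sub-01_ses-1_task-1back_run-1_bold.json'

with open(sidecar_path) as f:
    metadata = json.load(f)  # plain-text key-value pairs, e.g. {"RepetitionTime": 2.0, ...}

# Look up a parameter commonly stored for functional runs
print(metadata.get('RepetitionTime'))_____no_output_____
</code>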
There are different tools to convert your "raw" scanner data (e.g., in DICOM or PAR/REC format) to BIDS, including [heudiconv](https://heudiconv.readthedocs.io/en/latest/), [bidscoin](https://github.com/Donders-Institute/bidscoin), and [bidsify](https://github.com/NILAB-UvA/bidsify) (created by Lukas). We'll skip over this step and assume that you'll be able to convert your data to BIDS._____no_output_____## Installing Fmriprep
Now, having your data in BIDS is an important step in getting started with Fmriprep. The next step is installing the package. Technically, Fmriprep is a Python package, so it can be installed as such (using `pip install fmriprep`), but we do not recommend this "bare metal" installation, because it depends on a host of neuroimaging software packages (including FSL, Freesurfer, AFNI, and ANTs). So if you'd want to directly install Fmriprep, you'd need to install those extra neuroimaging software packages as well (which is not worth your time, trust us).
Fortunately, Fmriprep also offers a "Docker container" in which Fmriprep and all the associated dependencies are already installed. [Docker](https://www.docker.com/) is software that allows you to create "containers", which are like lightweight "virtual machines" ([VM](https://en.wikipedia.org/wiki/Virtual_machine)) that are like a separate (Linux-based) operating system with a specific software configuration. You can download the Fmriprep-specific docker "image", which is like a "recipe", build the Fmriprep-specific "container" according to this "recipe" on your computer, and finally use this container to run Fmriprep on your computer as if all dependencies were actually installed on your computer! Docker is available on Linux, Mac, and Windows. To install Docker, google something like "install docker for {Windows,Mac,Linux}" to find a google walkthrough.
Note that you need administrator ("root") privilege on your computer (which is likely the case for your own computer, but not on shared analysis servers) to run Docker. If you don't have root access on your computer/server, ask you administrator/sysadmin to install [singularity](https://fmriprep.readthedocs.io/en/stable/installation.html#singularity-container), which allows you to convert Docker images to Singularity images, which you can run without administrator privileges.
Assuming you have installed Docker, you can run the "containerized" Fmriprep from your command line directly, which involves a fairly long and complicated command (i.e., `docker run -it --rm -v bids_dir /data ... etc`), or using the `fmriprep-docker` Python package. This `fmriprep-docker` package is just a simple wrapper around the appropriate Docker command to run the complicated "containerized" Fmriprep command. We strongly recommend this method.
To install `fmriprep-docker`, you can use `pip` (from your command line):
```
pip install fmriprep-docker
```
Now, you should have access to the `fmriprep-docker` command on your command line and you're ready to start preprocessing your dataset. For more detailed information about installing Fmriprep, check out their [website](https://fmriprep.readthedocs.io/en/stable/installation.html)._____no_output_____## Running Fmriprep
Assuming you have Docker and `fmriprep-docker` installed, you're ready to run Fmriprep. The basic format of the `fmriprep-docker` command is as follows:
```
fmriprep-docker <your bids-folder> <your output-folder>
```
This means that `fmriprep-docker` has two mandatory positional arguments: the first one being your BIDS-folder (i.e., the path to your folder with BIDS-formatted data), and the second one being the output-folder (i.e., where you want Fmriprep to output the preprocessed data). We recommend setting your output-folder to a subfolder of your BIDS-folder named "derivatives": `<your bids-folder>/derivatives`.
Then, you can add a bunch of extra "flags" (parameters) to the command to specify the preprocessing pipeline as you like it. We highlight a couple of important ones here, but for the full list of parameters, check out the [Fmriprep](https://fmriprep.readthedocs.io/en/stable/usage.html) website.
### Freesurfer
When running Fmriprep from Docker, you don't need to have Freesurfer installed, but you *do* need a Freesurfer license. You can download this here: https://surfer.nmr.mgh.harvard.edu/fswiki/License. Then, you need to supply the `--fs-license-file <path to license file>` parameter to your `fmriprep-docker` command:
```
fmriprep-docker <your bids-folder> <your output-folder> --fs-license-file /home/lukas/license.txt
```
### Configuring what is preprocessed
If you just run Fmriprep with the mandatory BIDS-folder and output-folder arguments, it will preprocess everything it finds in the BIDS-folder. Sometimes, however, you may just want to run one (or several) specific participants, or one (or more) specific tasks (e.g., only the MRI files associated with the localizer runs, but not the working memory runs). You can do this by adding the `--participant` and `--task` flags to the command:
```
fmriprep-docker <your bids-folder> <your output-folder> --participant sub-01 --task localizer
```
You can also specify some things to be ignored during preprocessing using the `--ignore` parameters (like `fieldmaps`):
```
fmriprep-docker <your bids-folder> <your output-folder> --ignore fieldmaps
```
### Handling performance
It's very easy to parallelize the preprocessing pipeline by setting the `--nthreads` and `--omp-nthreads` parameters, which refer to the number of threads that Fmriprep should use. Note that laptops usually have 4 threads available (but analysis servers usually have more!). You can also specify the maximum amount of RAM that Fmriprep is allowed to use with the `--mem_mb` parameter. So, if you for example want to run Fmriprep with 3 threads and a maximum of 3GB of RAM, you can run:
```
fmriprep-docker <your bids-folder> <your output-folder> --nthreads 3 --omp-nthreads 3 --mem_mb 3000
```
In our experience, however, specifying the `--mem_mb` parameter is rarely necessary if you don't parallelize too much.
### Output spaces
Specifying your "output spaces" (with the `--output-spaces` flag) tells Fmriprep to what "space(s)" you want your preprocessed data registered to. For example, you can specify `T1w` to have your functional data registered to the participant's T1 scan. You can, instead or in addition to, also specify some standard template, like the MNI template (`MNI152NLin2009cAsym` or `MNI152NLin6Asym`). You can even specify surface templates if you want (like `fsaverage`), which will sample your volumetric functional data onto the surface (as computed by freesurfer). In addition to the specific output space(s), you can add a resolution "modifier" to the parameter to specify in what spatial resolution you want your resampled data to be. Without any resolution modifier, the native resolution of your functional files (e.g., $3\times3\times3$ mm.) will be kept intact. But if you want to upsample your resampled files to 2mm, you can add `YourTemplate:2mm`. For example, if you want to use the FSL-style MNI template (`MNI152NLin6Asym`) resampled at 2 mm, you'd use:
```
fmriprep-docker <your bids-folder> <your output-folder> --output-spaces MNI152NLin6Asym:2mm
```
You can of course specify multiple output-spaces:
```
fmriprep-docker <your bids-folder> <your output-folder> --output-spaces MNI152NLin6Asym:2mm T1w fsaverage
```
### Other parameters
There are many options that you can set when running Fmriprep. Check out the [Fmriprep website](https://fmriprep.readthedocs.io/) (under "Usage") for a list of all options!_____no_output_____## Issues, errors, and troubleshooting
While Fmriprep often works out-of-the-box (assuming your data are properly BIDS-formatted), it may happen that it crashes or otherwise gives unexpected results. A great place to start looking for help is [neurostars.org](https://neurostars.org). This website is dedicated to helping neuroscientists with neuroimaging/neuroscience-related questions. Make sure to check whether your question has been asked here already and, if not, pose it here!
If you encounter Fmriprep-specific bugs, you can also submit an issue at the [Github repository](https://github.com/poldracklab/fmriprep) of Fmriprep._____no_output_____## Fmriprep output/reports
After Fmriprep has run, it outputs, for each participant separately, a directory with results (i.e., preprocessed files) and an HTML-file with a summary and figures of the different steps in the preprocessing pipeline.
We ran Fmriprep on a single run/task (`flocBLOCKED`) from a single subject (`sub-03`) with the following command:
```
fmriprep-docker /home/lsnoek1/ni-edu/bids /home/lsnoek1/ni-edu/bids/derivatives --participant-label sub-03 --output-spaces T1w MNI152NLin2009cAsym
```
We've copied the Fmriprep output for this subject (`sub-03`) in the `fmriprep` subdirectory of the `week_4` directory. Let's check its contents:_____no_output_____
<code>
import os
print(os.listdir('bids/derivatives/fmriprep'))_____no_output_____
</code>
As said, Fmriprep outputs a directory with results (`sub-03`) and an associated HTML-file with a summary of the (intermediate and final) results. Let's check the directory with results first:_____no_output_____
<code>
from pprint import pprint  # pprint stands for "pretty print"
sub_path = os.path.join('bids/derivatives/fmriprep', 'sub-03')
pprint(sorted(os.listdir(sub_path)))_____no_output_____
</code>
The `figures` directory contains several figures with the result of different preprocessing stages (like functional → high-res anatomical registration), but these figures are also included in the HTML-file, so we'll leave that for now. The other two directories, `anat` and `func`, contain the preprocessed anatomical and functional files, respectively. Let's inspect the `anat` directory:_____no_output_____
<code>
anat_path = os.path.join(sub_path, 'anat')
pprint(os.listdir(anat_path))_____no_output_____
</code>
Here, we see a couple of different files. There are both (preprocessed) nifti images (`*.nii.gz`) and associated meta-data (plain-text files in JSON format: `*.json`).
Importantly, the nifti outputs are in two different spaces: one set of files are in the original "T1 space", so without any resampling to another space (these files have the same resolution and orientation as the original T1 anatomical scan). For example, the `sub_03_desc-preproc_T1w.nii.gz` scan is the preprocessed (i.e., bias-corrected) T1 scan. In addition, most files are also available in `MNI152NLin2009cAsym` space, a standard template. For example, the `sub-03_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz` is the same file as `sub_03_desc-preproc_T1w.nii.gz`, but resampled to the `MNI152NLin2009cAsym` template. In addition, there are subject-specific brain parcellations (the `*aparcaseg_dseg.nii.gz `and `*aseg_dseg.nii.gz` files), files with registration parameters (`*from- ... -to ...` files), probabilistic tissue segmentation files (`*label-{CSF,GM,WM}_probseg.nii.gz`) files, and brain masks (to outline what is brain and not skull/dura/etc; `*brain_mask.nii.gz`).
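As a quick illustrative sketch (not part of the original notebook), one way to pick out only the outputs registered to the MNI template is to filter on the `space-` entity in the file names, reusing `anat_path` from the cell above:_____no_output_____
<code>
import os
from pprint import pprint

# Keep only the anatomical outputs resampled to the MNI152NLin2009cAsym template
mni_files = [f for f in os.listdir(anat_path) if 'space-MNI152NLin2009cAsym' in f]
pprint(sorted(mni_files))_____no_output_____
</code>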
Again, on the [Fmriprep website](https://fmriprep.readthedocs.io/), you can find more information about the specific outputs.
Now, let's check out the `func` directory:_____no_output_____
<code>
func_path = os.path.join(sub_path, 'func')
pprint(os.listdir(func_path))_____no_output_____
</code>
Again, like the files in the `anat` folder, the functional outputs are available in two spaces: `T1w` and `MNI152NLin2009cAsym`. In terms of actual images, there are preprocessed BOLD files (ending in `preproc_bold.nii.gz`), the functional volume used for "functional → anatomical" registration (ending in `boldref.nii.gz`), brain parcellations in functional space (ending in `dseg.nii.gz`), and brain masks (ending in `brain_mask.nii.gz`). In addition, there are files with "confounds" (ending in `confounds_regressors.tsv`) which contain variables that you might want to include as nuisance regressors in your first-level analysis. These confound files are spreadsheet-like files (like `csv` files, but instead of being comma-delimited, they are tab-delimited) and can be easily loaded in Python using the [pandas](https://pandas.pydata.org/) package:_____no_output_____
<code>
import pandas as pd
conf_path = os.path.join(func_path, 'sub-03_task-flocBLOCKED_desc-confounds_regressors.tsv')
conf = pd.read_csv(conf_path, sep='\t')
conf.head()_____no_output_____
</code>
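As a hedged sketch of how these confounds (explained in more detail below) might be used downstream, the snippet selects a typical subset, namely the six motion parameters plus the CSF and white-matter signals, as candidate nuisance regressors. The exact column names can differ between Fmriprep versions, so treat them as assumptions and check `conf.columns` on your own data._____no_output_____
<code>
# Candidate nuisance regressors; column names assume Fmriprep's usual naming scheme
motion_cols = ['trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z']
extra_cols = ['csf', 'white_matter']

# Guard against version differences by keeping only columns that actually exist
available = [c for c in motion_cols + extra_cols if c in conf.columns]
nuisance = conf[available]
nuisance.head()_____no_output_____
</code>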
Confound files from Fmriprep contain a large set of confounds, ranging from motion parameters (`rot_x`, `rot_y`, `rot_z`, `trans_x`, `trans_y`, and `trans_z`) and their derivatives (`*derivative1`) and squares (`*_power2`) to the average signal from the brain's white matter and cerebrospinal fluid (CSF), which should contain sources of noise such as respiratory, cardiac, or motion-related signals (but not signal from neural sources, which should be largely constrained to gray matter). For a full list and explanation of Fmriprep's estimated confounds, check their website. Also, check [this thread](https://neurostars.org/t/confounds-from-fmriprep-which-one-would-you-use-for-glm/326) on Neurostars for a discussion on which confounds to include in your analyses._____no_output_____In addition to the actual preprocessed outputs, Fmriprep also provides you with a nice (visual) summary of the different (major) preprocessing steps in an HTML-file, which you'd normally open in any standard browser to view. Here, we load this file for our example participant (`sub-03`) inside the notebook below. Scroll through it to see which preprocessing steps are highlighted. Note that the images from the HTML-file are not properly rendered in Jupyter notebooks, but you can right-click the image links (e.g., `sub-03/figures/sub-03_dseg.svg`) and click "Open link in new tab" to view the image._____no_output_____
<code>
from IPython.display import IFrame
IFrame(src='./bids/derivatives/fmriprep/sub-03.html', width=700, height=600)_____no_output_____
</code>
| {
"repository": "lukassnoek/NI-edu",
"path": "NI-edu/fMRI-introduction/week_4/fmriprep.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 10,
"size": 19562,
"hexsha": "d04b199cda9d2e9f761c71d359e098bc8ab800fe",
"max_line_length": 1077,
"avg_line_length": 69.8642857143,
"alphanum_fraction": 0.7042224721
} |
# Notebook from nishadalal120/NEU-365P-385L-Spring-2021
Path: homework/key-random_walks.ipynb
# Homework - Random Walks (18 pts)_____no_output_____## Continuous random walk in three dimensions
Write a program simulating a three-dimensional random walk in a continuous space. Let 1000 independent particles all start at random positions within a cube with corners at (0,0,0) and (1,1,1). At each time step each particle will move in a random direction by a random amount between -1 and 1 along each axis (x, y, z)._____no_output_____1. (3 pts) Create data structure(s) to store your simulated particle positions for each of 2000 time steps and initialize them with the particles starting positions._____no_output_____
<code>
import numpy as np
numTimeSteps = 2000
numParticles = 1000
positions = np.zeros( (numParticles, 3, numTimeSteps) )
# initialize starting positions on first time step
positions[:,:,0] = np.random.random( (numParticles, 3) )_____no_output_____
</code>
2. (3 pts) Write code to run your simulation for 2000 time steps._____no_output_____
<code>
for t in range(numTimeSteps-1):
# 2 * [0 to 1] - 1 --> [-1 to 1]
jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles_____no_output_____# just for fun, here's another way to run the simulation above without a loop
jumpsForAllParticlesAndAllTimeSteps = 2 * np.random.random((numParticles, 3, numTimeSteps-1)) - 1
positions[:,:,1:] = positions[:,:,0].reshape(numParticles, 3, 1) + np.cumsum(jumpsForAllParticlesAndAllTimeSteps, axis=2)_____no_output_____
</code>
3. (3 pts) Generate a series of four 3D scatter plots at selected time points to visually convey what is going on. Arrange the plots in a single row from left to right. Make sure you indicate which time points you are showing._____no_output_____
<code>
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
lim = 70
plt.figure(figsize=(12,3))
for (i,t) in enumerate([0, 100, 1000, 1999]):
ax = plt.subplot(1, 4, i+1, projection='3d')
x = positions[:,0,t]
y = positions[:,1,t]
z = positions[:,2,t]
ax.scatter(x, y, z)
plt.xlim([-lim, lim])
plt.ylim([-lim, lim])
ax.set_zlim([-lim, lim])
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Time {t}");_____no_output_____
</code>
4. (3 pts) Draw the path of a single particle (your choice) across all time steps in a 3D plot._____no_output_____
<code>
ax = plt.subplot(1, 1, 1, projection='3d')
i = 10 # particle index
x = positions[i,0,:]
y = positions[i,1,:]
z = positions[i,2,:]
plt.plot(x, y, z)
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Particle {i}");_____no_output_____
</code>
5. (3 pts) Find the minimum, maximum, mean and variance for the jump distances of all particles throughout the entire simulation. Jump distance is the euclidean distance moved on each time step $\sqrt(dx^2+dy^2+dz^2)$. *Hint: numpy makes this very simple.*_____no_output_____
<code>
jumpsXYZForAllParticlesAndAllTimeSteps = positions[:,:,1:] - positions[:,:,:-1]
jumpDistancesForAllParticlesAndAllTimeSteps = np.sqrt(np.sum(jumpsXYZForAllParticlesAndAllTimeSteps**2, axis=1))
print(f"min = {jumpDistancesForAllParticlesAndAllTimeSteps.min()}")
print(f"max = {jumpDistancesForAllParticlesAndAllTimeSteps.max()}")
print(f"mean = {jumpDistancesForAllParticlesAndAllTimeSteps.mean()}")
print(f"var = {jumpDistancesForAllParticlesAndAllTimeSteps.var()}")min = 0.0052364433932233926
max = 1.7230154410954457
mean = 0.9602742572616196
var = 0.07749699927626445
</code>
6. (3 pts) Repeat the simulation, but this time confine the particles to a unit cell of dimension 10x10x10. Make it so that if a particle leaves one edge of the cell, it enters on the opposite edge (this is the sort of thing most molecular dynamics simulations do). Show plots as in #3 to visualize the simulation (note that the most interesting stuff likely happens in the first 100 time steps)._____no_output_____
<code>
for t in range(numTimeSteps-1):
# 2 * [0 to 1] - 1 --> [-1 to 1]
jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles
# check for out-of-bounds and warp to opposite bound
for i in range(numParticles):
for j in range(3):
if positions[i,j,t+1] < 0:
positions[i,j,t+1] += 10
elif positions[i,j,t+1] > 10:
positions[i,j,t+1] -= 10_____no_output_____plt.figure(figsize=(12,3))
for (i,t) in enumerate([0, 3, 10, 1999]):
ax = plt.subplot(1, 4, i+1, projection='3d')
x = positions[:,0,t]
y = positions[:,1,t]
z = positions[:,2,t]
ax.scatter(x, y, z)
plt.xlim([0, 10])
plt.ylim([0, 10])
ax.set_zlim([0, 10])
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Time {t}");_____no_output_____
</code>
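As a side note (a sketch, not part of the assignment's solution): the wrap-around logic above can also be written without the inner Python loops by using `np.mod`, which maps any out-of-bounds coordinate back into the 10x10x10 cell in a single vectorized step:_____no_output_____
<code>
for t in range(numTimeSteps-1):
    jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
    # np.mod wraps every coordinate into [0, 10) along all three axes at once
    positions[:,:,t+1] = np.mod(positions[:,:,t] + jumpsForAllParticles, 10)_____no_output_____
</code>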
| {
"repository": "nishadalal120/NEU-365P-385L-Spring-2021",
"path": "homework/key-random_walks.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": 12,
"size": 240293,
"hexsha": "d04ba34b2203b2b845671c65ceb5f6103e60ca92",
"max_line_length": 119332,
"avg_line_length": 876.9817518248,
"alphanum_fraction": 0.9531363793
} |
# Notebook from aniket371/tapas
Path: notebooks/sqa_predictions.ipynb
<a href="https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/sqa_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____##### Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");_____no_output_____
<code>
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License._____no_output_____
</code>
Running a Tapas fine-tuned checkpoint
---
This notebook shows how to load and make predictions with a TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)_____no_output_____# Clone and install the repository
_____no_output_____First, let's install the code._____no_output_____
<code>
! pip install tapas-table-parsingCollecting tapas-table-parsing
Downloading tapas_table_parsing-0.0.1.dev0-py3-none-any.whl (195 kB)
[?25l
[K |█▊ | 10 kB 22.3 MB/s eta 0:00:01
[K |███▍ | 20 kB 28.7 MB/s eta 0:00:01
[K |█████ | 30 kB 16.4 MB/s eta 0:00:01
[K |██████▊ | 40 kB 11.4 MB/s eta 0:00:01
[K |████████▍ | 51 kB 5.7 MB/s eta 0:00:01
[K |██████████ | 61 kB 6.7 MB/s eta 0:00:01
[K |███████████▊ | 71 kB 7.3 MB/s eta 0:00:01
[K |█████████████▍ | 81 kB 5.6 MB/s eta 0:00:01
[K |███████████████ | 92 kB 6.2 MB/s eta 0:00:01
[K |████████████████▊ | 102 kB 6.8 MB/s eta 0:00:01
[K |██████████████████▍ | 112 kB 6.8 MB/s eta 0:00:01
[K |████████████████████ | 122 kB 6.8 MB/s eta 0:00:01
[K |█████████████████████▉ | 133 kB 6.8 MB/s eta 0:00:01
[K |███████████████████████▌ | 143 kB 6.8 MB/s eta 0:00:01
[K |█████████████████████████▏ | 153 kB 6.8 MB/s eta 0:00:01
[K |██████████████████████████▉ | 163 kB 6.8 MB/s eta 0:00:01
[K |████████████████████████████▌ | 174 kB 6.8 MB/s eta 0:00:01
[K |██████████████████████████████▏ | 184 kB 6.8 MB/s eta 0:00:01
[K |███████████████████████████████▉| 194 kB 6.8 MB/s eta 0:00:01
[K |████████████████████████████████| 195 kB 6.8 MB/s
[?25hCollecting frozendict==1.2
Downloading frozendict-1.2.tar.gz (2.6 kB)
Collecting pandas~=1.0.0
Downloading pandas-1.0.5-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
[K |████████████████████████████████| 10.1 MB 51.1 MB/s
[?25hCollecting tensorflow-probability==0.10.1
Downloading tensorflow_probability-0.10.1-py2.py3-none-any.whl (3.5 MB)
[K |████████████████████████████████| 3.5 MB 43.8 MB/s
[?25hCollecting nltk~=3.5
Downloading nltk-3.7-py3-none-any.whl (1.5 MB)
[K |████████████████████████████████| 1.5 MB 47.9 MB/s
[?25hCollecting scikit-learn~=0.22.1
Downloading scikit_learn-0.22.2.post1-cp37-cp37m-manylinux1_x86_64.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 36.9 MB/s
[?25hCollecting kaggle<1.5.8
Downloading kaggle-1.5.6.tar.gz (58 kB)
[K |████████████████████████████████| 58 kB 5.4 MB/s
[?25hCollecting tf-models-official~=2.2.0
Downloading tf_models_official-2.2.2-py2.py3-none-any.whl (711 kB)
[K |████████████████████████████████| 711 kB 53.0 MB/s
[?25hCollecting tensorflow~=2.2.0
Downloading tensorflow-2.2.3-cp37-cp37m-manylinux2010_x86_64.whl (516.4 MB)
[K |████████████████████████████████| 516.4 MB 17 kB/s
[?25hCollecting apache-beam[gcp]==2.20.0
Downloading apache_beam-2.20.0-cp37-cp37m-manylinux1_x86_64.whl (3.5 MB)
[K |████████████████████████████████| 3.5 MB 45.4 MB/s
[?25hCollecting tf-slim~=1.1.0
Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
[K |████████████████████████████████| 352 kB 56.0 MB/s
[?25hRequirement already satisfied: future<1.0.0,>=0.16.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (0.16.0)
Requirement already satisfied: pytz>=2018.3 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (2018.9)
Requirement already satisfied: numpy<2,>=1.14.3 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.21.5)
Collecting fastavro<0.22,>=0.21.4
Downloading fastavro-0.21.24-cp37-cp37m-manylinux1_x86_64.whl (1.2 MB)
[K |████████████████████████████████| 1.2 MB 36.6 MB/s
[?25hRequirement already satisfied: protobuf<4,>=3.5.0.post1 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (3.17.3)
Collecting dill<0.3.2,>=0.3.1.1
Downloading dill-0.3.1.1.tar.gz (151 kB)
[K |████████████████████████████████| 151 kB 42.9 MB/s
[?25hCollecting httplib2<=0.12.0,>=0.8
Downloading httplib2-0.12.0.tar.gz (218 kB)
[K |████████████████████████████████| 218 kB 50.0 MB/s
[?25hCollecting oauth2client<4,>=2.0.1
Downloading oauth2client-3.0.0.tar.gz (77 kB)
[K |████████████████████████████████| 77 kB 5.8 MB/s
[?25hCollecting mock<3.0.0,>=1.0.1
Downloading mock-2.0.0-py2.py3-none-any.whl (56 kB)
[K |████████████████████████████████| 56 kB 4.7 MB/s
[?25hCollecting pymongo<4.0.0,>=3.8.0
Downloading pymongo-3.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (508 kB)
[K |████████████████████████████████| 508 kB 31.5 MB/s
[?25hCollecting hdfs<3.0.0,>=2.1.0
Downloading hdfs-2.7.0-py3-none-any.whl (34 kB)
Collecting typing-extensions<3.8.0,>=3.7.0
Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1
Downloading avro-python3-1.9.2.1.tar.gz (37 kB)
Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.7)
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (2.8.2)
Collecting pyarrow<0.17.0,>=0.15.1
Downloading pyarrow-0.16.0-cp37-cp37m-manylinux2014_x86_64.whl (63.1 MB)
[K |████████████████████████████████| 63.1 MB 35 kB/s
[?25hRequirement already satisfied: grpcio<2,>=1.12.1 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.44.0)
Requirement already satisfied: pydot<2,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.3.0)
Collecting google-cloud-dlp<=0.13.0,>=0.12.0
Downloading google_cloud_dlp-0.13.0-py2.py3-none-any.whl (151 kB)
[K |████████████████████████████████| 151 kB 51.1 MB/s
[?25hRequirement already satisfied: google-cloud-core<2,>=0.28.1 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.0.3)
Collecting google-cloud-bigtable<1.1.0,>=0.31.1
Downloading google_cloud_bigtable-1.0.0-py2.py3-none-any.whl (232 kB)
[K |████████████████████████████████| 232 kB 55.5 MB/s
[?25hCollecting google-cloud-language<2,>=1.3.0
Downloading google_cloud_language-1.3.0-py2.py3-none-any.whl (83 kB)
[K |████████████████████████████████| 83 kB 1.7 MB/s
[?25hCollecting google-cloud-vision<0.43.0,>=0.38.0
Downloading google_cloud_vision-0.42.0-py2.py3-none-any.whl (435 kB)
[K |████████████████████████████████| 435 kB 52.1 MB/s
[?25hRequirement already satisfied: google-cloud-bigquery<=1.24.0,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.21.0)
Collecting grpcio-gcp<1,>=0.2.2
Downloading grpcio_gcp-0.2.2-py2.py3-none-any.whl (9.4 kB)
Collecting google-cloud-spanner<1.14.0,>=1.13.0
Downloading google_cloud_spanner-1.13.0-py2.py3-none-any.whl (212 kB)
[K |████████████████████████████████| 212 kB 68.0 MB/s
[?25hCollecting google-cloud-datastore<1.8.0,>=1.7.1
Downloading google_cloud_datastore-1.7.4-py2.py3-none-any.whl (82 kB)
[K |████████████████████████████████| 82 kB 1.1 MB/s
[?25hCollecting cachetools<4,>=3.1.0
Downloading cachetools-3.1.1-py2.py3-none-any.whl (11 kB)
Collecting google-cloud-videointelligence<1.14.0,>=1.8.0
Downloading google_cloud_videointelligence-1.13.0-py2.py3-none-any.whl (177 kB)
[K |████████████████████████████████| 177 kB 64.9 MB/s
[?25hCollecting google-cloud-pubsub<1.1.0,>=0.39.0
Downloading google_cloud_pubsub-1.0.2-py2.py3-none-any.whl (118 kB)
[K |████████████████████████████████| 118 kB 68.4 MB/s
[?25hCollecting google-apitools<0.5.29,>=0.5.28
Downloading google-apitools-0.5.28.tar.gz (172 kB)
[K |████████████████████████████████| 172 kB 65.3 MB/s
[?25hRequirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability==0.10.1->tapas-table-parsing) (4.4.2)
Requirement already satisfied: gast>=0.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability==0.10.1->tapas-table-parsing) (0.5.3)
Requirement already satisfied: cloudpickle==1.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability==0.10.1->tapas-table-parsing) (1.3.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-probability==0.10.1->tapas-table-parsing) (1.15.0)
Collecting fasteners>=0.14
Downloading fasteners-0.17.3-py3-none-any.whl (18 kB)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery<=1.24.0,>=1.6.0->apache-beam[gcp]==2.20.0->tapas-table-parsing) (0.4.1)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.26.3)
Collecting grpc-google-iam-v1<0.13dev,>=0.12.3
Downloading grpc-google-iam-v1-0.12.3.tar.gz (13 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.35.0)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (57.4.0)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (21.3)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (2.23.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (1.56.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (4.8)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (0.2.8)
Requirement already satisfied: docopt in /usr/local/lib/python3.7/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam[gcp]==2.20.0->tapas-table-parsing) (0.6.2)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from kaggle<1.5.8->tapas-table-parsing) (1.24.3)
Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from kaggle<1.5.8->tapas-table-parsing) (2021.10.8)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from kaggle<1.5.8->tapas-table-parsing) (4.63.0)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.7/dist-packages (from kaggle<1.5.8->tapas-table-parsing) (6.1.1)
Collecting pbr>=0.11
Downloading pbr-5.8.1-py2.py3-none-any.whl (113 kB)
[K |████████████████████████████████| 113 kB 63.8 MB/s
[?25hRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from nltk~=3.5->tapas-table-parsing) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from nltk~=3.5->tapas-table-parsing) (1.1.0)
Collecting regex>=2021.8.3
Downloading regex-2022.3.15-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (749 kB)
[K |████████████████████████████████| 749 kB 45.4 MB/s
[?25hRequirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (0.4.8)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=14.3->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (3.0.7)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1->apache-beam[gcp]==2.20.0->tapas-table-parsing) (3.0.4)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn~=0.22.1->tapas-table-parsing) (1.4.1)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (1.6.3)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (1.14.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (3.3.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (0.37.1)
Collecting tensorboard<2.3.0,>=2.2.0
Downloading tensorboard-2.2.2-py3-none-any.whl (3.0 MB)
[K |████████████████████████████████| 3.0 MB 39.4 MB/s
[?25hCollecting h5py<2.11.0,>=2.10.0
Downloading h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
[K |████████████████████████████████| 2.9 MB 44.0 MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (0.2.0)
Collecting numpy<2,>=1.14.3
Downloading numpy-1.18.5-cp37-cp37m-manylinux1_x86_64.whl (20.1 MB)
[K |████████████████████████████████| 20.1 MB 82.3 MB/s
[?25hCollecting tensorflow-estimator<2.3.0,>=2.2.0
Downloading tensorflow_estimator-2.2.0-py2.py3-none-any.whl (454 kB)
[K |████████████████████████████████| 454 kB 65.6 MB/s
[?25hCollecting gast>=0.3.2
Downloading gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (1.1.2)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow~=2.2.0->tapas-table-parsing) (1.0.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (0.4.6)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (1.8.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (3.3.6)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (4.11.3)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (3.7.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow~=2.2.0->tapas-table-parsing) (3.2.0)
Collecting mlperf-compliance==0.0.10
Downloading mlperf_compliance-0.0.10-py3-none-any.whl (24 kB)
Requirement already satisfied: tensorflow-hub>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (0.12.0)
Collecting sentencepiece
Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
[K |████████████████████████████████| 1.2 MB 37.7 MB/s
[?25hCollecting typing==3.7.4.1
Downloading typing-3.7.4.1-py3-none-any.whl (25 kB)
Requirement already satisfied: gin-config in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (0.5.0)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (3.13)
Collecting opencv-python-headless
Downloading opencv_python_headless-4.5.5.64-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (47.8 MB)
[K |████████████████████████████████| 47.8 MB 49 kB/s
[?25hRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (3.2.2)
Requirement already satisfied: Cython in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (0.29.28)
Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (4.0.1)
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (5.4.8)
Collecting tensorflow-model-optimization>=0.2.1
Downloading tensorflow_model_optimization-0.7.2-py2.py3-none-any.whl (237 kB)
[K |████████████████████████████████| 237 kB 64.3 MB/s
[?25hCollecting py-cpuinfo>=3.3.0
Downloading py-cpuinfo-8.0.0.tar.gz (99 kB)
[K |████████████████████████████████| 99 kB 9.3 MB/s
[?25hRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (7.1.2)
Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (1.12.11)
Collecting dataclasses
Downloading dataclasses-0.6-py3-none-any.whl (14 kB)
Collecting tensorflow-addons
Downloading tensorflow_addons-0.16.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
[K |████████████████████████████████| 1.1 MB 37.8 MB/s
[?25hCollecting google-api-python-client>=1.6.7
Downloading google_api_python_client-2.42.0-py2.py3-none-any.whl (8.3 MB)
[K |████████████████████████████████| 8.3 MB 64.3 MB/s
[?25h Downloading google_api_python_client-2.41.0-py2.py3-none-any.whl (8.3 MB)
[K |████████████████████████████████| 8.3 MB 21.1 MB/s
[?25h Downloading google_api_python_client-2.40.0-py2.py3-none-any.whl (8.2 MB)
[K |████████████████████████████████| 8.2 MB 44.2 MB/s
[?25h Downloading google_api_python_client-2.39.0-py2.py3-none-any.whl (8.2 MB)
[K |████████████████████████████████| 8.2 MB 38.0 MB/s
[?25h Downloading google_api_python_client-2.38.0-py2.py3-none-any.whl (8.2 MB)
[K |████████████████████████████████| 8.2 MB 41.1 MB/s
[?25h Downloading google_api_python_client-2.37.0-py2.py3-none-any.whl (8.1 MB)
[K |████████████████████████████████| 8.1 MB 30.0 MB/s
[?25h Downloading google_api_python_client-2.36.0-py2.py3-none-any.whl (8.0 MB)
[K |████████████████████████████████| 8.0 MB 50.5 MB/s
[?25h Downloading google_api_python_client-2.35.0-py2.py3-none-any.whl (8.0 MB)
[K |████████████████████████████████| 8.0 MB 35.1 MB/s
[?25h Downloading google_api_python_client-2.34.0-py2.py3-none-any.whl (7.9 MB)
[K |████████████████████████████████| 7.9 MB 50.6 MB/s
[?25h Downloading google_api_python_client-2.33.0-py2.py3-none-any.whl (7.9 MB)
[K |████████████████████████████████| 7.9 MB 26.7 MB/s
[?25h Downloading google_api_python_client-2.32.0-py2.py3-none-any.whl (7.8 MB)
[K |████████████████████████████████| 7.8 MB 50.7 MB/s
[?25h Downloading google_api_python_client-2.31.0-py2.py3-none-any.whl (7.8 MB)
[K |████████████████████████████████| 7.8 MB 31.8 MB/s
[?25h Downloading google_api_python_client-2.30.0-py2.py3-none-any.whl (7.8 MB)
[K |████████████████████████████████| 7.8 MB 25.6 MB/s
[?25h Downloading google_api_python_client-2.29.0-py2.py3-none-any.whl (7.7 MB)
[K |████████████████████████████████| 7.7 MB 45.6 MB/s
[?25h Downloading google_api_python_client-2.28.0-py2.py3-none-any.whl (7.7 MB)
[K |████████████████████████████████| 7.7 MB 32.2 MB/s
[?25h Downloading google_api_python_client-2.27.0-py2.py3-none-any.whl (7.7 MB)
[K |████████████████████████████████| 7.7 MB 37.2 MB/s
[?25h Downloading google_api_python_client-2.26.1-py2.py3-none-any.whl (7.6 MB)
[K |████████████████████████████████| 7.6 MB 33.4 MB/s
[?25h Downloading google_api_python_client-2.26.0-py2.py3-none-any.whl (7.6 MB)
[K |████████████████████████████████| 7.6 MB 19.4 MB/s
[?25h Downloading google_api_python_client-2.25.0-py2.py3-none-any.whl (7.5 MB)
[K |████████████████████████████████| 7.5 MB 41.9 MB/s
[?25h Downloading google_api_python_client-2.24.0-py2.py3-none-any.whl (7.5 MB)
[K |████████████████████████████████| 7.5 MB 9.5 MB/s
[?25h Downloading google_api_python_client-2.23.0-py2.py3-none-any.whl (7.5 MB)
[K |████████████████████████████████| 7.5 MB 11.0 MB/s
[?25h Downloading google_api_python_client-2.22.0-py2.py3-none-any.whl (7.5 MB)
[K |████████████████████████████████| 7.5 MB 8.1 MB/s
[?25h Downloading google_api_python_client-2.21.0-py2.py3-none-any.whl (7.5 MB)
[K |████████████████████████████████| 7.5 MB 37.5 MB/s
[?25h Downloading google_api_python_client-2.20.0-py2.py3-none-any.whl (7.4 MB)
[K |████████████████████████████████| 7.4 MB 29.5 MB/s
[?25h Downloading google_api_python_client-2.19.1-py2.py3-none-any.whl (7.4 MB)
[K |████████████████████████████████| 7.4 MB 6.7 MB/s
[?25h Downloading google_api_python_client-2.19.0-py2.py3-none-any.whl (7.4 MB)
[K |████████████████████████████████| 7.4 MB 31.1 MB/s
[?25h Downloading google_api_python_client-2.18.0-py2.py3-none-any.whl (7.4 MB)
[K |████████████████████████████████| 7.4 MB 30.6 MB/s
[?25h Downloading google_api_python_client-2.17.0-py2.py3-none-any.whl (7.3 MB)
[K |████████████████████████████████| 7.3 MB 21.9 MB/s
[?25h Downloading google_api_python_client-2.16.0-py2.py3-none-any.whl (7.3 MB)
[K |████████████████████████████████| 7.3 MB 27.5 MB/s
[?25h Downloading google_api_python_client-2.15.0-py2.py3-none-any.whl (7.2 MB)
[K |████████████████████████████████| 7.2 MB 19.3 MB/s
[?25h Downloading google_api_python_client-2.14.1-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 25.4 MB/s
[?25h Downloading google_api_python_client-2.14.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 22.6 MB/s
[?25h Downloading google_api_python_client-2.13.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 16.9 MB/s
[?25h Downloading google_api_python_client-2.12.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 22.3 MB/s
[?25h Downloading google_api_python_client-2.11.0-py2.py3-none-any.whl (7.0 MB)
[K |████████████████████████████████| 7.0 MB 5.1 MB/s
[?25h Downloading google_api_python_client-2.10.0-py2.py3-none-any.whl (7.0 MB)
[K |████████████████████████████████| 7.0 MB 26.8 MB/s
[?25h Downloading google_api_python_client-2.9.0-py2.py3-none-any.whl (7.0 MB)
[K |████████████████████████████████| 7.0 MB 19.7 MB/s
[?25h Downloading google_api_python_client-2.8.0-py2.py3-none-any.whl (7.0 MB)
[K |████████████████████████████████| 7.0 MB 24.8 MB/s
[?25h Downloading google_api_python_client-2.7.0-py2.py3-none-any.whl (7.3 MB)
[K |████████████████████████████████| 7.3 MB 19.0 MB/s
[?25h Downloading google_api_python_client-2.6.0-py2.py3-none-any.whl (7.2 MB)
[K |████████████████████████████████| 7.2 MB 34.9 MB/s
[?25h Downloading google_api_python_client-2.5.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 17.7 MB/s
[?25h Downloading google_api_python_client-2.4.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 31.2 MB/s
[?25h Downloading google_api_python_client-2.3.0-py2.py3-none-any.whl (7.1 MB)
[K |████████████████████████████████| 7.1 MB 37.4 MB/s
[?25h Downloading google_api_python_client-2.2.0-py2.py3-none-any.whl (7.0 MB)
[K |████████████████████████████████| 7.0 MB 6.2 MB/s
[?25h Downloading google_api_python_client-2.1.0-py2.py3-none-any.whl (6.6 MB)
[K |████████████████████████████████| 6.6 MB 14.4 MB/s
[?25h Downloading google_api_python_client-2.0.2-py2.py3-none-any.whl (6.5 MB)
[K |████████████████████████████████| 6.5 MB 37.7 MB/s
[?25h Downloading google_api_python_client-1.12.10-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 119 kB/s
[?25h Downloading google_api_python_client-1.12.8-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 785 bytes/s
[?25h Downloading google_api_python_client-1.12.7-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 29 kB/s
[?25h Downloading google_api_python_client-1.12.6-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 27 kB/s
[?25h Downloading google_api_python_client-1.12.5-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 7.5 MB/s
[?25h Downloading google_api_python_client-1.12.4-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 7.1 MB/s
[?25h Downloading google_api_python_client-1.12.3-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 5.8 MB/s
[?25h Downloading google_api_python_client-1.12.2-py2.py3-none-any.whl (61 kB)
[K |████████████████████████████████| 61 kB 8.4 MB/s
[?25hRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official~=2.2.0->tapas-table-parsing) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official~=2.2.0->tapas-table-parsing) (3.0.1)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.2.1->tf-models-official~=2.2.0->tapas-table-parsing) (0.1.6)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->tf-models-official~=2.2.0->tapas-table-parsing) (1.4.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->tf-models-official~=2.2.0->tapas-table-parsing) (0.11.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.7/dist-packages (from python-slugify->kaggle<1.5.8->tapas-table-parsing) (1.3)
Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons->tf-models-official~=2.2.0->tapas-table-parsing) (2.7.1)
Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (5.4.0)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (1.7.0)
Requirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (2.3)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (21.4.0)
Building wheels for collected packages: frozendict, avro-python3, dill, google-apitools, grpc-google-iam-v1, httplib2, kaggle, oauth2client, py-cpuinfo
Building wheel for frozendict (setup.py) ... [?25l[?25hdone
Created wheel for frozendict: filename=frozendict-1.2-py3-none-any.whl size=3166 sha256=bcbf7ecdf36cf16604986862f798d3cfd039a27a02d608e86236c97dac08c3ae
Stored in directory: /root/.cache/pip/wheels/68/17/69/ac196dd181e620bba5fae5488e4fd6366a7316dce13cf88776
Building wheel for avro-python3 (setup.py) ... [?25l[?25hdone
Created wheel for avro-python3: filename=avro_python3-1.9.2.1-py3-none-any.whl size=43513 sha256=8d5079abbdcb60a53a8929f07491be91b469e9ca1e0b266eb0f868e065147dae
Stored in directory: /root/.cache/pip/wheels/bc/49/5f/fdb5b9d85055c478213e0158ac122b596816149a02d82e0ab1
Building wheel for dill (setup.py) ... [?25l[?25hdone
Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=5847f08d96cd5f1809473da81ce0e7cea3badef87bbcc5e8f88f7136bb233a9c
Stored in directory: /root/.cache/pip/wheels/a4/61/fd/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f
Building wheel for google-apitools (setup.py) ... [?25l[?25hdone
Created wheel for google-apitools: filename=google_apitools-0.5.28-py3-none-any.whl size=130109 sha256=3a5edaac514084485549d713c595af9a20d638189d5ab3dfa0107675ce6c2937
Stored in directory: /root/.cache/pip/wheels/34/3b/69/ecd8e6ae89d9d71102a58962c29faa7a9467ba45f99f205920
Building wheel for grpc-google-iam-v1 (setup.py) ... [?25l[?25hdone
Created wheel for grpc-google-iam-v1: filename=grpc_google_iam_v1-0.12.3-py3-none-any.whl size=18515 sha256=c6a155cf0d184085c4d718e1e3c19356ba32b9f7c29d3884f6de71f8d14a6387
Stored in directory: /root/.cache/pip/wheels/b9/ee/67/2e444183030cb8d31ce8b34cee34a7afdbd3ba5959ea846380
Building wheel for httplib2 (setup.py) ... [?25l[?25hdone
Created wheel for httplib2: filename=httplib2-0.12.0-py3-none-any.whl size=93465 sha256=e1e718e4ceca2290ca872bd2434defee84953402a8baad2e2b183a115bb6b901
Stored in directory: /root/.cache/pip/wheels/0d/e7/b6/0dd30343ceca921cfbd91f355041bd9c69e0f40b49f25b7b8a
Building wheel for kaggle (setup.py) ... done
Created wheel for kaggle: filename=kaggle-1.5.6-py3-none-any.whl size=72858 sha256=f911a59bdadc590e7f089c41c23f24c49e3d2d586bbefe73925d026b7989d7fc
Stored in directory: /root/.cache/pip/wheels/aa/e7/e7/eb3c3d514c33294d77ddd5a856bdd58dc9c1fabbed59a02a2b
Building wheel for oauth2client (setup.py) ... done
Created wheel for oauth2client: filename=oauth2client-3.0.0-py3-none-any.whl size=106375 sha256=a0b226e54e128315e6205fc9380270bca443fc3e1bac6e135e51a4cd24bb3622
Stored in directory: /root/.cache/pip/wheels/86/73/7a/3b3f76a2142176605ff38fbca574327962c71e25a43197a4c1
Building wheel for py-cpuinfo (setup.py) ... done
Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=113156ecdb59f6181b20ec42ed7d3373f3b3be32ec31144ee5cb7efd4caca5f3
Stored in directory: /root/.cache/pip/wheels/d2/f1/1f/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401
Successfully built frozendict avro-python3 dill google-apitools grpc-google-iam-v1 httplib2 kaggle oauth2client py-cpuinfo
Installing collected packages: typing-extensions, cachetools, pbr, numpy, httplib2, grpcio-gcp, tensorflow-estimator, tensorboard, pymongo, pyarrow, oauth2client, mock, hdfs, h5py, grpc-google-iam-v1, gast, fasteners, fastavro, dill, avro-python3, typing, tensorflow-model-optimization, tensorflow-addons, tensorflow, sentencepiece, regex, py-cpuinfo, pandas, opencv-python-headless, mlperf-compliance, kaggle, google-cloud-vision, google-cloud-videointelligence, google-cloud-spanner, google-cloud-pubsub, google-cloud-language, google-cloud-dlp, google-cloud-datastore, google-cloud-bigtable, google-apitools, google-api-python-client, dataclasses, apache-beam, tf-slim, tf-models-official, tensorflow-probability, scikit-learn, nltk, frozendict, tapas-table-parsing
Attempting uninstall: typing-extensions
Found existing installation: typing-extensions 3.10.0.2
Uninstalling typing-extensions-3.10.0.2:
Successfully uninstalled typing-extensions-3.10.0.2
Attempting uninstall: cachetools
Found existing installation: cachetools 4.2.4
Uninstalling cachetools-4.2.4:
Successfully uninstalled cachetools-4.2.4
Attempting uninstall: numpy
Found existing installation: numpy 1.21.5
Uninstalling numpy-1.21.5:
Successfully uninstalled numpy-1.21.5
Attempting uninstall: httplib2
Found existing installation: httplib2 0.17.4
Uninstalling httplib2-0.17.4:
Successfully uninstalled httplib2-0.17.4
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.8.0
Uninstalling tensorflow-estimator-2.8.0:
Successfully uninstalled tensorflow-estimator-2.8.0
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.8.0
Uninstalling tensorboard-2.8.0:
Successfully uninstalled tensorboard-2.8.0
Attempting uninstall: pymongo
Found existing installation: pymongo 4.0.2
Uninstalling pymongo-4.0.2:
Successfully uninstalled pymongo-4.0.2
Attempting uninstall: pyarrow
Found existing installation: pyarrow 6.0.1
Uninstalling pyarrow-6.0.1:
Successfully uninstalled pyarrow-6.0.1
Attempting uninstall: oauth2client
Found existing installation: oauth2client 4.1.3
Uninstalling oauth2client-4.1.3:
Successfully uninstalled oauth2client-4.1.3
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
Attempting uninstall: gast
Found existing installation: gast 0.5.3
Uninstalling gast-0.5.3:
Successfully uninstalled gast-0.5.3
Attempting uninstall: dill
Found existing installation: dill 0.3.4
Uninstalling dill-0.3.4:
Successfully uninstalled dill-0.3.4
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.8.0
Uninstalling tensorflow-2.8.0:
Successfully uninstalled tensorflow-2.8.0
Attempting uninstall: regex
Found existing installation: regex 2019.12.20
Uninstalling regex-2019.12.20:
Successfully uninstalled regex-2019.12.20
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
Attempting uninstall: kaggle
Found existing installation: kaggle 1.5.12
Uninstalling kaggle-1.5.12:
Successfully uninstalled kaggle-1.5.12
Attempting uninstall: google-cloud-language
Found existing installation: google-cloud-language 1.2.0
Uninstalling google-cloud-language-1.2.0:
Successfully uninstalled google-cloud-language-1.2.0
Attempting uninstall: google-cloud-datastore
Found existing installation: google-cloud-datastore 1.8.0
Uninstalling google-cloud-datastore-1.8.0:
Successfully uninstalled google-cloud-datastore-1.8.0
Attempting uninstall: google-api-python-client
Found existing installation: google-api-python-client 1.12.11
Uninstalling google-api-python-client-1.12.11:
Successfully uninstalled google-api-python-client-1.12.11
Attempting uninstall: tensorflow-probability
Found existing installation: tensorflow-probability 0.16.0
Uninstalling tensorflow-probability-0.16.0:
Successfully uninstalled tensorflow-probability-0.16.0
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: nltk
Found existing installation: nltk 3.2.5
Uninstalling nltk-3.2.5:
Successfully uninstalled nltk-3.2.5
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
yellowbrick 1.4 requires scikit-learn>=1.0.0, but you have scikit-learn 0.22.2.post1 which is incompatible.
tables 3.7.0 requires numpy>=1.19.0, but you have numpy 1.18.5 which is incompatible.
pymc3 3.11.4 requires cachetools>=4.2.1, but you have cachetools 3.1.1 which is incompatible.
pydrive 1.3.1 requires oauth2client>=4.0.0, but you have oauth2client 3.0.0 which is incompatible.
multiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.1.1 which is incompatible.
jaxlib 0.3.2+cuda11.cudnn805 requires numpy>=1.19, but you have numpy 1.18.5 which is incompatible.
jax 0.3.4 requires numpy>=1.19, but you have numpy 1.18.5 which is incompatible.
imbalanced-learn 0.8.1 requires scikit-learn>=0.24, but you have scikit-learn 0.22.2.post1 which is incompatible.
google-colab 1.0.0 requires pandas>=1.1.0; python_version >= "3.0", but you have pandas 1.0.5 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
Successfully installed apache-beam-2.20.0 avro-python3-1.9.2.1 cachetools-3.1.1 dataclasses-0.6 dill-0.3.1.1 fastavro-0.21.24 fasteners-0.17.3 frozendict-1.2 gast-0.3.3 google-api-python-client-1.12.2 google-apitools-0.5.28 google-cloud-bigtable-1.0.0 google-cloud-datastore-1.7.4 google-cloud-dlp-0.13.0 google-cloud-language-1.3.0 google-cloud-pubsub-1.0.2 google-cloud-spanner-1.13.0 google-cloud-videointelligence-1.13.0 google-cloud-vision-0.42.0 grpc-google-iam-v1-0.12.3 grpcio-gcp-0.2.2 h5py-2.10.0 hdfs-2.7.0 httplib2-0.12.0 kaggle-1.5.6 mlperf-compliance-0.0.10 mock-2.0.0 nltk-3.7 numpy-1.18.5 oauth2client-3.0.0 opencv-python-headless-4.5.5.64 pandas-1.0.5 pbr-5.8.1 py-cpuinfo-8.0.0 pyarrow-0.16.0 pymongo-3.12.3 regex-2022.3.15 scikit-learn-0.22.2.post1 sentencepiece-0.1.96 tapas-table-parsing-0.0.1.dev0 tensorboard-2.2.2 tensorflow-2.2.3 tensorflow-addons-0.16.1 tensorflow-estimator-2.2.0 tensorflow-model-optimization-0.7.2 tensorflow-probability-0.10.1 tf-models-official-2.2.2 tf-slim-1.1.0 typing-3.7.4.1 typing-extensions-3.7.4.3
</code>
# Fetch models from Google Storage_____no_output_____Next we can get a pretrained checkpoint from Google Storage. For the sake of speed, this is a base-sized model trained on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). Note that the best results in the paper were obtained with a large model, with 24 layers instead of 12._____no_output_____
<code>
! gsutil cp gs://tapas_models/2020_04_21/tapas_sqa_base.zip . && unzip tapas_sqa_base.zipCopying gs://tapas_models/2020_04_21/tapas_sqa_base.zip...
| [1 files][ 1.0 GiB/ 1.0 GiB] 51.4 MiB/s
Operation completed over 1 objects/1.0 GiB.
Archive: tapas_sqa_base.zip
replace tapas_sqa_base/model.ckpt.data-00000-of-00001? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
  inflating: tapas_sqa_base/model.ckpt.data-00000-of-00001  
inflating: tapas_sqa_base/model.ckpt.index
inflating: tapas_sqa_base/README.txt
inflating: tapas_sqa_base/vocab.txt
inflating: tapas_sqa_base/bert_config.json
inflating: tapas_sqa_base/model.ckpt.meta
</code>
# Imports_____no_output_____
<code>
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')_____no_output_____from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
from tapas.scripts import prediction_utils_____no_output_____
</code>
# Load checkpoint for prediction_____no_output_____Here's the prediction code: it creates an `interaction_pb2.Interaction` protobuf object, the data structure we use to store examples, and then calls the prediction script._____no_output_____
<code>
os.makedirs('results/sqa/tf_examples', exist_ok=True)
os.makedirs('results/sqa/model', exist_ok=True)
with open('results/sqa/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_sqa_base/model.ckpt{suffix}', f'results/sqa/model/model.ckpt-0{suffix}')_____no_output_____max_seq_length = 512
vocab_file = "tapas_sqa_base/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
"""Calls Tapas converter to convert interaction to example."""
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/sqa/tf_examples/test.tfrecord", examples)
write_tf_example("results/sqa/tf_examples/random-split-1-dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="SQA" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--init_checkpoint="tapas_sqa_base/model.ckpt" \
--bert_config_file="tapas_sqa_base/bert_config.json" \
--mode="predict" 2> error
results_path = "results/sqa/model/test_sequence.tsv"
all_coordinates = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
coordinates = prediction_utils.parse_coordinates(row["answer_coordinates"])
all_coordinates.append(coordinates)
answers = ', '.join([table[row + 1][col] for row, col in coordinates])
position = int(row['position'])
print(">", queries[position])
print(answers)
return all_coordinates_____no_output_____
</code>
# Predict_____no_output_____
<code>
# Example nu-1000-0
result = predict("""
Doctor_ID|Doctor_Name|Department|opd_day|Morning_time|Evening_time
1|ABCD|Nephrology|Monday|9|5
2|ABC|Opthomology|Tuesday|9|6
3|DEF|Nephrology|Wednesday|9|6
4|GHI|Gynaecology|Thursday|9|6
5|JKL|Orthopeadics|Friday|9|6
6|MNO|Cardiology|Saturday|9|6
7|PQR|Dentistry|Sunday|9|5
8|STU|Epidemology|Monday|9|6
9|WVX|ENT|Tuesday|9|5
10|GILOY|Genetics|Wednesday|9|6
11|Rajeev|Neurology|Wednesday|10|4:30
12|Makan|Immunology|Tuesday|9|4:30
13|Arora|Paediatrics|Sunday|11|4:30
14|Piyush|Radiology|Monday|11:20|2
15|Roha|Gynaecology|Wednesday|9:20|2
16|Bohra|Dentistry|Thursday|11|2
17|Rajeev Khan|Virology|Tuesday|10|2
18|Arnab|Pharmocology|Sunday|10|2
19|Muskan|ENT|Friday|10|2
20|pamela|Epidemology|Monday|10|2
21|Rohit|Radiology|Tuesday|10|2
22|Aniket|Cardiology|Saturday|10|2
23|Darbar|Genetics|Saturday|10|2
24|Suyash|Neurology|Friday|10|2
25|Abhishek|Immunology|Wednesday|10|2
26|Yogesh|Immunology|Saturday|10|2
27|Kunal|Paediatrics|Monday|10|2
28|Vimal|Pharmocology|Friday|10|2
29|Kalyan|Virology|Tuesday|10|2
30|DSS|Nephrology|Thursday|10|2
""", ["How many doctors are there in Immunology department?", "of these, which doctor is available on Saturday?"])is_built_with_cuda: True
is_gpu_available: False
GPUs: []
Training or predicting ...
Evaluation finished after training step 0.
_____no_output_____
</code>
| {
"repository": "aniket371/tapas",
"path": "notebooks/sqa_predictions.ipynb",
"matched_keywords": [
"virology",
"immunology"
],
"stars": null,
"size": 71892,
"hexsha": "d04ca4208005b1f4d4dd16adc0edd8d0ea9f973d",
"max_line_length": 1552,
"avg_line_length": 62.8976377953,
"alphanum_fraction": 0.5142018583
} |
# Notebook from patrickphatnguyen/deepchem
Path: examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
# Tutorial Part 6: Going Deeper On Molecular Featurizations
One of the most important steps of doing machine learning on molecular data is transforming this data into a form amenable to the application of learning algorithms. This process is broadly called "featurization" and involves turning a molecule into a vector or tensor of some sort. There are a number of different ways of doing such transformations, and the choice of featurization is often dependent on the problem at hand.
In this tutorial, we explore the different featurization methods available for molecules. These featurization methods include:
1. `ConvMolFeaturizer`,
2. `WeaveFeaturizer`,
3. `CircularFingerprints`
4. `RDKitDescriptors`
5. `BPSymmetryFunction`
6. `CoulombMatrix`
7. `CoulombMatrixEig`
8. `AdjacencyFingerprints`
## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment._____no_output_____
<code>
!wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
!chmod +x Anaconda3-2019.10-Linux-x86_64.sh
!bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')--2020-03-07 01:06:34-- https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.130.3, 104.16.131.3, 2606:4700::6810:8303, ...
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 530308481 (506M) [application/x-sh]
Saving to: ‘Anaconda3-2019.10-Linux-x86_64.sh’
Anaconda3-2019.10-L 100%[===================>] 505.74M 105MB/s in 5.1s
2020-03-07 01:06:39 (99.5 MB/s) - ‘Anaconda3-2019.10-Linux-x86_64.sh’ saved [530308481/530308481]
PREFIX=/usr/local
Unpacking payload ...
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /usr/local
added / updated specs:
- _ipyw_jlab_nb_ext_conf==0.1.0=py37_0
- _libgcc_mutex==0.1=main
- alabaster==0.7.12=py37_0
- anaconda-client==1.7.2=py37_0
- anaconda-navigator==1.9.7=py37_0
- anaconda-project==0.8.3=py_0
- anaconda==2019.10=py37_0
- asn1crypto==1.0.1=py37_0
- astroid==2.3.1=py37_0
- astropy==3.2.2=py37h7b6447c_0
- atomicwrites==1.3.0=py37_1
- attrs==19.2.0=py_0
- babel==2.7.0=py_0
- backcall==0.1.0=py37_0
- backports.functools_lru_cache==1.5=py_2
- backports.os==0.1.1=py37_0
- backports.shutil_get_terminal_size==1.0.0=py37_2
- backports.tempfile==1.0=py_1
- backports.weakref==1.0.post1=py_1
- backports==1.0=py_2
- beautifulsoup4==4.8.0=py37_0
- bitarray==1.0.1=py37h7b6447c_0
- bkcharts==0.2=py37_0
- blas==1.0=mkl
- bleach==3.1.0=py37_0
- blosc==1.16.3=hd408876_0
- bokeh==1.3.4=py37_0
- boto==2.49.0=py37_0
- bottleneck==1.2.1=py37h035aef0_1
- bzip2==1.0.8=h7b6447c_0
- ca-certificates==2019.8.28=0
- cairo==1.14.12=h8948797_3
- certifi==2019.9.11=py37_0
- cffi==1.12.3=py37h2e261b9_0
- chardet==3.0.4=py37_1003
- click==7.0=py37_0
- cloudpickle==1.2.2=py_0
- clyent==1.2.2=py37_1
- colorama==0.4.1=py37_0
- conda-build==3.18.9=py37_3
- conda-env==2.6.0=1
- conda-package-handling==1.6.0=py37h7b6447c_0
- conda-verify==3.4.2=py_1
- conda==4.7.12=py37_0
- contextlib2==0.6.0=py_0
- cryptography==2.7=py37h1ba5d50_0
- curl==7.65.3=hbc83047_0
- cycler==0.10.0=py37_0
- cython==0.29.13=py37he6710b0_0
- cytoolz==0.10.0=py37h7b6447c_0
- dask-core==2.5.2=py_0
- dask==2.5.2=py_0
- dbus==1.13.6=h746ee38_0
- decorator==4.4.0=py37_1
- defusedxml==0.6.0=py_0
- distributed==2.5.2=py_0
- docutils==0.15.2=py37_0
- entrypoints==0.3=py37_0
- et_xmlfile==1.0.1=py37_0
- expat==2.2.6=he6710b0_0
- fastcache==1.1.0=py37h7b6447c_0
- filelock==3.0.12=py_0
- flask==1.1.1=py_0
- fontconfig==2.13.0=h9420a91_0
- freetype==2.9.1=h8a8886c_1
- fribidi==1.0.5=h7b6447c_0
- fsspec==0.5.2=py_0
- future==0.17.1=py37_0
- get_terminal_size==1.0.0=haa9412d_0
- gevent==1.4.0=py37h7b6447c_0
- glib==2.56.2=hd408876_0
- glob2==0.7=py_0
- gmp==6.1.2=h6c8ec71_1
- gmpy2==2.0.8=py37h10f8cd9_2
- graphite2==1.3.13=h23475e2_0
- greenlet==0.4.15=py37h7b6447c_0
- gst-plugins-base==1.14.0=hbbd80ab_1
- gstreamer==1.14.0=hb453b48_1
- h5py==2.9.0=py37h7918eee_0
- harfbuzz==1.8.8=hffaf4a1_0
- hdf5==1.10.4=hb1b8bf9_0
- heapdict==1.0.1=py_0
- html5lib==1.0.1=py37_0
- icu==58.2=h9c2bf20_1
- idna==2.8=py37_0
- imageio==2.6.0=py37_0
- imagesize==1.1.0=py37_0
- importlib_metadata==0.23=py37_0
- intel-openmp==2019.4=243
- ipykernel==5.1.2=py37h39e3cac_0
- ipython==7.8.0=py37h39e3cac_0
- ipython_genutils==0.2.0=py37_0
- ipywidgets==7.5.1=py_0
- isort==4.3.21=py37_0
- itsdangerous==1.1.0=py37_0
- jbig==2.1=hdba287a_0
- jdcal==1.4.1=py_0
- jedi==0.15.1=py37_0
- jeepney==0.4.1=py_0
- jinja2==2.10.3=py_0
- joblib==0.13.2=py37_0
- jpeg==9b=h024ee3a_2
- json5==0.8.5=py_0
- jsonschema==3.0.2=py37_0
- jupyter==1.0.0=py37_7
- jupyter_client==5.3.3=py37_1
- jupyter_console==6.0.0=py37_0
- jupyter_core==4.5.0=py_0
- jupyterlab==1.1.4=pyhf63ae98_0
- jupyterlab_server==1.0.6=py_0
- keyring==18.0.0=py37_0
- kiwisolver==1.1.0=py37he6710b0_0
- krb5==1.16.1=h173b8e3_7
- lazy-object-proxy==1.4.2=py37h7b6447c_0
- libarchive==3.3.3=h5d8350f_5
- libcurl==7.65.3=h20c2e04_0
- libedit==3.1.20181209=hc058e9b_0
- libffi==3.2.1=hd88cf55_4
- libgcc-ng==9.1.0=hdf63c60_0
- libgfortran-ng==7.3.0=hdf63c60_0
- liblief==0.9.0=h7725739_2
- libpng==1.6.37=hbc83047_0
- libsodium==1.0.16=h1bed415_0
- libssh2==1.8.2=h1ba5d50_0
- libstdcxx-ng==9.1.0=hdf63c60_0
- libtiff==4.0.10=h2733197_2
- libtool==2.4.6=h7b6447c_5
- libuuid==1.0.3=h1bed415_2
- libxcb==1.13=h1bed415_1
- libxml2==2.9.9=hea5a465_1
- libxslt==1.1.33=h7d1a2b0_0
- llvmlite==0.29.0=py37hd408876_0
- locket==0.2.0=py37_1
- lxml==4.4.1=py37hefd8a0e_0
- lz4-c==1.8.1.2=h14c3975_0
- lzo==2.10=h49e0be7_2
- markupsafe==1.1.1=py37h7b6447c_0
- matplotlib==3.1.1=py37h5429711_0
- mccabe==0.6.1=py37_1
- mistune==0.8.4=py37h7b6447c_0
- mkl-service==2.3.0=py37he904b0f_0
- mkl==2019.4=243
- mkl_fft==1.0.14=py37ha843d7b_0
- mkl_random==1.1.0=py37hd6b4f25_0
- mock==3.0.5=py37_0
- more-itertools==7.2.0=py37_0
- mpc==1.1.0=h10f8cd9_1
- mpfr==4.0.1=hdf1c602_3
- mpmath==1.1.0=py37_0
- msgpack-python==0.6.1=py37hfd86e86_1
- multipledispatch==0.6.0=py37_0
- navigator-updater==0.2.1=py37_0
- nbconvert==5.6.0=py37_1
- nbformat==4.4.0=py37_0
- ncurses==6.1=he6710b0_1
- networkx==2.3=py_0
- nltk==3.4.5=py37_0
- nose==1.3.7=py37_2
- notebook==6.0.1=py37_0
- numba==0.45.1=py37h962f231_0
- numexpr==2.7.0=py37h9e4a6bb_0
- numpy-base==1.17.2=py37hde5b4d6_0
- numpy==1.17.2=py37haad9e8e_0
- numpydoc==0.9.1=py_0
- olefile==0.46=py37_0
- openpyxl==3.0.0=py_0
- openssl==1.1.1d=h7b6447c_2
- packaging==19.2=py_0
- pandas==0.25.1=py37he6710b0_0
- pandoc==2.2.3.2=0
- pandocfilters==1.4.2=py37_1
- pango==1.42.4=h049681c_0
- parso==0.5.1=py_0
- partd==1.0.0=py_0
- patchelf==0.9=he6710b0_3
- path.py==12.0.1=py_0
- pathlib2==2.3.5=py37_0
- patsy==0.5.1=py37_0
- pcre==8.43=he6710b0_0
- pep8==1.7.1=py37_0
- pexpect==4.7.0=py37_0
- pickleshare==0.7.5=py37_0
- pillow==6.2.0=py37h34e0f95_0
- pip==19.2.3=py37_0
- pixman==0.38.0=h7b6447c_0
- pkginfo==1.5.0.1=py37_0
- pluggy==0.13.0=py37_0
- ply==3.11=py37_0
- prometheus_client==0.7.1=py_0
- prompt_toolkit==2.0.10=py_0
- psutil==5.6.3=py37h7b6447c_0
- ptyprocess==0.6.0=py37_0
- py-lief==0.9.0=py37h7725739_2
- py==1.8.0=py37_0
- pycodestyle==2.5.0=py37_0
- pycosat==0.6.3=py37h14c3975_0
- pycparser==2.19=py37_0
- pycrypto==2.6.1=py37h14c3975_9
- pycurl==7.43.0.3=py37h1ba5d50_0
- pyflakes==2.1.1=py37_0
- pygments==2.4.2=py_0
- pylint==2.4.2=py37_0
- pyodbc==4.0.27=py37he6710b0_0
- pyopenssl==19.0.0=py37_0
- pyparsing==2.4.2=py_0
- pyqt==5.9.2=py37h05f1152_2
- pyrsistent==0.15.4=py37h7b6447c_0
- pysocks==1.7.1=py37_0
- pytables==3.5.2=py37h71ec239_1
- pytest-arraydiff==0.3=py37h39e3cac_0
- pytest-astropy==0.5.0=py37_0
- pytest-doctestplus==0.4.0=py_0
- pytest-openfiles==0.4.0=py_0
- pytest-remotedata==0.3.2=py37_0
- pytest==5.2.1=py37_0
- python-dateutil==2.8.0=py37_0
- python-libarchive-c==2.8=py37_13
- python==3.7.4=h265db76_1
- pytz==2019.3=py_0
- pywavelets==1.0.3=py37hdd07704_1
- pyyaml==5.1.2=py37h7b6447c_0
- pyzmq==18.1.0=py37he6710b0_0
- qt==5.9.7=h5867ecd_1
- qtawesome==0.6.0=py_0
- qtconsole==4.5.5=py_0
- qtpy==1.9.0=py_0
- readline==7.0=h7b6447c_5
- requests==2.22.0=py37_0
- ripgrep==0.10.0=hc07d326_0
- rope==0.14.0=py_0
- ruamel_yaml==0.15.46=py37h14c3975_0
- scikit-image==0.15.0=py37he6710b0_0
- scikit-learn==0.21.3=py37hd81dba3_0
- scipy==1.3.1=py37h7c811a0_0
- seaborn==0.9.0=py37_0
- secretstorage==3.1.1=py37_0
- send2trash==1.5.0=py37_0
- setuptools==41.4.0=py37_0
- simplegeneric==0.8.1=py37_2
- singledispatch==3.4.0.3=py37_0
- sip==4.19.8=py37hf484d3e_0
- six==1.12.0=py37_0
- snappy==1.1.7=hbae5bb6_3
- snowballstemmer==2.0.0=py_0
- sortedcollections==1.1.2=py37_0
- sortedcontainers==2.1.0=py37_0
- soupsieve==1.9.3=py37_0
- sphinx==2.2.0=py_0
- sphinxcontrib-applehelp==1.0.1=py_0
- sphinxcontrib-devhelp==1.0.1=py_0
- sphinxcontrib-htmlhelp==1.0.2=py_0
- sphinxcontrib-jsmath==1.0.1=py_0
- sphinxcontrib-qthelp==1.0.2=py_0
- sphinxcontrib-serializinghtml==1.1.3=py_0
- sphinxcontrib-websupport==1.1.2=py_0
- sphinxcontrib==1.0=py37_1
- spyder-kernels==0.5.2=py37_0
- spyder==3.3.6=py37_0
- sqlalchemy==1.3.9=py37h7b6447c_0
- sqlite==3.30.0=h7b6447c_0
- statsmodels==0.10.1=py37hdd07704_0
- sympy==1.4=py37_0
- tbb==2019.4=hfd86e86_0
- tblib==1.4.0=py_0
- terminado==0.8.2=py37_0
- testpath==0.4.2=py37_0
- tk==8.6.8=hbc83047_0
- toolz==0.10.0=py_0
- tornado==6.0.3=py37h7b6447c_0
- tqdm==4.36.1=py_0
- traitlets==4.3.3=py37_0
- unicodecsv==0.14.1=py37_0
- unixodbc==2.3.7=h14c3975_0
- urllib3==1.24.2=py37_0
- wcwidth==0.1.7=py37_0
- webencodings==0.5.1=py37_1
- werkzeug==0.16.0=py_0
- wheel==0.33.6=py37_0
- widgetsnbextension==3.5.1=py37_0
- wrapt==1.11.2=py37h7b6447c_0
- wurlitzer==1.0.3=py37_0
- xlrd==1.2.0=py37_0
- xlsxwriter==1.2.1=py_0
- xlwt==1.3.0=py37_0
- xz==5.2.4=h14c3975_4
- yaml==0.1.7=had09818_2
- zeromq==4.3.1=he6710b0_3
- zict==1.0.0=py_0
- zipp==0.6.0=py_0
- zlib==1.2.11=h7b6447c_3
- zstd==1.3.7=h0b5b093_0
The following NEW packages will be INSTALLED:
_ipyw_jlab_nb_ext~ pkgs/main/linux-64::_ipyw_jlab_nb_ext_conf-0.1.0-py37_0
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
alabaster pkgs/main/linux-64::alabaster-0.7.12-py37_0
anaconda pkgs/main/linux-64::anaconda-2019.10-py37_0
anaconda-client pkgs/main/linux-64::anaconda-client-1.7.2-py37_0
anaconda-navigator pkgs/main/linux-64::anaconda-navigator-1.9.7-py37_0
anaconda-project pkgs/main/noarch::anaconda-project-0.8.3-py_0
asn1crypto pkgs/main/linux-64::asn1crypto-1.0.1-py37_0
astroid pkgs/main/linux-64::astroid-2.3.1-py37_0
astropy pkgs/main/linux-64::astropy-3.2.2-py37h7b6447c_0
atomicwrites pkgs/main/linux-64::atomicwrites-1.3.0-py37_1
attrs pkgs/main/noarch::attrs-19.2.0-py_0
babel pkgs/main/noarch::babel-2.7.0-py_0
backcall pkgs/main/linux-64::backcall-0.1.0-py37_0
backports pkgs/main/noarch::backports-1.0-py_2
backports.functoo~ pkgs/main/noarch::backports.functools_lru_cache-1.5-py_2
backports.os pkgs/main/linux-64::backports.os-0.1.1-py37_0
backports.shutil_~ pkgs/main/linux-64::backports.shutil_get_terminal_size-1.0.0-py37_2
backports.tempfile pkgs/main/noarch::backports.tempfile-1.0-py_1
backports.weakref pkgs/main/noarch::backports.weakref-1.0.post1-py_1
beautifulsoup4 pkgs/main/linux-64::beautifulsoup4-4.8.0-py37_0
bitarray pkgs/main/linux-64::bitarray-1.0.1-py37h7b6447c_0
bkcharts pkgs/main/linux-64::bkcharts-0.2-py37_0
blas pkgs/main/linux-64::blas-1.0-mkl
bleach pkgs/main/linux-64::bleach-3.1.0-py37_0
blosc pkgs/main/linux-64::blosc-1.16.3-hd408876_0
bokeh pkgs/main/linux-64::bokeh-1.3.4-py37_0
boto pkgs/main/linux-64::boto-2.49.0-py37_0
bottleneck pkgs/main/linux-64::bottleneck-1.2.1-py37h035aef0_1
bzip2 pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0
ca-certificates pkgs/main/linux-64::ca-certificates-2019.8.28-0
cairo pkgs/main/linux-64::cairo-1.14.12-h8948797_3
certifi pkgs/main/linux-64::certifi-2019.9.11-py37_0
cffi pkgs/main/linux-64::cffi-1.12.3-py37h2e261b9_0
chardet pkgs/main/linux-64::chardet-3.0.4-py37_1003
click pkgs/main/linux-64::click-7.0-py37_0
cloudpickle pkgs/main/noarch::cloudpickle-1.2.2-py_0
clyent pkgs/main/linux-64::clyent-1.2.2-py37_1
colorama pkgs/main/linux-64::colorama-0.4.1-py37_0
conda pkgs/main/linux-64::conda-4.7.12-py37_0
conda-build pkgs/main/linux-64::conda-build-3.18.9-py37_3
conda-env pkgs/main/linux-64::conda-env-2.6.0-1
conda-package-han~ pkgs/main/linux-64::conda-package-handling-1.6.0-py37h7b6447c_0
conda-verify pkgs/main/noarch::conda-verify-3.4.2-py_1
contextlib2 pkgs/main/noarch::contextlib2-0.6.0-py_0
cryptography pkgs/main/linux-64::cryptography-2.7-py37h1ba5d50_0
curl pkgs/main/linux-64::curl-7.65.3-hbc83047_0
cycler pkgs/main/linux-64::cycler-0.10.0-py37_0
cython pkgs/main/linux-64::cython-0.29.13-py37he6710b0_0
cytoolz pkgs/main/linux-64::cytoolz-0.10.0-py37h7b6447c_0
dask pkgs/main/noarch::dask-2.5.2-py_0
dask-core pkgs/main/noarch::dask-core-2.5.2-py_0
dbus pkgs/main/linux-64::dbus-1.13.6-h746ee38_0
decorator pkgs/main/linux-64::decorator-4.4.0-py37_1
defusedxml pkgs/main/noarch::defusedxml-0.6.0-py_0
distributed pkgs/main/noarch::distributed-2.5.2-py_0
docutils pkgs/main/linux-64::docutils-0.15.2-py37_0
entrypoints pkgs/main/linux-64::entrypoints-0.3-py37_0
et_xmlfile pkgs/main/linux-64::et_xmlfile-1.0.1-py37_0
expat pkgs/main/linux-64::expat-2.2.6-he6710b0_0
fastcache pkgs/main/linux-64::fastcache-1.1.0-py37h7b6447c_0
filelock pkgs/main/noarch::filelock-3.0.12-py_0
flask pkgs/main/noarch::flask-1.1.1-py_0
fontconfig pkgs/main/linux-64::fontconfig-2.13.0-h9420a91_0
freetype pkgs/main/linux-64::freetype-2.9.1-h8a8886c_1
fribidi pkgs/main/linux-64::fribidi-1.0.5-h7b6447c_0
fsspec pkgs/main/noarch::fsspec-0.5.2-py_0
future pkgs/main/linux-64::future-0.17.1-py37_0
get_terminal_size pkgs/main/linux-64::get_terminal_size-1.0.0-haa9412d_0
gevent pkgs/main/linux-64::gevent-1.4.0-py37h7b6447c_0
glib pkgs/main/linux-64::glib-2.56.2-hd408876_0
glob2 pkgs/main/noarch::glob2-0.7-py_0
gmp pkgs/main/linux-64::gmp-6.1.2-h6c8ec71_1
gmpy2 pkgs/main/linux-64::gmpy2-2.0.8-py37h10f8cd9_2
graphite2 pkgs/main/linux-64::graphite2-1.3.13-h23475e2_0
greenlet pkgs/main/linux-64::greenlet-0.4.15-py37h7b6447c_0
gst-plugins-base pkgs/main/linux-64::gst-plugins-base-1.14.0-hbbd80ab_1
gstreamer pkgs/main/linux-64::gstreamer-1.14.0-hb453b48_1
h5py pkgs/main/linux-64::h5py-2.9.0-py37h7918eee_0
harfbuzz pkgs/main/linux-64::harfbuzz-1.8.8-hffaf4a1_0
hdf5 pkgs/main/linux-64::hdf5-1.10.4-hb1b8bf9_0
heapdict pkgs/main/noarch::heapdict-1.0.1-py_0
html5lib pkgs/main/linux-64::html5lib-1.0.1-py37_0
icu pkgs/main/linux-64::icu-58.2-h9c2bf20_1
idna pkgs/main/linux-64::idna-2.8-py37_0
imageio pkgs/main/linux-64::imageio-2.6.0-py37_0
imagesize pkgs/main/linux-64::imagesize-1.1.0-py37_0
importlib_metadata pkgs/main/linux-64::importlib_metadata-0.23-py37_0
intel-openmp pkgs/main/linux-64::intel-openmp-2019.4-243
ipykernel pkgs/main/linux-64::ipykernel-5.1.2-py37h39e3cac_0
ipython pkgs/main/linux-64::ipython-7.8.0-py37h39e3cac_0
ipython_genutils pkgs/main/linux-64::ipython_genutils-0.2.0-py37_0
ipywidgets pkgs/main/noarch::ipywidgets-7.5.1-py_0
isort pkgs/main/linux-64::isort-4.3.21-py37_0
itsdangerous pkgs/main/linux-64::itsdangerous-1.1.0-py37_0
jbig pkgs/main/linux-64::jbig-2.1-hdba287a_0
jdcal pkgs/main/noarch::jdcal-1.4.1-py_0
jedi pkgs/main/linux-64::jedi-0.15.1-py37_0
jeepney pkgs/main/noarch::jeepney-0.4.1-py_0
jinja2 pkgs/main/noarch::jinja2-2.10.3-py_0
joblib pkgs/main/linux-64::joblib-0.13.2-py37_0
jpeg pkgs/main/linux-64::jpeg-9b-h024ee3a_2
json5 pkgs/main/noarch::json5-0.8.5-py_0
jsonschema pkgs/main/linux-64::jsonschema-3.0.2-py37_0
jupyter pkgs/main/linux-64::jupyter-1.0.0-py37_7
jupyter_client pkgs/main/linux-64::jupyter_client-5.3.3-py37_1
jupyter_console pkgs/main/linux-64::jupyter_console-6.0.0-py37_0
jupyter_core pkgs/main/noarch::jupyter_core-4.5.0-py_0
jupyterlab pkgs/main/noarch::jupyterlab-1.1.4-pyhf63ae98_0
jupyterlab_server pkgs/main/noarch::jupyterlab_server-1.0.6-py_0
keyring pkgs/main/linux-64::keyring-18.0.0-py37_0
kiwisolver pkgs/main/linux-64::kiwisolver-1.1.0-py37he6710b0_0
krb5 pkgs/main/linux-64::krb5-1.16.1-h173b8e3_7
lazy-object-proxy pkgs/main/linux-64::lazy-object-proxy-1.4.2-py37h7b6447c_0
libarchive pkgs/main/linux-64::libarchive-3.3.3-h5d8350f_5
libcurl pkgs/main/linux-64::libcurl-7.65.3-h20c2e04_0
libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0
libffi pkgs/main/linux-64::libffi-3.2.1-hd88cf55_4
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.3.0-hdf63c60_0
liblief pkgs/main/linux-64::liblief-0.9.0-h7725739_2
libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0
libsodium pkgs/main/linux-64::libsodium-1.0.16-h1bed415_0
libssh2 pkgs/main/linux-64::libssh2-1.8.2-h1ba5d50_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
libtiff pkgs/main/linux-64::libtiff-4.0.10-h2733197_2
libtool pkgs/main/linux-64::libtool-2.4.6-h7b6447c_5
libuuid pkgs/main/linux-64::libuuid-1.0.3-h1bed415_2
libxcb pkgs/main/linux-64::libxcb-1.13-h1bed415_1
libxml2 pkgs/main/linux-64::libxml2-2.9.9-hea5a465_1
libxslt pkgs/main/linux-64::libxslt-1.1.33-h7d1a2b0_0
llvmlite pkgs/main/linux-64::llvmlite-0.29.0-py37hd408876_0
locket pkgs/main/linux-64::locket-0.2.0-py37_1
lxml pkgs/main/linux-64::lxml-4.4.1-py37hefd8a0e_0
lz4-c pkgs/main/linux-64::lz4-c-1.8.1.2-h14c3975_0
lzo pkgs/main/linux-64::lzo-2.10-h49e0be7_2
markupsafe pkgs/main/linux-64::markupsafe-1.1.1-py37h7b6447c_0
matplotlib pkgs/main/linux-64::matplotlib-3.1.1-py37h5429711_0
mccabe pkgs/main/linux-64::mccabe-0.6.1-py37_1
mistune pkgs/main/linux-64::mistune-0.8.4-py37h7b6447c_0
mkl pkgs/main/linux-64::mkl-2019.4-243
mkl-service pkgs/main/linux-64::mkl-service-2.3.0-py37he904b0f_0
mkl_fft pkgs/main/linux-64::mkl_fft-1.0.14-py37ha843d7b_0
mkl_random pkgs/main/linux-64::mkl_random-1.1.0-py37hd6b4f25_0
mock pkgs/main/linux-64::mock-3.0.5-py37_0
more-itertools pkgs/main/linux-64::more-itertools-7.2.0-py37_0
mpc pkgs/main/linux-64::mpc-1.1.0-h10f8cd9_1
mpfr pkgs/main/linux-64::mpfr-4.0.1-hdf1c602_3
mpmath pkgs/main/linux-64::mpmath-1.1.0-py37_0
msgpack-python pkgs/main/linux-64::msgpack-python-0.6.1-py37hfd86e86_1
multipledispatch pkgs/main/linux-64::multipledispatch-0.6.0-py37_0
navigator-updater pkgs/main/linux-64::navigator-updater-0.2.1-py37_0
nbconvert pkgs/main/linux-64::nbconvert-5.6.0-py37_1
nbformat pkgs/main/linux-64::nbformat-4.4.0-py37_0
ncurses pkgs/main/linux-64::ncurses-6.1-he6710b0_1
networkx pkgs/main/noarch::networkx-2.3-py_0
nltk pkgs/main/linux-64::nltk-3.4.5-py37_0
nose pkgs/main/linux-64::nose-1.3.7-py37_2
notebook pkgs/main/linux-64::notebook-6.0.1-py37_0
numba pkgs/main/linux-64::numba-0.45.1-py37h962f231_0
numexpr pkgs/main/linux-64::numexpr-2.7.0-py37h9e4a6bb_0
numpy pkgs/main/linux-64::numpy-1.17.2-py37haad9e8e_0
numpy-base pkgs/main/linux-64::numpy-base-1.17.2-py37hde5b4d6_0
numpydoc pkgs/main/noarch::numpydoc-0.9.1-py_0
olefile pkgs/main/linux-64::olefile-0.46-py37_0
openpyxl pkgs/main/noarch::openpyxl-3.0.0-py_0
openssl pkgs/main/linux-64::openssl-1.1.1d-h7b6447c_2
packaging pkgs/main/noarch::packaging-19.2-py_0
pandas pkgs/main/linux-64::pandas-0.25.1-py37he6710b0_0
pandoc pkgs/main/linux-64::pandoc-2.2.3.2-0
pandocfilters pkgs/main/linux-64::pandocfilters-1.4.2-py37_1
pango pkgs/main/linux-64::pango-1.42.4-h049681c_0
parso pkgs/main/noarch::parso-0.5.1-py_0
partd pkgs/main/noarch::partd-1.0.0-py_0
patchelf pkgs/main/linux-64::patchelf-0.9-he6710b0_3
path.py pkgs/main/noarch::path.py-12.0.1-py_0
pathlib2 pkgs/main/linux-64::pathlib2-2.3.5-py37_0
patsy pkgs/main/linux-64::patsy-0.5.1-py37_0
pcre pkgs/main/linux-64::pcre-8.43-he6710b0_0
pep8 pkgs/main/linux-64::pep8-1.7.1-py37_0
pexpect pkgs/main/linux-64::pexpect-4.7.0-py37_0
pickleshare pkgs/main/linux-64::pickleshare-0.7.5-py37_0
pillow pkgs/main/linux-64::pillow-6.2.0-py37h34e0f95_0
pip pkgs/main/linux-64::pip-19.2.3-py37_0
pixman pkgs/main/linux-64::pixman-0.38.0-h7b6447c_0
pkginfo pkgs/main/linux-64::pkginfo-1.5.0.1-py37_0
pluggy pkgs/main/linux-64::pluggy-0.13.0-py37_0
ply pkgs/main/linux-64::ply-3.11-py37_0
prometheus_client pkgs/main/noarch::prometheus_client-0.7.1-py_0
prompt_toolkit pkgs/main/noarch::prompt_toolkit-2.0.10-py_0
psutil pkgs/main/linux-64::psutil-5.6.3-py37h7b6447c_0
ptyprocess pkgs/main/linux-64::ptyprocess-0.6.0-py37_0
py pkgs/main/linux-64::py-1.8.0-py37_0
py-lief pkgs/main/linux-64::py-lief-0.9.0-py37h7725739_2
pycodestyle pkgs/main/linux-64::pycodestyle-2.5.0-py37_0
pycosat pkgs/main/linux-64::pycosat-0.6.3-py37h14c3975_0
pycparser pkgs/main/linux-64::pycparser-2.19-py37_0
pycrypto pkgs/main/linux-64::pycrypto-2.6.1-py37h14c3975_9
pycurl pkgs/main/linux-64::pycurl-7.43.0.3-py37h1ba5d50_0
pyflakes pkgs/main/linux-64::pyflakes-2.1.1-py37_0
pygments pkgs/main/noarch::pygments-2.4.2-py_0
pylint pkgs/main/linux-64::pylint-2.4.2-py37_0
pyodbc pkgs/main/linux-64::pyodbc-4.0.27-py37he6710b0_0
pyopenssl pkgs/main/linux-64::pyopenssl-19.0.0-py37_0
pyparsing pkgs/main/noarch::pyparsing-2.4.2-py_0
pyqt pkgs/main/linux-64::pyqt-5.9.2-py37h05f1152_2
pyrsistent pkgs/main/linux-64::pyrsistent-0.15.4-py37h7b6447c_0
pysocks pkgs/main/linux-64::pysocks-1.7.1-py37_0
pytables pkgs/main/linux-64::pytables-3.5.2-py37h71ec239_1
pytest pkgs/main/linux-64::pytest-5.2.1-py37_0
pytest-arraydiff pkgs/main/linux-64::pytest-arraydiff-0.3-py37h39e3cac_0
pytest-astropy pkgs/main/linux-64::pytest-astropy-0.5.0-py37_0
pytest-doctestplus pkgs/main/noarch::pytest-doctestplus-0.4.0-py_0
pytest-openfiles pkgs/main/noarch::pytest-openfiles-0.4.0-py_0
pytest-remotedata pkgs/main/linux-64::pytest-remotedata-0.3.2-py37_0
python pkgs/main/linux-64::python-3.7.4-h265db76_1
python-dateutil pkgs/main/linux-64::python-dateutil-2.8.0-py37_0
python-libarchive~ pkgs/main/linux-64::python-libarchive-c-2.8-py37_13
pytz pkgs/main/noarch::pytz-2019.3-py_0
pywavelets pkgs/main/linux-64::pywavelets-1.0.3-py37hdd07704_1
pyyaml pkgs/main/linux-64::pyyaml-5.1.2-py37h7b6447c_0
pyzmq pkgs/main/linux-64::pyzmq-18.1.0-py37he6710b0_0
qt pkgs/main/linux-64::qt-5.9.7-h5867ecd_1
qtawesome pkgs/main/noarch::qtawesome-0.6.0-py_0
qtconsole pkgs/main/noarch::qtconsole-4.5.5-py_0
qtpy pkgs/main/noarch::qtpy-1.9.0-py_0
readline pkgs/main/linux-64::readline-7.0-h7b6447c_5
requests pkgs/main/linux-64::requests-2.22.0-py37_0
ripgrep pkgs/main/linux-64::ripgrep-0.10.0-hc07d326_0
rope pkgs/main/noarch::rope-0.14.0-py_0
ruamel_yaml pkgs/main/linux-64::ruamel_yaml-0.15.46-py37h14c3975_0
scikit-image pkgs/main/linux-64::scikit-image-0.15.0-py37he6710b0_0
scikit-learn pkgs/main/linux-64::scikit-learn-0.21.3-py37hd81dba3_0
scipy pkgs/main/linux-64::scipy-1.3.1-py37h7c811a0_0
seaborn pkgs/main/linux-64::seaborn-0.9.0-py37_0
secretstorage pkgs/main/linux-64::secretstorage-3.1.1-py37_0
send2trash pkgs/main/linux-64::send2trash-1.5.0-py37_0
setuptools pkgs/main/linux-64::setuptools-41.4.0-py37_0
simplegeneric pkgs/main/linux-64::simplegeneric-0.8.1-py37_2
singledispatch pkgs/main/linux-64::singledispatch-3.4.0.3-py37_0
sip pkgs/main/linux-64::sip-4.19.8-py37hf484d3e_0
six pkgs/main/linux-64::six-1.12.0-py37_0
snappy pkgs/main/linux-64::snappy-1.1.7-hbae5bb6_3
snowballstemmer pkgs/main/noarch::snowballstemmer-2.0.0-py_0
sortedcollections pkgs/main/linux-64::sortedcollections-1.1.2-py37_0
sortedcontainers pkgs/main/linux-64::sortedcontainers-2.1.0-py37_0
soupsieve pkgs/main/linux-64::soupsieve-1.9.3-py37_0
sphinx pkgs/main/noarch::sphinx-2.2.0-py_0
sphinxcontrib pkgs/main/linux-64::sphinxcontrib-1.0-py37_1
sphinxcontrib-app~ pkgs/main/noarch::sphinxcontrib-applehelp-1.0.1-py_0
sphinxcontrib-dev~ pkgs/main/noarch::sphinxcontrib-devhelp-1.0.1-py_0
sphinxcontrib-htm~ pkgs/main/noarch::sphinxcontrib-htmlhelp-1.0.2-py_0
sphinxcontrib-jsm~ pkgs/main/noarch::sphinxcontrib-jsmath-1.0.1-py_0
sphinxcontrib-qth~ pkgs/main/noarch::sphinxcontrib-qthelp-1.0.2-py_0
sphinxcontrib-ser~ pkgs/main/noarch::sphinxcontrib-serializinghtml-1.1.3-py_0
sphinxcontrib-web~ pkgs/main/noarch::sphinxcontrib-websupport-1.1.2-py_0
spyder pkgs/main/linux-64::spyder-3.3.6-py37_0
spyder-kernels pkgs/main/linux-64::spyder-kernels-0.5.2-py37_0
sqlalchemy pkgs/main/linux-64::sqlalchemy-1.3.9-py37h7b6447c_0
sqlite pkgs/main/linux-64::sqlite-3.30.0-h7b6447c_0
statsmodels pkgs/main/linux-64::statsmodels-0.10.1-py37hdd07704_0
sympy pkgs/main/linux-64::sympy-1.4-py37_0
tbb pkgs/main/linux-64::tbb-2019.4-hfd86e86_0
tblib pkgs/main/noarch::tblib-1.4.0-py_0
terminado pkgs/main/linux-64::terminado-0.8.2-py37_0
testpath pkgs/main/linux-64::testpath-0.4.2-py37_0
tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0
toolz pkgs/main/noarch::toolz-0.10.0-py_0
tornado pkgs/main/linux-64::tornado-6.0.3-py37h7b6447c_0
tqdm pkgs/main/noarch::tqdm-4.36.1-py_0
traitlets pkgs/main/linux-64::traitlets-4.3.3-py37_0
unicodecsv pkgs/main/linux-64::unicodecsv-0.14.1-py37_0
unixodbc pkgs/main/linux-64::unixodbc-2.3.7-h14c3975_0
urllib3 pkgs/main/linux-64::urllib3-1.24.2-py37_0
wcwidth pkgs/main/linux-64::wcwidth-0.1.7-py37_0
webencodings pkgs/main/linux-64::webencodings-0.5.1-py37_1
werkzeug pkgs/main/noarch::werkzeug-0.16.0-py_0
wheel pkgs/main/linux-64::wheel-0.33.6-py37_0
widgetsnbextension pkgs/main/linux-64::widgetsnbextension-3.5.1-py37_0
wrapt pkgs/main/linux-64::wrapt-1.11.2-py37h7b6447c_0
wurlitzer pkgs/main/linux-64::wurlitzer-1.0.3-py37_0
xlrd pkgs/main/linux-64::xlrd-1.2.0-py37_0
xlsxwriter pkgs/main/noarch::xlsxwriter-1.2.1-py_0
xlwt pkgs/main/linux-64::xlwt-1.3.0-py37_0
xz pkgs/main/linux-64::xz-5.2.4-h14c3975_4
yaml pkgs/main/linux-64::yaml-0.1.7-had09818_2
zeromq pkgs/main/linux-64::zeromq-4.3.1-he6710b0_3
zict pkgs/main/noarch::zict-1.0.0-py_0
zipp pkgs/main/noarch::zipp-0.6.0-py_0
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
zstd pkgs/main/linux-64::zstd-1.3.7-h0b5b093_0
Preparing transaction: done
Executing transaction: done
installation finished.
WARNING:
You currently have a PYTHONPATH environment variable set. This may cause
unexpected behavior when running the Python interpreter in Anaconda3.
For best results, please verify that your PYTHONPATH only points to
directories of packages that are compatible with the Python interpreter
in Anaconda3: /usr/local
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.7.12
latest version: 4.8.2
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /usr/local
added / updated specs:
- deepchem-gpu=2.3.0
The following packages will be downloaded:
package | build
---------------------------|-----------------
_py-xgboost-mutex-2.0 | cpu_0 8 KB conda-forge
_tflow_select-2.1.0 | gpu 2 KB
absl-py-0.9.0 | py37_0 162 KB conda-forge
astor-0.7.1 | py_0 22 KB conda-forge
c-ares-1.15.0 | h516909a_1001 100 KB conda-forge
certifi-2019.9.11 | py37_0 147 KB conda-forge
conda-4.8.2 | py37_0 3.0 MB conda-forge
cudatoolkit-10.1.243 | h6bb024c_0 347.4 MB
cudnn-7.6.5 | cuda10.1_0 179.9 MB
cupti-10.1.168 | 0 1.4 MB
deepchem-gpu-2.3.0 | py37_0 2.1 MB deepchem
fftw3f-3.3.4 | 2 1.2 MB omnia
gast-0.3.3 | py_0 12 KB conda-forge
google-pasta-0.1.8 | py_0 42 KB conda-forge
grpcio-1.23.0 | py37he9ae1f9_0 1.1 MB conda-forge
keras-applications-1.0.8 | py_1 30 KB conda-forge
keras-preprocessing-1.1.0 | py_0 33 KB conda-forge
libboost-1.67.0 | h46d08c1_4 13.0 MB
libprotobuf-3.11.4 | h8b12597_0 4.8 MB conda-forge
libxgboost-0.90 | he1b5a44_4 2.4 MB conda-forge
markdown-3.2.1 | py_0 61 KB conda-forge
mdtraj-1.9.3 | py37h00575c5_0 1.9 MB conda-forge
openmm-7.4.1 |py37_cuda101_rc_1 11.9 MB omnia
pdbfixer-1.6 | py37_0 190 KB omnia
protobuf-3.11.4 | py37he1b5a44_0 699 KB conda-forge
py-boost-1.67.0 | py37h04863e7_4 278 KB
py-xgboost-0.90 | py37_4 73 KB conda-forge
rdkit-2019.09.3.0 | py37hc20afe1_1 23.7 MB rdkit
simdna-0.4.2 | py_0 627 KB deepchem
tensorboard-1.14.0 | py37_0 3.2 MB conda-forge
tensorflow-1.14.0 |gpu_py37h74c33d7_0 4 KB
tensorflow-base-1.14.0 |gpu_py37he45bfe2_0 146.3 MB
tensorflow-estimator-1.14.0| py37h5ca1d4c_0 645 KB conda-forge
tensorflow-gpu-1.14.0 | h0d30ee6_0 3 KB
termcolor-1.1.0 | py_2 6 KB conda-forge
xgboost-0.90 | py37he1b5a44_4 11 KB conda-forge
------------------------------------------------------------
Total: 746.5 MB
The following NEW packages will be INSTALLED:
_py-xgboost-mutex conda-forge/linux-64::_py-xgboost-mutex-2.0-cpu_0
_tflow_select pkgs/main/linux-64::_tflow_select-2.1.0-gpu
absl-py conda-forge/linux-64::absl-py-0.9.0-py37_0
astor conda-forge/noarch::astor-0.7.1-py_0
c-ares conda-forge/linux-64::c-ares-1.15.0-h516909a_1001
cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0
cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0
cupti pkgs/main/linux-64::cupti-10.1.168-0
deepchem-gpu deepchem/linux-64::deepchem-gpu-2.3.0-py37_0
fftw3f omnia/linux-64::fftw3f-3.3.4-2
gast conda-forge/noarch::gast-0.3.3-py_0
google-pasta conda-forge/noarch::google-pasta-0.1.8-py_0
grpcio conda-forge/linux-64::grpcio-1.23.0-py37he9ae1f9_0
keras-applications conda-forge/noarch::keras-applications-1.0.8-py_1
keras-preprocessi~ conda-forge/noarch::keras-preprocessing-1.1.0-py_0
libboost pkgs/main/linux-64::libboost-1.67.0-h46d08c1_4
libprotobuf conda-forge/linux-64::libprotobuf-3.11.4-h8b12597_0
libxgboost conda-forge/linux-64::libxgboost-0.90-he1b5a44_4
markdown conda-forge/noarch::markdown-3.2.1-py_0
mdtraj conda-forge/linux-64::mdtraj-1.9.3-py37h00575c5_0
openmm omnia/linux-64::openmm-7.4.1-py37_cuda101_rc_1
pdbfixer omnia/linux-64::pdbfixer-1.6-py37_0
protobuf conda-forge/linux-64::protobuf-3.11.4-py37he1b5a44_0
py-boost pkgs/main/linux-64::py-boost-1.67.0-py37h04863e7_4
py-xgboost conda-forge/linux-64::py-xgboost-0.90-py37_4
rdkit rdkit/linux-64::rdkit-2019.09.3.0-py37hc20afe1_1
simdna deepchem/noarch::simdna-0.4.2-py_0
tensorboard conda-forge/linux-64::tensorboard-1.14.0-py37_0
tensorflow pkgs/main/linux-64::tensorflow-1.14.0-gpu_py37h74c33d7_0
tensorflow-base pkgs/main/linux-64::tensorflow-base-1.14.0-gpu_py37he45bfe2_0
tensorflow-estima~ conda-forge/linux-64::tensorflow-estimator-1.14.0-py37h5ca1d4c_0
tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-1.14.0-h0d30ee6_0
termcolor conda-forge/noarch::termcolor-1.1.0-py_2
xgboost conda-forge/linux-64::xgboost-0.90-py37he1b5a44_4
The following packages will be UPDATED:
conda pkgs/main::conda-4.7.12-py37_0 --> conda-forge::conda-4.8.2-py37_0
The following packages will be SUPERSEDED by a higher-priority channel:
certifi pkgs/main --> conda-forge
Downloading and Extracting Packages
keras-applications-1 | 30 KB | : 100% 1.0/1 [00:00<00:00, 8.82it/s]
libboost-1.67.0 | 13.0 MB | : 100% 1.0/1 [00:01<00:00, 1.85s/it]
absl-py-0.9.0 | 162 KB | : 100% 1.0/1 [00:00<00:00, 11.13it/s]
libxgboost-0.90 | 2.4 MB | : 100% 1.0/1 [00:00<00:00, 2.16it/s]
cupti-10.1.168 | 1.4 MB | : 100% 1.0/1 [00:00<00:00, 7.39it/s]
termcolor-1.1.0 | 6 KB | : 100% 1.0/1 [00:00<00:00, 22.33it/s]
tensorflow-base-1.14 | 146.3 MB | : 100% 1.0/1 [00:14<00:00, 14.12s/it]
tensorboard-1.14.0 | 3.2 MB | : 100% 1.0/1 [00:00<00:00, 1.87it/s]
cudnn-7.6.5 | 179.9 MB | : 100% 1.0/1 [00:10<00:00, 10.91s/it]
conda-4.8.2 | 3.0 MB | : 100% 1.0/1 [00:00<00:00, 1.22it/s]
py-boost-1.67.0 | 278 KB | : 100% 1.0/1 [00:00<00:00, 8.26it/s]
py-xgboost-0.90 | 73 KB | : 100% 1.0/1 [00:00<00:00, 18.94it/s]
tensorflow-gpu-1.14. | 3 KB | : 100% 1.0/1 [00:00<00:00, 9.85it/s]
mdtraj-1.9.3 | 1.9 MB | : 100% 1.0/1 [00:00<00:00, 2.17it/s]
rdkit-2019.09.3.0 | 23.7 MB | : 100% 1.0/1 [00:05<00:00, 76.64s/it]
deepchem-gpu-2.3.0 | 2.1 MB | : 100% 1.0/1 [00:00<00:00, 50.91s/it]
grpcio-1.23.0 | 1.1 MB | : 100% 1.0/1 [00:00<00:00, 4.14it/s]
_py-xgboost-mutex-2. | 8 KB | : 100% 1.0/1 [00:00<00:00, 27.43it/s]
libprotobuf-3.11.4 | 4.8 MB | : 100% 1.0/1 [00:01<00:00, 1.08s/it]
keras-preprocessing- | 33 KB | : 100% 1.0/1 [00:00<00:00, 22.50it/s]
markdown-3.2.1 | 61 KB | : 100% 1.0/1 [00:00<00:00, 20.73it/s]
google-pasta-0.1.8 | 42 KB | : 100% 1.0/1 [00:00<00:00, 11.05it/s]
protobuf-3.11.4 | 699 KB | : 100% 1.0/1 [00:00<00:00, 4.10it/s]
_tflow_select-2.1.0 | 2 KB | : 100% 1.0/1 [00:00<00:00, 10.36it/s]
simdna-0.4.2 | 627 KB | : 100% 1.0/1 [00:00<00:00, 2.80it/s]
c-ares-1.15.0 | 100 KB | : 100% 1.0/1 [00:00<00:00, 13.50it/s]
gast-0.3.3 | 12 KB | : 100% 1.0/1 [00:00<00:00, 20.80it/s]
certifi-2019.9.11 | 147 KB | : 100% 1.0/1 [00:00<00:00, 7.10it/s]
fftw3f-3.3.4 | 1.2 MB | : 100% 1.0/1 [00:00<00:00, 12.56s/it]
openmm-7.4.1 | 11.9 MB | : 100% 1.0/1 [00:03<00:00, 108.64s/it]
tensorflow-1.14.0 | 4 KB | : 100% 1.0/1 [00:00<00:00, 10.64it/s]
tensorflow-estimator | 645 KB | : 100% 1.0/1 [00:00<00:00, 4.16it/s]
astor-0.7.1 | 22 KB | : 100% 1.0/1 [00:00<00:00, 26.30it/s]
xgboost-0.90 | 11 KB | : 100% 1.0/1 [00:00<00:00, 32.86it/s]
cudatoolkit-10.1.243 | 347.4 MB | : 100% 1.0/1 [00:19<00:00, 19.76s/it]
pdbfixer-1.6 | 190 KB | : 100% 1.0/1 [00:00<00:00, 1.50it/s]
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
</code>
Let's start with some basic imports_____no_output_____
<code>
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
import numpy as np
from rdkit import Chem
from deepchem.feat import ConvMolFeaturizer, WeaveFeaturizer, CircularFingerprint
from deepchem.feat import AdjacencyFingerprint, RDKitDescriptors
from deepchem.feat import BPSymmetryFunctionInput, CoulombMatrix, CoulombMatrixEig
from deepchem.utils import conformers/usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=FutureWarning)
</code>
We use `propane` ($CH_3CH_2CH_3$) as a running example throughout this tutorial. Many of the featurization methods use conformers of the molecules. A conformer can be generated using the `ConformerGenerator` class in `deepchem.utils.conformers`. _____no_output_____### RDKitDescriptors_____no_output_____`RDKitDescriptors` featurizes a molecule by computing descriptor values for a specified set of descriptors. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using `RDKitDescriptors.allowedDescriptors`.
The featurizer uses the descriptors in `rdkit.Chem.Descriptors.descList`, checks whether they are in the list of allowed descriptors, and computes the descriptor values for the molecule._____no_output_____
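As a rough sketch of what this featurizer does under the hood, you can evaluate RDKit's descriptor functions directly. The cell below is a minimal illustration, not DeepChem's implementation: it loops over `rdkit.Chem.Descriptors.descList` and stacks the values into a vector, whereas the actual featurizer additionally filters that list against `allowedDescriptors`.

<code>
# Minimal sketch (not DeepChem's implementation): evaluate every descriptor in
# rdkit.Chem.Descriptors.descList for a molecule and stack the values into a
# feature vector. The real featurizer also filters against allowedDescriptors.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CCC")  # propane

names, values = [], []
for name, fn in Descriptors.descList:  # list of (descriptor name, function) pairs
    names.append(name)
    values.append(fn(mol))

features = np.asarray(values, dtype=float)
print(len(features), "descriptor values computed, e.g.", names[0], "=", features[0])
</code>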
<code>
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)_____no_output_____
</code>
Let's check the allowed list of descriptors. As you will see shortly, there's a wide range of chemical properties that RDKit computes for us._____no_output_____
<code>
for descriptor in RDKitDescriptors.allowedDescriptors:
print(descriptor)MaxAbsPartialCharge
EState_VSA6
SMR_VSA10
EState_VSA3
SlogP_VSA2
SlogP_VSA12
PEOE_VSA8
LabuteASA
SMR_VSA2
Chi4n
MaxPartialCharge
EState_VSA9
EState_VSA8
SMR_VSA8
EState_VSA2
SMR_VSA4
RingCount
SlogP_VSA6
MinAbsEStateIndex
VSA_EState4
PEOE_VSA7
Chi2v
PEOE_VSA12
NumAliphaticCarbocycles
VSA_EState8
NumHeteroatoms
MolLogP
PEOE_VSA10
SlogP_VSA9
EState_VSA10
Chi1v
MolWt
EState_VSA11
HeavyAtomMolWt
Chi4v
MinPartialCharge
PEOE_VSA1
SlogP_VSA4
MaxAbsEStateIndex
PEOE_VSA2
NumValenceElectrons
Chi1
TPSA
NumAromaticHeterocycles
SMR_VSA1
SMR_VSA3
Chi1n
FractionCSP3
NOCount
SMR_VSA9
VSA_EState10
EState_VSA7
NumAromaticCarbocycles
Chi3n
VSA_EState1
NumSaturatedRings
Kappa1
PEOE_VSA4
NumSaturatedHeterocycles
EState_VSA5
MolMR
SMR_VSA5
NumSaturatedCarbocycles
Chi2n
MinAbsPartialCharge
MinEStateIndex
PEOE_VSA14
SlogP_VSA3
SlogP_VSA11
NumRotatableBonds
VSA_EState3
ExactMolWt
VSA_EState6
Kappa3
VSA_EState9
Chi3v
Kappa2
EState_VSA4
SMR_VSA7
NumHDonors
PEOE_VSA3
SMR_VSA6
SlogP_VSA1
NumAliphaticRings
HallKierAlpha
NumAromaticRings
Chi0n
PEOE_VSA6
SlogP_VSA8
VSA_EState7
VSA_EState2
BalabanJ
SlogP_VSA5
EState_VSA1
NHOHCount
BertzCT
Chi0
NumRadicalElectrons
PEOE_VSA9
SlogP_VSA10
SlogP_VSA7
HeavyAtomCount
NumHAcceptors
VSA_EState5
PEOE_VSA13
NumAliphaticHeterocycles
Ipc
MaxEStateIndex
PEOE_VSA5
Chi0v
PEOE_VSA11
rdkit_desc = RDKitDescriptors()
features = rdkit_desc._featurize(example_mol)
print('The number of descriptors present are: ', len(features))The number of descriptors present are: 111
</code>
### BPSymmetryFunction_____no_output_____The Behler-Parrinello symmetry function featurizer, `BPSymmetryFunction`, featurizes a molecule by computing the atomic number and coordinates for each atom in the molecule. The features can be used as input for symmetry functions, like `RadialSymmetry`, `DistanceMatrix` and `DistanceCutoff`. More details on these symmetry functions can be found in [this paper](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.98.146401). These functions can be found in `deepchem.feat.coulomb_matrices`.
The featurizer takes in `max_atoms` as an argument. As input, it takes in a conformer of the molecule and computes:
1. coordinates of every atom in the molecule (in Bohr units)
2. the atomic numbers for all atoms.
These features are concatenated and padded with zeros to account for the different numbers of atoms across molecules._____no_output_____
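Before running the featurizer itself, here is a minimal sketch of how such a padded feature matrix could be built by hand, under the assumption that each row holds [atomic number, x, y, z] with RDKit's Angstrom coordinates converted to Bohr. It illustrates the layout described above; it is not DeepChem's exact implementation.

<code>
# Minimal sketch of the layout described above (an assumption, not DeepChem's
# exact implementation): one row per atom holding [atomic number, x, y, z],
# with RDKit's Angstrom coordinates converted to Bohr and the matrix
# zero-padded to max_atoms rows.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

ANGSTROM_TO_BOHR = 1.8897261  # approximate unit conversion

def sketch_bp_input(smiles, max_atoms=20):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=0)              # generate one conformer
    coords = mol.GetConformer().GetPositions() * ANGSTROM_TO_BOHR
    numbers = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
    feats = np.zeros((max_atoms, 4))
    feats[:len(numbers), 0] = numbers
    feats[:len(numbers), 1:] = coords
    return feats

sketch_bp_input("CCC").shape  # (20, 4) for propane with max_atoms=20
</code>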
<code>
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)_____no_output_____
</code>
Let's now take a look at the actual featurized matrix that comes out._____no_output_____
<code>
bp_sym = BPSymmetryFunctionInput(max_atoms=20)
features = bp_sym._featurize(mol=example_mol)
features_____no_output_____
</code>
A simple check for the featurization would be to count the different atomic numbers present in the features._____no_output_____
<code>
atomic_numbers = features[:, 0]
from collections import Counter
unique_numbers = Counter(atomic_numbers)
print(unique_numbers)Counter({0.0: 9, 1.0: 8, 6.0: 3})
</code>
For propane, we have $3$ `C-atoms` and $8$ `H-atoms`, and these numbers are in agreement with the results shown above. There's also the additional padding of 9 atoms, to equalize with `max_atoms`._____no_output_____### CoulombMatrix_____no_output_____`CoulombMatrix`, featurizes a molecule by computing the coulomb matrices for different conformers of the molecule, and returning it as a list.
A Coulomb matrix tries to encode the energy structure of a molecule. The matrix is symmetric, with the off-diagonal elements capturing the Coulombic repulsion between pairs of atoms and the diagonal elements capturing atomic energies using the atomic numbers. More information on the functional forms used can be found [here](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.108.058301).
The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`), generating additional random Coulomb matrices (`randomize`), and getting only the upper triangular matrix (`upper_tri`)._____no_output_____
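For reference, the functional form described in the Rupp et al. paper linked above (summarized here, so treat the exact constants as coming from that reference) is $$M_{ij} = \begin{cases} \tfrac{1}{2} Z_i^{2.4} & i = j \\ \dfrac{Z_i Z_j}{\lVert R_i - R_j \rVert} & i \neq j \end{cases}$$ where $Z_i$ is the atomic number of atom $i$ and $R_i$ its position: the diagonal approximates the energy of the isolated atom and the off-diagonal entries are the pairwise Coulomb repulsions._____no_output_____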
<code>
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))Number of available conformers for propane: 1
coulomb_mat = CoulombMatrix(max_atoms=20, randomize=False, remove_hydrogens=False, upper_tri=False)
features = coulomb_mat._featurize(mol=example_mol)_____no_output_____
</code>
A simple check for the featurization is to see if the feature list has the same length as the number of conformers._____no_output_____
<code>
print(len(example_mol.GetConformers()) == len(features))True
</code>
### CoulombMatrixEig_____no_output_____`CoulombMatrix` is invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. However, the matrix is not invariant to random permutations of the atom indices. To deal with this, the `CoulombMatrixEig` featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atom indices.
`CoulombMatrixEig` inherits from `CoulombMatrix` and featurizes a molecule by first computing the Coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each Coulomb matrix. These eigenvalues are then padded to account for variation in the number of atoms across molecules.
The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`) and generating additional random Coulomb matrices (`randomize`)._____no_output_____
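Before running the featurizer, here is a minimal NumPy sketch of the idea (a toy matrix with made-up values, not the DeepChem implementation): permuting atom indices only permutes rows and columns, so the sorted eigenvalues are unchanged, and zero-padding makes the feature length independent of molecule size._____no_output_____
<code>
import numpy as np

# Toy symmetric "Coulomb matrix" with invented values, just to illustrate the eigenvalue trick
coulomb = np.array([[36.9, 33.7,  5.5],
                    [33.7, 36.9,  5.5],
                    [ 5.5,  5.5,  0.5]])
eigvals = np.linalg.eigvalsh(coulomb)[::-1]  # eigenvalues of a symmetric matrix, largest first
max_atoms = 20
padded = np.zeros(max_atoms)
padded[:len(eigvals)] = eigvals              # zero-pad so every molecule yields the same length
print(padded)_____no_output_____
</code>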
<code>
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))Number of available conformers for propane: 1
coulomb_mat_eig = CoulombMatrixEig(max_atoms=20, randomize=False, remove_hydrogens=False)
features = coulomb_mat_eig._featurize(mol=example_mol)_____no_output_____print(len(example_mol.GetConformers()) == len(features))True
</code>
### Adjacency Fingerprints_____no_output_____TODO(rbharath): This tutorial still needs to be expanded out with the additional fingerprints._____no_output_____# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!_____no_output_____
| {
"repository": "patrickphatnguyen/deepchem",
"path": "examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb",
"matched_keywords": [
"STAR",
"drug discovery"
],
"stars": null,
"size": 90005,
"hexsha": "d04cb1896164f74b7504b352aa1e004c7724f698",
"max_line_length": 2868,
"avg_line_length": 58.0303030303,
"alphanum_fraction": 0.4690961613
} |
# Notebook from neoaksa/IMDB_Spider
Path: Movie_Analysis.ipynb
[View in Colaboratory](https://colab.research.google.com/github/neoaksa/IMDB_Spider/blob/master/Movie_Analysis.ipynb)_____no_output_____
<code>
# I've already uploaded three files onto Google Drive; you can use the upload function below to upload the files.
# # upload
# uploaded = files.upload()
# for fn in uploaded.keys():
# print('User uploaded file "{name}" with length {length} bytes'.format(
# name=fn, length=len(uploaded[fn])))
_____no_output_____import pandas as pd
import numpy as np
import urllib.request
! pip install pydrive
# these classes allow you to request the Google drive API
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build
from google.colab import auth
# authenticate google drive
auth.authenticate_user()
drive_service = build('drive', 'v3')
# 1. Authenticate and create the PyDrive client.
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
def downloadFile(inputfilename,outputfilename):
downloaded = drive.CreateFile({'id': inputfilename})
# assume the file is called file.csv and it's located at the root of your drive
downloaded.GetContentFile(outputfilename)
# traning file download
MovieItemFile = downloadFile("1w8Ce9An_6vJH_o5Ux7A8Zf0zc2E419xN","MovieItem.csv")
MovieReview = downloadFile("1R7kAHF9X_YnPGwsclqMn2_XA1WgVgjlC","MovieReview.csv")
MovieStar = downloadFile("15d3ZiHoqvxxdRhS9-5it979D0M60Ued0","MovieStar.csv")
df_movieItem = pd.read_csv('MovieItem.csv', delimiter=',',index_col=['id'])
df_movieReview = pd.read_csv('MovieReview.csv', delimiter=',',index_col=['id'])
df_movieStar = pd.read_csv('MovieStar.csv', delimiter=',',index_col=['id'])
# sort by index id(also known by rating)
df_movieItem = df_movieItem.sort_index(axis=0)
# rating overview
import seaborn as sns
import matplotlib.pyplot as plt  # needed for plt.title / plt.show below
sns.stripplot(data=df_movieItem,y='rating',jitter= True,orient = 'v' ,size=6)
plt.title('Movie Rating Overview')
plt.show()Requirement already satisfied: pydrive in /usr/local/lib/python3.6/dist-packages (1.3.1)
Requirement already satisfied: PyYAML>=3.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (3.13)
Requirement already satisfied: oauth2client>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (4.1.2)
Requirement already satisfied: google-api-python-client>=1.2 in /usr/local/lib/python3.6/dist-packages (from pydrive) (1.6.7)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.11.3)
Requirement already satisfied: six>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (1.11.0)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.2.2)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.4.4)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (3.4.2)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (3.0.0)
# stars analysis
# pre-process for movieItem and movieStar
star_list = []
for index,stars in df_movieItem[['stars','stars_id']].iterrows():
star_list += [(x.lstrip().replace('"',''),y.lstrip().replace('"',''))
for x,y in zip(stars['stars'][1:-1].replace('\'','').split(','),stars['stars_id'][1:-1].replace('\'','').split(','))]
# star_id_list += [x.lstrip().replace('"','') for x in stars['stars_id'][1:-1].replace('\'','').split(',')]
# reduce duplicate
star_list = list(set(star_list))
# create a dataframe for output
df_star = pd.DataFrame(columns=['stars_id','stars','avg_rating','num_movie'])
df_star['stars_id'] = [x[1] for x in star_list]
df_star['stars'] = [x[0] for x in star_list]
for index,star_id in enumerate(df_star['stars_id']):
filter = df_movieItem['stars_id'].str.contains(star_id)
df_star['num_movie'][index] = len(df_movieItem[filter])
df_star['avg_rating'][index] = pd.to_numeric(df_movieItem[filter]['rating'].str[2:-2]).sum(axis=0)/df_star['num_movie'][index]
# left join star information
df_star
# order by # of movies
df_star = df_star.sort_values(['num_movie'],ascending=False)
print(df_star.head(10))
# order by avg rating
df_star = df_star.sort_values(['avg_rating'],ascending=False)
print(df_star.head(10)) stars_id stars avg_rating num_movie
330 nm0000134 Robert De Niro 8.375 8
352 nm0000148 Harrison Ford 8.34286 7
172 nm0000138 Leonardo DiCaprio 8.3 6
250 nm0000158 Tom Hanks 8.38333 6
588 nm0000142 Clint Eastwood 8.28 5
62 nm0451148 Aamir Khan 8.2 5
539 nm0000122 Charles Chaplin 8.38 5
26 nm0000199 Al Pacino 8.65 4
208 nm0000197 Jack Nicholson 8.45 4
327 nm0000228 Kevin Spacey 8.425 4
stars_id stars avg_rating num_movie
427 nm0001001 James Caan 9.2 1
176 nm0348409 Bob Gunton 9.2 1
39 nm0000209 Tim Robbins 9.2 1
290 nm0005132 Heath Ledger 9 1
338 nm0001173 Aaron Eckhart 9 1
276 nm0000842 Martin Balsam 8.9 1
343 nm0000168 Samuel L. Jackson 8.9 1
303 nm0000237 John Travolta 8.9 1
398 nm0000553 Liam Neeson 8.9 1
177 nm0005212 Ian McKellen 8.8 3
</code>
According to this brief table, we can see that **Robert De Niro** appears in the most movies on the top 250 list, followed by **Harrison Ford**, **Tom Hanks** and **Leonardo DiCaprio**._____no_output_____
<code>
# visual stars
import matplotlib.pyplot as plt
# figure = plt.figure()
ax1 = plt.subplot()
df_aggbyMovie = df_star[df_star['num_movie']>0].groupby(['num_movie']).agg({'stars_id':np.size})
df_aggbyMovie.columns.values[0] ='freq'
df_aggbyMovie = df_aggbyMovie.sort_values(['freq'])
acc_numMovie = np.cumsum(df_aggbyMovie['freq'])
ax1.plot(acc_numMovie)
ax1.set_xlabel('# of movies')
ax1.set_ylabel('cumulative # of stars')
ax1.set_title('Cumulative chart for each segment')
plt.gca().invert_xaxis()
plt.show()
ax2 = plt.subplot()
ax2.pie(df_aggbyMovie,
labels=df_aggbyMovie.index,
startangle=90,
autopct='%1.1f%%')
ax2.set_title('Percentage of segments')
plt.show()
# check out which moive the best stars perform. - best stars: who took more than one movies in the top250 list
df_star_2plus = df_star[df_star['num_movie']>1]['stars_id']
i = 0
movie_list = []
for index,row in df_movieItem[['stars_id','title']].iterrows():
for x in df_star_2plus.values:
if x in row['stars_id']:
i +=1
movie_list.append(row['title'])
break
df_movieItem[df_movieItem['title'].isin(movie_list)].head(10)_____no_output_____
</code>
**165** of the top 250 movies feature at least one of the **100** best stars, where a "best star" is defined as one who appears in more than one movie on the list. We picked these 100 movie stars for further star research._____no_output_____
<code>
# movie star relationship analysis
df_movie_star_plus = df_star[df_star['num_movie']>2][['stars_id','stars']]
# transfer star list to relationship list
def starlist2network(list):
bi_list = []
i = 0
while i<len(list):
j = 1
while j<len(list)-i:
bi_list.append((list[i],list[i+j]))
j += 1
i += 1
return tuple(bi_list)
star_map_list =set()
for index,stars in df_movieItem[['stars']].iterrows():
star_list = []
star_list += [x.lstrip().replace('"','')
for x in stars['stars'][1:-1].replace('\'','').split(',')]
for item in starlist2network(star_list):
if item[0] in df_movie_star_plus['stars'].values and item[1] in df_movie_star_plus['stars'].values:
star_map_list.add(tuple(sorted(item)))
!pip install networkx
import networkx as nx
import matplotlib.pyplot as plt
# Creating a Graph
G = nx.Graph() # Right now G is empty
G.add_edges_from(star_map_list)
# k controls the distance between the nodes and varies between 0 and 1
# iterations is the number of times simulated annealing is run
# default k =0.1 and iterations=50
pos = nx.spring_layout(G,k=0.55,iterations=50)
nx.draw(G,pos, with_labels=True, font_weight='bold',node_shape = 'o')Requirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (2.1)
Requirement already satisfied: decorator>=4.1.0 in /usr/local/lib/python3.6/dist-packages (from networkx) (4.3.0)
</code>
I picked the stars who appear in more than 2 movies on the top 250 list and created a relationship network for them. We can see 5 major clusters; if we loosen the filter, maybe we can find more._____no_output_____
<code>
# pick 100 stars for age analysis
# rebin the year by 10 years
df_movieStar_bin = df_movieStar.copy()
df_movieStar_bin['name'] = df_movieStar_bin['name'].str[2:-2]
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].str[2:-2]
df_movieStar_bin['born_area'] = df_movieStar_bin['born_area'].str[2:-2]
df_movieStar_bin['born_year'] = pd.cut(pd.to_numeric(df_movieStar_bin['born_year'].str[0:4]),range(1900,2020,10),right=False)
df_movieStar_bin = df_movieStar_bin.dropna()
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].astype(str).str[1:5] + 's'
df_movieStar_bin = df_movieStar_bin[df_movieStar_bin.index.isin(df_star_2plus.values)]
fig = plt.figure(figsize=(12,6))
plt.style.use('fivethirtyeight')
ax3 = plt.subplot()
ax3.hist(df_movieStar_bin['born_year'])
ax3.set_title('Histogram of Star born year')
plt.xlabel('Star Born Year')
plt.ylabel('# of Star')
plt.show()
# star city anlysis
df_movieStar_bin['born_state'] = [x.split(',')[1] for x in df_movieStar_bin['born_area']]
df_movieStar_by_state = df_movieStar_bin.groupby(['born_state']).size().sort_values(ascending=False)
df_movieStar_by_state = df_movieStar_by_state[df_movieStar_by_state>=2].append(
pd.Series(df_movieStar_by_state[df_movieStar_by_state<2].sum(),index=['Others']))
# print(df_movieStar_by_state)
fig = plt.figure(figsize=(20,6))
plt.bar(range(len(df_movieStar_by_state)), df_movieStar_by_state, align='center', alpha=0.5)
plt.xticks(range(len(df_movieStar_by_state)), df_movieStar_by_state.index)
plt.ylabel('# of Stars')
plt.title('Movie Star by States')
plt.show()
_____no_output_____
</code>
Of the 100 picked movie stars, most were born between the **1930s and 1970s**. **California, Illinois and New Jersey** are the states with the most movie stars. Even so, no state or region is predominant._____no_output_____
<code>
# review analysis
!pip install wordcloud
!pip install multidict
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import string
import multidict as multidict
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
Lemmatizer = WordNetLemmatizer()
# remove punctuation
list_word = []
for text in df_movieReview['content'].values:
nopunc = [char.lower() for char in text if char not in string.punctuation]
nopunc = ''.join(nopunc)
list_word.append(nopunc)
# setting words unuseful
del_words = ['movie','character','film','story','wa','ha']# excluded words
word_type_list_In = ("JJ","NN") # only picked adj and noun
# word_list_Ex = ("/", "br", "<", ">","be","movie","film","have","do","none","none none")
words = {}
for sent in list_word:
text = nltk.word_tokenize(sent) # tokenize sentence to words
text = [Lemmatizer.lemmatize(word) for word in text] # get stem of words
text_tag = nltk.pos_tag(text) # get words type
for item in [x[0] for x in text_tag if x[1][:2] in word_type_list_In and x[0] not in del_words and x[0] not in stopwords.words('english')]:
if item not in words:
words[item] = 1
else:
words[item] += 1
#sort by value
sorted_words = sorted(words.items(), key=lambda x: x[1],reverse=True)
# filtered_words = ' '.join([x[0] for x in sorted_words if x[1]>=1000])
print(sorted_words[0:20])
fullTermsDict = multidict.MultiDict()
for key in words:
fullTermsDict.add(key, words[key])
# Create the wordcloud object
wordcloud = WordCloud(width=1600, height=800, margin=0,max_font_size=100).generate_from_frequencies(fullTermsDict)
# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.show()[('time', 6689), ('scene', 4773), ('great', 4118), ('life', 4079), ('best', 3854), ('good', 3790), ('way', 3752), ('people', 3487), ('many', 3194), ('year', 2941), ('first', 2832), ('man', 2769), ('thing', 2649), ('performance', 2588), ('world', 2253), ('actor', 2154), ('director', 1919), ('war', 1915), ('action', 1865), ('plot', 1851)]
_____no_output_____
</code>
| {
"repository": "neoaksa/IMDB_Spider",
"path": "Movie_Analysis.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 431862,
"hexsha": "d04ccbc418081bbcdaf4f6d9a11f0a31b1a288f0",
"max_line_length": 207218,
"avg_line_length": 555.0925449871,
"alphanum_fraction": 0.914523621
} |
# Notebook from SvetozarMateev/Data-Science
Path: DataScienceExam/Exam.ipynb
<code>
import pandas as pd
import scipy.stats as st
import matplotlib.pyplot as plt
import numpy as np
import operator_____no_output_____
</code>
# Crimes
### Svetozar Mateev_____no_output_____## Putting Crime in the US in Context
_____no_output_____First I am going to calculate the total crimes by dividing the population by 100,000 and then multiplying it by the crimes per capita. Then I am going to remove the NaN values._____no_output_____
<code>
crime_reports=pd.read_csv("report.csv")
crime_reports=crime_reports.dropna()
crime_reports=crime_reports.reset_index()
crime_reports["total_crimes"]=(crime_reports.population/100000*crime_reports.crimes_percapita)
#crime_reports[["population",'crimes_percapita','total_crimes']]
_____no_output_____
</code>
• Have a look at the “months_reported” column values. What do they mean? What percent of the rows have less than 12 months? How significant is that?_____no_output_____
<code>
crime_reports["months_reported"].unique()
less_than_twelve=crime_reports[crime_reports.months_reported<12]
print(str(len(less_than_twelve)/len(crime_reports.months_reported)*100)+'%')_____no_output_____
</code>
The months_reported column indicates how many months of the year have been reported. Only 1.9% of the rows have fewer than 12 months reported, which, at a 5% level, isn't significant._____no_output_____• Overall crime popularity: Create a bar chart of crime frequencies (total, not per capita). Display the type of crime and total occurrences (sum over all years). Sort largest to smallest. Are there any patterns? Which crime is most common?_____no_output_____
<code>
homicides_total_sum=crime_reports.homicides.sum()
rapes_total_sum=crime_reports.rapes.sum()
assaults_total_sum=crime_reports.assaults.sum()
robberies_total_sum=crime_reports.robberies.sum()
total_crimes_total_sum= crime_reports.total_crimes.sum()
homicides_frequency=homicides_total_sum/total_crimes_total_sum
rapes_frequency=rapes_total_sum/total_crimes_total_sum
assaults_frequency=assaults_total_sum/total_crimes_total_sum
robberies_frequency=robberies_total_sum/total_crimes_total_sum
plt.bar(height=[assaults_frequency,robberies_frequency,rapes_frequency,homicides_frequency],left=[1,2,3,4], align = "center",width=0.2)
plt.xticks([1,2,3,4,],['Assaults','Robberies','Rapes','Homicides'])
plt.ylabel("Frequency of a crime")
plt.show()_____no_output_____
</code>
The most frequent crimes are assaults, and I can see from the diagram that less serious crimes are committed more often._____no_output_____• Crime popularity by year: Break down the analysis of the previous graph by year. What is the most common crime (total, not per capita) for each year? What is the least common one?_____no_output_____
<code>
homicides_sum=0
rapes_sum=0
assaults_sum=0
robberies_sum=0
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_sum_year=year_df.homicides.sum()
rapes_sum_year=year_df.rapes.sum()
assaults_sum_year=year_df.assaults.sum()
robberies_sum_year=year_df.robberies.sum()
if(homicides_sum_year>rapes_sum_year and homicides_sum_year>assaults_sum_year and homicides_sum_year>robberies_sum_year):
        homicides_sum+=1
print(str(year)+' '+"homicides")
elif(homicides_sum_year<rapes_sum_year and rapes_sum_year>assaults_sum_year and rapes_sum_year>robberies_sum_year):
rapes_sum+=1
print(str(year)+' '+"rapes")
elif(homicides_sum_year<assaults_sum_year and rapes_sum_year<assaults_sum_year and assaults_sum_year>robberies_sum_year):
assaults_sum+=1
print(str(year)+' '+"assaults")
elif(homicides_sum_year<robberies_sum_year and rapes_sum_year<robberies_sum_year and assaults_sum_year<robberies_sum_year):
robberies_sum+=1
print(str(year)+' '+"robberies")
plt.bar(height=[assaults_sum,robberies_sum,homicides_sum,rapes_sum],left=[1,2,3,4],align='center')#most common one through the years
plt.xticks([1,2,3,4,],['Assaults','Robberies','Homicides','Rapes'])
plt.ylabel("Times a crime was most often for a year")
plt.show()
_____no_output_____
</code>
I can see from the bar chart that assaults were the most popular crime for a year almost thirty times, and that homicides and rapes were never the most popular crime for a year._____no_output_____• Crime evolution (e. g. crime rates as a function of time): How do crime rates per capita evolve over the years? Create a plot (or a series) of plots displaying how each rate evolves. Create another plot of all crimes (total, not per capita) over the years._____no_output_____
<code>
rapes_per_capita=[]
homicides_per_capita=[]
assaults_per_capita=[]
robberies_per_capita=[]
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_mean_year=year_df.homicides_percapita.mean()
rapes_mean_year=year_df.rapes_percapita.mean()
assaults_mean_year=year_df.assaults_percapita.mean()
robberies_mean_year=year_df.robberies_percapita.mean()
homicides_per_capita.append(homicides_mean_year)
rapes_per_capita.append(rapes_mean_year)
assaults_per_capita.append(assaults_mean_year)
robberies_per_capita.append(robberies_mean_year)
plt.plot(crime_reports.report_year.unique(),rapes_per_capita)
plt.suptitle("Rapes")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),homicides_per_capita)
plt.suptitle("Homicides")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),assaults_per_capita)
plt.suptitle("Assaults")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),robberies_per_capita)
plt.suptitle("Robberies")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()_____no_output_____
</code>
From the plots we can see that each crime now has a significantly lower rate per capita and that, for all of them, the peak was between 1990 and 1995._____no_output_____
<code>
rapes_per_year=[]
homicides_per_year=[]
assaults_per_year=[]
robberies_per_year=[]
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_mean_year=year_df.homicides.sum()
rapes_mean_year=year_df.rapes.sum()
assaults_mean_year=year_df.assaults.sum()
robberies_mean_year=year_df.robberies.sum()
homicides_per_year.append(homicides_mean_year)
rapes_per_year.append(rapes_mean_year)
assaults_per_year.append(assaults_mean_year)
robberies_per_year.append(robberies_mean_year)
plt.plot(crime_reports.report_year.unique(),rapes_per_year,label="Rapes")
plt.plot(crime_reports.report_year.unique(),assaults_per_year,label="Assaults")
plt.plot(crime_reports.report_year.unique(),homicides_per_year,label="Homicides")
plt.plot(crime_reports.report_year.unique(),robberies_per_year,label="Robberies")
plt.legend()
plt.ylabel("Number of crimes")
plt.xlabel("Years")
plt.show()_____no_output_____
</code>
Again our observations are confirmed: the peak of crime is around 1990, and at present there are far fewer crimes, except for rapes, which began to rise slightly between 2010 and 2015._____no_output_____## Crimes by States_____no_output_____• “Criminal” jurisdictions: Plot the sum of all crimes (total, not per capita) for each jurisdiction. Sort largest to smallest. Are any jurisdictions more prone to crime?_____no_output_____
<code>
#agency_jurisdiction
jurisdicitons=[]
counter=0
crimes_per_jurisdiction=[]
agencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for jurisdiciton in agencies_df.agency_jurisdiction.unique():
jurisdicition_df=agencies_df[agencies_df.agency_jurisdiction==jurisdiciton]
all_crimes=jurisdicition_df.violent_crimes.sum()
crimes_per_jurisdiction.append(all_crimes)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
df_plottt=pd.DataFrame({'area':jurisdicitons,'num':crimes_per_jurisdiction})
df_plottt=df_plottt.sort_values('num',ascending=False)
plt.bar(height=df_plottt.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_plottt.area,rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()_____no_output_____
</code>
From the bar chart we can see that the New York City, NY jurisdiction has the most crimes._____no_output_____• “Criminal” jurisdictions, part 2: Create the same type of chart as above, but use the crime rates per capita this time. Are you getting the same distribution? Why? You may need data from the “population” column to answer this. Don’t perform significance tests, just inspect the plots._____no_output_____
<code>
jurisdicitons=[]
counter=0
crimes_per_jurisdiction=[]
population=[]
agencies_df=crime_reports
agencies_df=crime_reports.sort_values('crimes_percapita',ascending=False)
for a in agencies_df.agency_jurisdiction.unique():
agencies_df["crimes_percapita_per_agency"]=agencies_df[agencies_df.agency_jurisdiction==jurisdiciton].crimes_percapita.sum()
agencies_df=agencies_df.sort_values('crimes_percapita_per_agency',ascending=True)
for jurisdiciton in agencies_df.agency_jurisdiction.unique():
jurisdicition_df=agencies_df[agencies_df.agency_jurisdiction==jurisdiciton]
all_crimes=jurisdicition_df.crimes_percapita.sum()
crimes_per_jurisdiction.append(all_crimes)
counter+=1
jurisdicitons.append(jurisdiciton)
population.append(jurisdicition_df.population.mean())
if counter==10:
break
df_plot=pd.DataFrame({'jurisdicitons':jurisdicitons,'num':crimes_per_jurisdiction})
df_plot=df_plot.sort_values('num',ascending=False,axis=0)
plt.bar(height=df_plot.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_plot.jurisdicitons,rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()
df_pop_plot=pd.DataFrame({'area':jurisdicitons,'num':population})
df_pop_plot=df_pop_plot.sort_values('num',ascending=False,axis=0)
plt.bar(height=df_pop_plot.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_pop_plot.area,rotation='vertical')
plt.ylabel("Population")
plt.show()_____no_output_____
</code>
We can see that the crimes per capita in Miami are the highest, contrary to the previous plot. However, there appears to be little correlation between the number of crimes per capita and the population._____no_output_____• “Criminal states”: Create the same type of chart as in the first subproblem, but use the states instead. You can get the state name in two ways: either the first two letters of the agency_code column or the symbols after the comma in the agency_jurisdiction column._____no_output_____
<code>
parts=crime_reports['agency_jurisdiction'].str.extract("(\w+), (\w+)", expand = True)
parts.columns=['something_else','state']
parts['state']
crime_reports['state']=parts['state']
crime_states=[]
total_crimes=[]
counter=0
gencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for state in crime_reports.state.unique():
jurisdicition_df=crime_reports[crime_reports.state==state]
all_crimes=jurisdicition_df.violent_crimes.sum()
total_crimes.append(all_crimes)
crime_states.append(state)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
plot_df=pd.DataFrame({'states':crime_states,'num':total_crimes})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states)
plt.ylabel("Number Of Crimes")
plt.show()
_____no_output_____
</code>
From the chart we can see that New York has the biggest number of crimes._____no_output_____• Hypothesis testing: Are crime rates per capita related to population, e. g. does a more densely populated community produce more crime (because there are more people), or less crime (because there is a better police force)? Plot the total number of crimes vs. population to find out. Is there any correlation? If so, what is it? Is the correlation significant?_____no_output_____
<code>
total_crimes=[]
agency_jurisdiction=[]
population=[]
counter=0
for jurisdiction in crime_reports.agency_jurisdiction.unique():
jurisdicition_df=crime_reports[crime_reports.agency_jurisdiction==jurisdiction]
all_crimes=jurisdicition_df.violent_crimes.sum()
total_crimes.append(all_crimes)
counter+=1
agency_jurisdiction.append(jurisdiction)
population.append(jurisdicition_df.population.mean())
if counter==10:
break
print(len(total_crimes),len(agency_jurisdiction))
plot_df=pd.DataFrame({'states':agency_jurisdiction,'num':total_crimes,'popu':population})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.popu,left=[1,2,3,4,5,6,7,8,9,10],align='center',color='r',label="Population")
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center',color='b',label="Crimes")
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states,rotation='vertical')
plt.ylabel("Number")
plt.legend()
plt.show()
_____no_output_____
</code>
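As a quick extra check on the correlation question, here is a small sketch using `scipy.stats` (imported as `st` at the top); it only uses the ten jurisdictions gathered in `plot_df` above, so it is an illustration rather than a full test:_____no_output_____
<code>
# Pearson correlation between population ('popu') and total crimes ('num') for the ten jurisdictions
r, p_value = st.pearsonr(plot_df.popu, plot_df.num)
print('Pearson r:', r, 'p-value:', p_value)_____no_output_____
</code>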
We can see that there isn't a correlation between the population and the crimes: some places like Atlanta, GA suggest there might be, but others like Baltimore County, MD show totally different results._____no_output_____## Additional data_____no_output_____
<code>
crimes=pd.read_csv("crimes.csv")
crimes=crimes.drop(['x','y','OBJECTID','ESRI_OID','Time'],axis=1)
crimes.columns=['publicaddress', 'controlnbr', 'CCN', 'precinct', 'reported_date',
'begin_date', 'offense', 'description', 'UCRCode', 'entered_date',
'long', 'lat', 'neighborhood', 'lastchanged', 'last_update_date']
crimes.dtypes
#2015-09-21T14:16:59.000Z
crimes['reported_date']=pd.to_datetime(crimes['reported_date'],format='%Y-%m-%d',errors='ignore')
crimes['entered_date']=pd.to_datetime(crimes['entered_date'],format='%Y-%m-%d',errors='ignore')
crimes['lastchanged']=pd.to_datetime(crimes['lastchanged'],format='%Y-%m-%d',errors='ignore')
crimes['last_update_date']=pd.to_datetime(crimes['last_update_date'],format='%Y-%m-%d',errors='ignore')
crimes['begin_date']=pd.to_datetime(crimes['begin_date'],format='%Y-%m-%d',errors='ignore')
crimes=crimes.dropna()_____no_output_____
</code>
• Total number of crimes per year: Count all crimes for years in the dataset (2010-2016). Print the total number._____no_output_____
<code>
print(str(len(crimes))+" "+"crimes between 2010 and 2016")_____no_output_____
</code>
• Plot how crimes evolve each year_____no_output_____
<code>
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center')
plt.ylabel("Number of Crimes")
plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.show()_____no_output_____
</code>
From 2010 to 2012 there is a slight rise in the number of crimes. However, from 2012 to 2016 there is a drop in the number of crimes committed._____no_output_____• Compare the previous plot to the plots in the previous exercise.
Note: In order to make comparison better, plot the data for all states again, but this time filter only years 2010-2016. Does the crime rate in MN have any connection to the total crime rate? What percentage of the total crime rate (in all given states) is given by MN?
_____no_output_____
<code>
crime_states=[]
total_crimes=[]
counter=0
gencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for state in crime_reports.state.unique():
jurisdicition_df=crime_reports[crime_reports.state==state]
right_year=jurisdicition_df[jurisdicition_df.report_year>2009]
all_crimes=right_year.violent_crimes.sum()
total_crimes.append(all_crimes)
crime_states.append(state)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
plot_df=pd.DataFrame({'states':crime_states,'num':total_crimes})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states)
plt.ylabel("Number Of Crimes")
plt.show()
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center')
plt.ylabel("Number of Crimes")
plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.show()
whole_number = sum(i for i in total_crimes)
print(str(len(crimes)/whole_number)+' '+'% from the total number of crimes committed between 2010 and 2016')_____no_output_____
</code>
• Cross-dataset matching: Get data from the previous dataset (crime rates in the US) again. This time, search only for MN and only for years 2010-2016. Do you have any results? If so, the results for total crime in MN should match in both datasets. Do they match?_____no_output_____
<code>
year_10n=4064.0
year_11n=3722.0
year_12n=3872.0
year_13n=4038.0
year_14n=4093.0
year_15n=0
year_16n=0
MN=crime_reports[crime_reports.state=="MN"]
MN=MN[MN.report_year>2009]
number_crimes=sum(MN.violent_crimes)
print(str(int(number_crimes))+" from the first data set")
print(str(len(crimes))+" "+"from the second data set")
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center',color='r',label="Second DataSet values")
plt.bar(height=[year_10n,year_11n,year_12n,year_13n,year_14n,year_15n,year_16n],left=[1,2,3,4,5,6,7],align='center',color='b',label="First DataSet values")
plt.legend()
plt.xticks([1,2,3,4,5,6,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.ylabel("Crimes")
plt.show()
_____no_output_____
</code>
The values in the first dataset only go up to 2014, and they are much smaller than those in the second. There is a big difference between the two._____no_output_____## Temporal Analysis_____no_output_____• Look at the crime categories. Which is the most popular crime category in MN overall?_____no_output_____
<code>
crimes.description.unique()
# start every crime category at zero, then count how often each appears below
d = {desc: 0 for desc in crimes.description.unique()}
for desc in crimes.description:
d[desc]+=1
sorted_d = sorted(d.items(), key=operator.itemgetter(1))
print(sorted_d)_____no_output_____
</code>
The most common type is "Other Theft", but since that label is so unclear, we can say that "Burglary Of Dwelling" is the most common specific type of theft._____no_output_____• Break down the data by months. Plot the total number of crimes for each month, summed over the years. Is there a seasonal component? Which month has the highest crime rate? Which has the smallest? Are the differences significant?
_____no_output_____
<code>
january=0
february=0
march=0
april=0
may=0
june=0
july=0
august=0
september=0
october=0
november=0
december=0
for date in crimes.begin_date:
if(date.month==1):
january+=1
elif(date.month==2):
february+=1
elif(date.month==3):
march+=1
elif(date.month==4):
april+=1
elif(date.month==5):
may+=1
elif(date.month==6):
june+=1
elif(date.month==7):
july+=1
elif(date.month==8):
august+=1
elif(date.month==9):
september+=1
elif(date.month==10):
october+=1
elif(date.month==11):
november+=1
elif(date.month==12):
december+=1
plt.bar(height=[january,february,march,april,may,june,july,august,september,october,november,december]
,left=[1,2,3,4,5,6,7,8,9,10,11,12],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10,11,12],
['january','february','march','april','may','june','july','august','september','october','november','december']
,rotation='vertical')
plt.ylabel("Number Of Crimes")
plt.show()_____no_output_____
</code>
We can see that most of the crimes happen in June and that there is a seasonal tendency: most of the crimes are committed in the summer._____no_output_____• Break the results by weekday. You can get the weekday from the date (there are functions for this). Do more crimes happen on
the weekends?_____no_output_____
<code>
Monday=0
Tuesday=0
Wednesday=0
Thursday=0
Friday=0
Saturday=0
Sunday=0
for date in crimes.begin_date:
if(date.weekday()==0):
Monday+=1
elif(date.weekday()==1):
Tuesday+=1
elif(date.weekday()==2):
Wednesday+=1
elif(date.weekday()==3):
Thursday+=1
elif(date.weekday()==4):
Friday+=1
elif(date.weekday()==5):
Saturday+=1
elif(date.weekday()==6):
Sunday+=1
plt.bar(height=[Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday]
,left=[1,2,3,4,5,6,7],align='center')
plt.xticks([1,2,3,4,5,6,7],['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'],rotation='vertical')
plt.ylabel("Number Of Crimes")
plt.show()_____no_output_____
</code>
Most crimes are committed on Fridays, followed by Thursdays._____no_output_____• Break the weekday data by crime type. Are certain types of crime more likely to happen on a given day? Comment your findings.
I have no time to complete this because I have a Programming Fundamentals exam to take, but I would make 7 plots, one for each day of the week, showing the top 10 types of crime for each day; a rough sketch of that approach is shown below._____no_output_____
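A minimal sketch of that approach (not run during the exam; it assumes the `crimes` dataframe and `plt` from the cells above, and the top-10 cutoff is simply the one described):_____no_output_____
<code>
# Sketch: count crimes by weekday and description, then plot the top 10 crime types per weekday
weekday_names = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']
by_day = crimes.copy()
by_day['weekday'] = [d.weekday() for d in by_day.begin_date]  # 0 = Monday ... 6 = Sunday
counts = by_day.groupby(['weekday','description']).size()
for day in range(7):
    top10 = counts.loc[day].sort_values(ascending=False).head(10)
    top10.plot(kind='bar', title=weekday_names[day])
    plt.ylabel('Number of Crimes')
    plt.show()_____no_output_____
</code>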
## 5. Significant Factors in Crime_____no_output_____
<code>
communities= pd.read_table("communities.data",sep=',',header=None)
communities.columns
communities_names= pd.read_table('communities.names',header=None)
communities_names_____no_output_____
</code>
| {
"repository": "SvetozarMateev/Data-Science",
"path": "DataScienceExam/Exam.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 41636,
"hexsha": "d04d1ea95774f042d0cda8fe7ae301a10bd39817",
"max_line_length": 1504,
"avg_line_length": 42.7035897436,
"alphanum_fraction": 0.6261648573
} |
# Notebook from ProteinsWebTeam/ebi-metagenomics-examples
Path: mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb
# American Gut Project example
This notebook was created from a question we received from a user of MGnify.
The question was:
```
I am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location.
However latitude and longitude do not appear to be searchable fields.
Is it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2.
```
Let's decompose the question:
- project "American Gut Project"
- Metadata filtering using the geographic location of a sample.
- Get samples for Hawaii: 20.5 - 20.7 ; -154.0 - -161.2
Each sample in MGnify is obtained from [ENA](https://www.ebi.ac.uk/ena).
## Get samples
The first step is to obtain the samples using [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search).
_____no_output_____
<code>
from pandas import DataFrame
import requests
base_url = 'https://www.ebi.ac.uk/ena/portal/api/search'
# parameters
params = {
'result': 'sample',
'query': ' AND '.join([
'geo_box1(16.9175,-158.4687,21.6593,-152.7969)',
'description="*American Gut Project*"'
]),
'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']),
'format': 'json',
}
response = requests.post(base_url, data=params)
agp_samples = response.json()
df = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon'))
df.index.name = 'accession'
for s in agp_samples:
df.loc[s.get('accession')] = [
s.get('secondary_sample_accession'),
s.get('lat'),
s.get('lon')
]
df
secondary_sample_accession lat lon
accession
SAMEA104163502 ERS1822520 19.6 -155.0
SAMEA104163503 ERS1822521 19.6 -155.0
SAMEA104163504 ERS1822522 19.6 -155.0
SAMEA104163505 ERS1822523 19.6 -155.0
SAMEA104163506 ERS1822524 19.6 -155.0
... ... ... ...
SAMEA4588733 ERS2409455 21.5 -157.8
SAMEA4588734 ERS2409456 21.5 -157.8
SAMEA4786501 ERS2606437 21.4 -157.7
SAMEA92368918 ERS1561273 19.4 -155.0
SAMEA92936668 ERS1562030 21.3 -157.7
[121 rows x 3 columns]
</code>
Now we can use the EMG API to get the information.
_____no_output_____
<code>
#!/bin/usr/env python
import requests
import sys
def get_links(data):
return data["links"]["related"]
if __name__ == "__main__":
samples_url = "https://www.ebi.ac.uk/metagenomics/api/v1/samples/"
tsv = sys.argv[1] if len(sys.argv) == 2 else None
if not tsv:
print("The first arg is the tsv file")
exit(1)
tsv_fh = open(tsv, "r")
# header
next(tsv_fh)
for record in tsv_fh:
# get the runs first
# mgnify references the secondary accession
_, sec_acc, *_ = record.split("\t")
samples_res = requests.get(samples_url + sec_acc)
if samples_res.status_code == 404:
print(sec_acc + " not found in MGnify")
continue
# then the analysis for that run
runs_url = get_links(samples_res.json()["data"]["relationships"]["runs"])
if not runs_url:
print("No runs for sample " + sec_acc)
continue
print("Getting the runs: " + runs_url)
run_res = requests.get(runs_url)
if run_res.status_code != 200:
            print(runs_url + " failed", file=sys.stderr)
continue
# iterate over the sample runs
run_data = run_res.json()
# this script doesn't consider pagination, it's just an example
        # there could be more than one page of runs
# use links -> next to get the next page
for run in run_data["data"]:
analyses_url = get_links(run["relationships"]["analyses"])
if not analyses_url:
print("No analyses for run " + run)
continue
analyses_res = requests.get(analyses_url)
if analyses_res.status_code != 200:
print(analyses_url + " failed", file=sys.stderr)
continue
# dump
print("Raw analyses data")
print(analyses_res.json())
print("=" * 30)
tsv_fh.close()_____no_output_____
</code>
| {
"repository": "ProteinsWebTeam/ebi-metagenomics-examples",
"path": "mgnify/src/notebooks/American_Gut_filter_based_in_location.ipynb",
"matched_keywords": [
"metagenomics"
],
"stars": 8,
"size": 6591,
"hexsha": "d04de5814488acdb696702cf7b4a52feea7212d0",
"max_line_length": 767,
"avg_line_length": 34.6894736842,
"alphanum_fraction": 0.4974965863
} |
# Notebook from JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Path: module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
# Lambda School Data Science - Loading, Cleaning and Visualizing Data
Objectives for today:
- Load data from multiple sources into a Python notebook
- From a URL (github or otherwise)
- CSV upload method
- !wget method
- "Clean" a dataset using common Python libraries
- Removing NaN values "Data Imputation"
- Create basic plots appropriate for different data types
- Scatter Plot
- Histogram
- Density Plot
- Pairplot (if we have time)_____no_output_____# Part 1 - Loading Data
Data comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.
Data set sources:
- https://archive.ics.uci.edu/ml/datasets.html
- https://github.com/awesomedata/awesome-public-datasets
- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)
Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags)._____no_output_____## Lecture example - flag data_____no_output_____
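As a preview of where we're headed, here is a minimal sketch of the most direct route — pandas can read a CSV straight from a URL (the UCI `flag.data` file has no header row, hence `header=None`). The cells below first find and inspect that file the way we would for any unfamiliar dataset._____no_output_____
<code>
# Quick sketch: load the flag data directly from its URL with pandas
import pandas as pd
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
flags = pd.read_csv(flag_data_url, header=None)
flags.head()_____no_output_____
</code>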
<code>
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something isAfghanistan,5,1,648,16,10,2,0,3,5,1,1,0,1,1,1,0,green,0,0,0,0,1,0,0,1,0,0,black,green
Albania,3,1,29,3,6,6,0,0,3,1,0,0,1,0,1,0,red,0,0,0,0,1,0,0,0,1,0,red,red
Algeria,4,1,2388,20,8,2,2,0,3,1,1,0,0,1,0,0,green,0,0,0,0,1,1,0,0,0,0,green,white
American-Samoa,6,3,0,0,1,1,0,0,5,1,0,1,1,1,0,1,blue,0,0,0,0,0,0,1,1,1,0,blue,red
Andorra,3,1,0,0,6,0,3,0,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,blue,red
Angola,4,2,1247,7,10,5,0,2,3,1,0,0,1,0,1,0,red,0,0,0,0,1,0,0,1,0,0,red,black
Anguilla,1,4,0,0,1,1,0,1,3,0,0,1,0,1,0,1,white,0,0,0,0,0,0,0,0,1,0,white,blue
Antigua-Barbuda,1,4,0,0,1,1,0,1,5,1,0,1,1,1,1,0,red,0,0,0,0,1,0,1,0,0,0,black,red
Argentina,2,3,2777,28,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Argentine,2,3,2777,28,2,0,0,3,3,0,0,1,1,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue
Australia,6,2,7690,15,1,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,1,1,6,0,0,0,0,0,white,blue
Austria,3,1,84,8,4,0,0,3,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red
Bahamas,1,4,19,0,1,1,0,3,3,0,0,1,1,0,1,0,blue,0,0,0,0,0,0,1,0,0,0,blue,blue
Bahrain,5,1,1,0,8,2,0,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,white,red
Bangladesh,5,1,143,90,6,2,0,0,2,1,1,0,0,0,0,0,green,1,0,0,0,0,0,0,0,0,0,green,green
Barbados,1,4,0,0,1,1,3,0,3,0,0,1,1,0,1,0,blue,0,0,0,0,0,0,0,1,0,0,blue,blue
Belgium,3,1,31,10,6,0,3,0,3,1,0,0,1,0,1,0,gold,0,0,0,0,0,0,0,0,0,0,black,red
Belize,1,4,23,0,1,1,0,2,8,1,1,1,1,1,1,1,blue,1,0,0,0,0,0,0,1,1,1,red,red
Benin,4,1,113,3,3,5,0,0,2,1,1,0,0,0,0,0,green,0,0,0,0,1,0,0,0,0,0,green,green
Bermuda,1,4,0,0,1,1,0,0,6,1,1,1,1,1,1,0,red,1,1,1,1,0,0,0,1,1,0,white,red
Bhutan,5,1,47,1,10,3,0,0,4,1,0,0,0,1,1,1,orange,4,0,0,0,0,0,0,0,1,0,orange,red
Bolivia,2,3,1099,6,2,0,0,3,3,1,1,0,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green
Botswana,4,2,600,1,10,5,0,5,3,0,0,1,0,1,1,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Brazil,2,3,8512,119,6,0,0,0,4,0,1,1,1,1,0,0,green,1,0,0,0,22,0,0,0,0,1,green,green
British-Virgin-Isles,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,1,1,white,blue
Brunei,5,1,6,0,10,2,0,0,4,1,0,0,1,1,1,0,gold,0,0,0,0,0,0,1,1,1,1,white,gold
Bulgaria,3,1,111,9,5,6,0,3,5,1,1,1,1,1,0,0,red,0,0,0,0,1,0,0,1,1,0,white,red
Burkina,4,4,274,7,3,5,0,2,3,1,1,0,1,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,green
Burma,5,1,678,35,10,3,0,0,3,1,0,1,0,1,0,0,red,0,0,0,1,14,0,0,1,1,0,blue,red
Burundi,4,2,28,4,10,5,0,0,3,1,1,0,0,1,0,0,red,1,0,1,0,3,0,0,0,0,0,white,white
Cameroon,4,1,474,8,3,1,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,1,0,0,0,0,0,green,gold
Canada,1,4,9976,24,1,1,2,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,1,0,red,red
Cape-Verde-Islands,4,4,4,0,6,0,1,2,5,1,1,0,1,0,1,1,gold,0,0,0,0,1,0,0,0,1,0,red,green
Cayman-Islands,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,1,1,1,1,4,0,0,1,1,1,white,blue
Central-African-Republic,4,1,623,2,10,5,1,0,5,1,1,1,1,1,0,0,gold,0,0,0,0,1,0,0,0,0,0,blue,gold
Chad,4,1,1284,4,3,5,3,0,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,blue,red
Chile,2,3,757,11,2,0,0,2,3,1,0,1,0,1,0,0,red,0,0,0,1,1,0,0,0,0,0,blue,red
China,5,1,9561,1008,7,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,5,0,0,0,0,0,red,red
Colombia,2,4,1139,28,2,0,0,3,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,gold,red
Comorro-Islands,4,2,2,0,3,2,0,0,2,0,1,0,0,1,0,0,green,0,0,0,0,4,1,0,0,0,0,green,green
Congo,4,2,342,2,10,5,0,0,3,1,1,0,1,0,0,0,red,0,0,0,0,1,0,0,1,1,0,red,red
Cook-Islands,6,3,0,0,1,1,0,0,4,1,0,1,0,1,0,0,blue,1,1,1,1,15,0,0,0,0,0,white,blue
Costa-Rica,1,4,51,2,2,0,0,5,3,1,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Cuba,1,4,115,10,2,6,0,5,3,1,0,1,0,1,0,0,blue,0,0,0,0,1,0,1,0,0,0,blue,blue
Cyprus,3,1,9,1,6,1,0,0,3,0,1,0,1,1,0,0,white,0,0,0,0,0,0,0,1,1,0,white,white
Czechoslovakia,3,1,128,15,5,6,0,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,1,0,0,0,white,red
Denmark,3,1,43,5,6,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red
Djibouti,4,1,22,0,3,2,0,0,4,1,1,1,0,1,0,0,blue,0,0,0,0,1,0,1,0,0,0,white,green
Dominica,1,4,0,0,1,1,0,0,6,1,1,1,1,1,1,0,green,1,0,0,0,10,0,0,0,1,0,green,green
Dominican-Republic,1,4,49,6,2,0,0,0,3,1,0,1,0,1,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue
Ecuador,2,3,284,8,2,0,0,3,3,1,0,1,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,gold,red
Egypt,4,1,1001,47,8,2,0,3,4,1,0,0,1,1,1,0,black,0,0,0,0,0,0,0,0,1,1,red,black
El-Salvador,1,4,21,5,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Equatorial-Guinea,4,1,28,0,10,5,0,3,4,1,1,1,0,1,0,0,green,0,0,0,0,0,0,1,0,0,0,green,red
Ethiopia,4,1,1222,31,10,1,0,3,3,1,1,0,1,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,red
Faeroes,3,4,1,0,6,1,0,0,3,1,0,1,0,1,0,0,white,0,1,0,0,0,0,0,0,0,0,white,white
Falklands-Malvinas,2,3,12,0,1,1,0,0,6,1,1,1,1,1,0,0,blue,1,1,1,1,0,0,0,1,1,1,white,blue
Fiji,6,2,18,1,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,2,1,1,0,0,0,1,1,0,white,blue
Finland,3,1,337,5,9,1,0,0,2,0,0,1,0,1,0,0,white,0,1,0,0,0,0,0,0,0,0,white,white
France,3,1,547,54,3,0,3,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,blue,red
French-Guiana,2,4,91,0,3,0,3,0,3,1,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,blue,red
French-Polynesia,6,3,4,0,3,0,0,3,5,1,0,1,1,1,1,0,red,1,0,0,0,1,0,0,1,0,0,red,red
Gabon,4,2,268,1,10,5,0,3,3,0,1,1,1,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,blue
Gambia,4,4,10,1,1,5,0,5,4,1,1,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green
Germany-DDR,3,1,108,17,4,6,0,3,3,1,0,0,1,0,1,0,gold,0,0,0,0,0,0,0,1,0,0,black,gold
Germany-FRG,3,1,249,61,4,1,0,3,3,1,0,0,1,0,1,0,black,0,0,0,0,0,0,0,0,0,0,black,gold
Ghana,4,4,239,14,1,5,0,3,4,1,1,0,1,0,1,0,red,0,0,0,0,1,0,0,0,0,0,red,green
Gibraltar,3,4,0,0,1,1,0,1,3,1,0,0,1,1,0,0,white,0,0,0,0,0,0,0,1,0,0,white,red
Greece,3,1,132,10,6,1,0,9,2,0,0,1,0,1,0,0,blue,0,1,0,1,0,0,0,0,0,0,blue,blue
Greenland,1,4,2176,0,6,1,0,0,2,1,0,0,0,1,0,0,white,1,0,0,0,0,0,0,0,0,0,white,red
Grenada,1,4,0,0,1,1,0,0,3,1,1,0,1,0,0,0,gold,1,0,0,0,7,0,1,0,1,0,red,red
Guam,6,1,0,0,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,0,0,0,0,0,0,1,1,1,red,red
Guatemala,1,4,109,8,2,0,3,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Guinea,4,4,246,6,3,2,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,red,green
Guinea-Bissau,4,4,36,1,6,5,1,2,4,1,1,0,1,0,1,0,gold,0,0,0,0,1,0,0,0,0,0,red,green
Guyana,2,4,215,1,1,4,0,0,5,1,1,0,1,1,1,0,green,0,0,0,0,0,0,1,0,0,0,black,green
Haiti,1,4,28,6,3,0,2,0,2,1,0,0,0,0,1,0,black,0,0,0,0,0,0,0,0,0,0,black,red
Honduras,1,4,112,4,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,5,0,0,0,0,0,blue,blue
Hong-Kong,5,1,1,5,7,3,0,0,6,1,1,1,1,1,0,1,blue,1,1,1,1,0,0,0,1,1,1,white,blue
Hungary,3,1,93,11,9,6,0,3,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green
Iceland,3,4,103,0,6,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue
India,5,1,3268,684,6,4,0,3,4,0,1,1,0,1,0,1,orange,1,0,0,0,0,0,0,1,0,0,orange,green
Indonesia,6,2,1904,157,10,2,0,2,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,white
Iran,5,1,1648,39,6,2,0,3,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,1,0,1,green,red
Iraq,5,1,435,14,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,3,0,0,0,0,0,red,black
Ireland,3,4,70,3,1,0,3,0,3,0,1,0,0,1,0,1,white,0,0,0,0,0,0,0,0,0,0,green,orange
Israel,5,1,21,4,10,7,0,2,2,0,0,1,0,1,0,0,white,0,0,0,0,1,0,0,0,0,0,blue,blue
Italy,3,1,301,57,6,0,3,0,3,1,1,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,green,red
Ivory-Coast,4,4,323,7,3,5,3,0,3,1,1,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,red,green
Jamaica,1,4,11,2,1,1,0,0,3,0,1,0,1,0,1,0,green,0,0,1,0,0,0,1,0,0,0,gold,gold
Japan,5,1,372,118,9,7,0,0,2,1,0,0,0,1,0,0,white,1,0,0,0,1,0,0,0,0,0,white,white
Jordan,5,1,98,2,8,2,0,3,4,1,1,0,0,1,1,0,black,0,0,0,0,1,0,1,0,0,0,black,green
Kampuchea,5,1,181,6,10,3,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,0,0,0,1,0,0,red,red
Kenya,4,1,583,17,10,5,0,5,4,1,1,0,0,1,1,0,red,1,0,0,0,0,0,0,1,0,0,black,green
Kiribati,6,1,0,0,1,1,0,0,4,1,0,1,1,1,0,0,red,0,0,0,0,1,0,0,1,1,0,red,blue
Kuwait,5,1,18,2,8,2,0,3,4,1,1,0,0,1,1,0,green,0,0,0,0,0,0,0,0,0,0,green,red
Laos,5,1,236,3,10,6,0,3,3,1,0,1,0,1,0,0,red,1,0,0,0,0,0,0,0,0,0,red,red
Lebanon,5,1,10,3,8,2,0,2,4,1,1,0,0,1,0,1,red,0,0,0,0,0,0,0,0,1,0,red,red
Lesotho,4,2,30,1,10,5,2,0,4,1,1,1,0,1,0,0,blue,0,0,0,0,0,0,0,1,0,0,green,blue
Liberia,4,4,111,1,10,5,0,11,3,1,0,1,0,1,0,0,red,0,0,0,1,1,0,0,0,0,0,blue,red
Libya,4,1,1760,3,8,2,0,0,1,0,1,0,0,0,0,0,green,0,0,0,0,0,0,0,0,0,0,green,green
Liechtenstein,3,1,0,0,4,0,0,2,3,1,0,1,1,0,0,0,red,0,0,0,0,0,0,0,1,0,0,blue,red
Luxembourg,3,1,3,0,4,0,0,3,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,blue
Malagasy,4,2,587,9,10,1,1,2,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,white,green
Malawi,4,2,118,6,10,5,0,3,3,1,1,0,0,0,1,0,red,0,0,0,0,1,0,0,0,0,0,black,green
Malaysia,5,1,333,13,10,2,0,14,4,1,0,1,1,1,0,0,red,0,0,0,1,1,1,0,0,0,0,blue,white
Maldive-Islands,5,1,0,0,10,2,0,0,3,1,1,0,0,1,0,0,red,0,0,0,0,0,1,0,0,0,0,red,red
Mali,4,4,1240,7,3,2,3,0,3,1,1,0,1,0,0,0,gold,0,0,0,0,0,0,0,0,0,0,green,red
Malta,3,1,0,0,10,0,2,0,3,1,0,0,0,1,1,0,red,0,1,0,0,0,0,0,1,0,0,white,red
Marianas,6,1,0,0,10,1,0,0,3,0,0,1,0,1,0,0,blue,0,0,0,0,1,0,0,1,0,0,blue,blue
Mauritania,4,4,1031,2,8,2,0,0,2,0,1,0,1,0,0,0,green,0,0,0,0,1,1,0,0,0,0,green,green
Mauritius,4,2,2,1,1,4,0,4,4,1,1,1,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green
Mexico,1,4,1973,77,2,0,3,0,4,1,1,0,0,1,0,1,green,0,0,0,0,0,0,0,0,1,0,green,red
Micronesia,6,1,1,0,10,1,0,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,4,0,0,0,0,0,blue,blue
Monaco,3,1,0,0,3,0,0,2,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,white
Mongolia,5,1,1566,2,10,6,3,0,3,1,0,1,1,0,0,0,red,2,0,0,0,1,1,1,1,0,0,red,red
Montserrat,1,4,0,0,1,1,0,0,7,1,1,1,1,1,1,0,blue,0,2,1,1,0,0,0,1,1,0,white,blue
Morocco,4,4,447,20,8,2,0,0,2,1,1,0,0,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,red
Mozambique,4,2,783,12,10,5,0,5,5,1,1,0,1,1,1,0,gold,0,0,0,0,1,0,1,1,0,0,green,gold
Nauru,6,2,0,0,10,1,0,3,3,0,0,1,1,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue
Nepal,5,1,140,16,10,4,0,0,3,0,0,1,0,1,0,1,brown,0,0,0,0,2,1,0,0,0,0,blue,blue
Netherlands,3,1,41,14,6,1,0,3,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,blue
Netherlands-Antilles,1,4,0,0,6,1,0,1,3,1,0,1,0,1,0,0,white,0,0,0,0,6,0,0,0,0,0,white,white
New-Zealand,6,2,268,2,1,1,0,0,3,1,0,1,0,1,0,0,blue,0,1,1,1,4,0,0,0,0,0,white,blue
Nicaragua,1,4,128,3,2,0,0,3,2,0,0,1,0,1,0,0,blue,0,0,0,0,0,0,0,0,0,0,blue,blue
Niger,4,1,1267,5,3,2,0,3,3,0,1,0,0,1,0,1,orange,1,0,0,0,0,0,0,0,0,0,orange,green
Nigeria,4,1,925,56,10,2,3,0,2,0,1,0,0,1,0,0,green,0,0,0,0,0,0,0,0,0,0,green,green
Niue,6,3,0,0,1,1,0,0,4,1,0,1,1,1,0,0,gold,1,1,1,1,5,0,0,0,0,0,white,gold
North-Korea,5,1,121,18,10,6,0,5,3,1,0,1,0,1,0,0,blue,1,0,0,0,1,0,0,0,0,0,blue,blue
North-Yemen,5,1,195,9,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,1,0,0,0,0,0,red,black
Norway,3,1,324,4,6,1,0,0,3,1,0,1,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red
Oman,5,1,212,1,8,2,0,2,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,1,0,0,red,green
Pakistan,5,1,804,84,6,2,1,0,2,0,1,0,0,1,0,0,green,0,0,0,0,1,1,0,0,0,0,white,green
Panama,2,4,76,2,2,0,0,0,3,1,0,1,0,1,0,0,red,0,0,0,4,2,0,0,0,0,0,white,white
Papua-New-Guinea,6,2,463,3,1,5,0,0,4,1,0,0,1,1,1,0,black,0,0,0,0,5,0,1,0,1,0,red,black
Parguay,2,3,407,3,2,0,0,3,6,1,1,1,1,1,1,0,red,1,0,0,0,1,0,0,1,1,1,red,blue
Peru,2,3,1285,14,2,0,3,0,2,1,0,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red
Philippines,6,1,300,48,10,0,0,0,4,1,0,1,1,1,0,0,blue,0,0,0,0,4,0,1,0,0,0,blue,red
Poland,3,1,313,36,5,6,0,2,2,1,0,0,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,white,red
Portugal,3,4,92,10,6,0,0,0,5,1,1,1,1,1,0,0,red,1,0,0,0,0,0,0,1,0,0,green,red
Puerto-Rico,1,4,9,3,2,0,0,5,3,1,0,1,0,1,0,0,red,0,0,0,0,1,0,1,0,0,0,red,red
Qatar,5,1,11,0,8,2,0,0,2,0,0,0,0,1,0,1,brown,0,0,0,0,0,0,0,0,0,0,white,brown
Romania,3,1,237,22,6,6,3,0,7,1,1,1,1,1,0,1,red,0,0,0,0,2,0,0,1,1,1,blue,red
Rwanda,4,2,26,5,10,5,3,0,4,1,1,0,1,0,1,0,red,0,0,0,0,0,0,0,0,0,1,red,green
San-Marino,3,1,0,0,6,0,0,2,2,0,0,1,0,1,0,0,white,0,0,0,0,0,0,0,0,0,0,white,blue
Sao-Tome,4,1,0,0,6,0,0,3,4,1,1,0,1,0,1,0,green,0,0,0,0,2,0,1,0,0,0,green,green
Saudi-Arabia,5,1,2150,9,8,2,0,0,2,0,1,0,0,1,0,0,green,0,0,0,0,0,0,0,1,0,1,green,green
Senegal,4,4,196,6,3,2,3,0,3,1,1,0,1,0,0,0,green,0,0,0,0,1,0,0,0,0,0,green,red
Seychelles,4,2,0,0,1,1,0,0,3,1,1,0,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,green
Sierra-Leone,4,4,72,3,1,5,0,3,3,0,1,1,0,1,0,0,green,0,0,0,0,0,0,0,0,0,0,green,blue
Singapore,5,1,1,3,7,3,0,2,2,1,0,0,0,1,0,0,white,0,0,0,0,5,1,0,0,0,0,red,white
Soloman-Islands,6,2,30,0,1,1,0,0,4,0,1,1,1,1,0,0,green,0,0,0,0,5,0,1,0,0,0,blue,green
Somalia,4,1,637,5,10,2,0,0,2,0,0,1,0,1,0,0,blue,0,0,0,0,1,0,0,0,0,0,blue,blue
South-Africa,4,2,1221,29,6,1,0,3,5,1,1,1,0,1,0,1,orange,0,1,1,0,0,0,0,0,0,0,orange,blue
South-Korea,5,1,99,39,10,7,0,0,4,1,0,1,0,1,1,0,white,1,0,0,0,0,0,0,1,0,0,white,white
South-Yemen,5,1,288,2,8,2,0,3,4,1,0,1,0,1,1,0,red,0,0,0,0,1,0,1,0,0,0,red,black
Spain,3,4,505,38,2,0,0,3,2,1,0,0,1,0,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red
Sri-Lanka,5,1,66,15,10,3,2,0,4,0,1,0,1,0,0,1,gold,0,0,0,0,0,0,0,1,1,0,gold,gold
St-Helena,4,3,0,0,1,1,0,0,7,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,0,0,white,blue
St-Kitts-Nevis,1,4,0,0,1,1,0,0,5,1,1,0,1,1,1,0,green,0,0,0,0,2,0,1,0,0,0,green,red
St-Lucia,1,4,0,0,1,1,0,0,4,0,0,1,1,1,1,0,blue,0,0,0,0,0,0,1,0,0,0,blue,blue
St-Vincent,1,4,0,0,1,1,5,0,4,0,1,1,1,1,0,0,green,0,0,0,0,0,0,0,1,1,1,blue,green
Sudan,4,1,2506,20,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,0,0,1,0,0,0,red,black
Surinam,2,4,63,0,6,1,0,5,4,1,1,0,1,1,0,0,red,0,0,0,0,1,0,0,0,0,0,green,green
Swaziland,4,2,17,1,10,1,0,5,7,1,0,1,1,1,1,1,blue,0,0,0,0,0,0,0,1,0,0,blue,blue
Sweden,3,1,450,8,6,1,0,0,2,0,0,1,1,0,0,0,blue,0,1,0,0,0,0,0,0,0,0,blue,blue
Switzerland,3,1,41,6,4,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,0,0,0,0,0,0,0,red,red
Syria,5,1,185,10,8,2,0,3,4,1,1,0,0,1,1,0,red,0,0,0,0,2,0,0,0,0,0,red,black
Taiwan,5,1,36,18,7,3,0,0,3,1,0,1,0,1,0,0,red,1,0,0,1,1,0,0,0,0,0,blue,red
Tanzania,4,2,945,18,10,5,0,0,4,0,1,1,1,0,1,0,green,0,0,0,0,0,0,1,0,0,0,green,blue
Thailand,5,1,514,49,10,3,0,5,3,1,0,1,0,1,0,0,red,0,0,0,0,0,0,0,0,0,0,red,red
Togo,4,1,57,2,3,7,0,5,4,1,1,0,1,1,0,0,green,0,0,0,1,1,0,0,0,0,0,red,green
Tonga,6,2,1,0,10,1,0,0,2,1,0,0,0,1,0,0,red,0,1,0,1,0,0,0,0,0,0,white,red
Trinidad-Tobago,2,4,5,1,1,1,0,0,3,1,0,0,0,1,1,0,red,0,0,0,0,0,0,1,0,0,0,white,white
Tunisia,4,1,164,7,8,2,0,0,2,1,0,0,0,1,0,0,red,1,0,0,0,1,1,0,0,0,0,red,red
Turkey,5,1,781,45,9,2,0,0,2,1,0,0,0,1,0,0,red,0,0,0,0,1,1,0,0,0,0,red,red
Turks-Cocos-Islands,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,1,blue,0,1,1,1,0,0,0,1,1,0,white,blue
Tuvalu,6,2,0,0,1,1,0,0,5,1,0,1,1,1,0,0,blue,0,1,1,1,9,0,0,0,0,0,white,blue
UAE,5,1,84,1,8,2,1,3,4,1,1,0,0,1,1,0,green,0,0,0,0,0,0,0,0,0,0,red,black
Uganda,4,1,236,13,10,5,0,6,5,1,0,0,1,1,1,0,gold,1,0,0,0,0,0,0,0,1,0,black,red
UK,3,4,245,56,1,1,0,0,3,1,0,1,0,1,0,0,red,0,1,1,0,0,0,0,0,0,0,white,red
Uruguay,2,3,178,3,2,0,0,9,3,0,0,1,1,1,0,0,white,0,0,0,1,1,0,0,0,0,0,white,white
US-Virgin-Isles,1,4,0,0,1,1,0,0,6,1,1,1,1,1,0,0,white,0,0,0,0,0,0,0,1,1,1,white,white
USA,1,4,9363,231,1,1,0,13,3,1,0,1,0,1,0,0,white,0,0,0,1,50,0,0,0,0,0,blue,red
USSR,5,1,22402,274,5,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,1,0,0,1,0,0,red,red
Vanuatu,6,2,15,0,6,1,0,0,4,1,1,0,1,0,1,0,red,0,0,0,0,0,0,1,0,1,0,black,green
Vatican-City,3,1,0,0,6,0,2,0,4,1,0,0,1,1,1,0,gold,0,0,0,0,0,0,0,1,0,0,gold,white
Venezuela,2,4,912,15,2,0,0,3,7,1,1,1,1,1,1,1,red,0,0,0,0,7,0,0,1,1,0,gold,red
Vietnam,5,1,333,60,10,6,0,0,2,1,0,0,1,0,0,0,red,0,0,0,0,1,0,0,0,0,0,red,red
Western-Samoa,6,3,3,0,1,1,0,0,3,1,0,1,0,1,0,0,red,0,0,0,1,5,0,0,0,0,0,blue,red
Yugoslavia,3,1,256,22,6,6,0,3,4,1,0,1,1,1,0,0,red,0,0,0,0,1,0,0,0,0,0,blue,red
Zaire,4,2,905,28,10,5,0,0,4,1,1,0,1,0,0,1,green,1,0,0,0,0,0,0,1,1,0,green,green
Zambia,4,2,753,6,10,5,3,0,4,1,1,0,0,0,1,1,green,0,0,0,0,0,0,0,0,1,0,green,brown
Zimbabwe,4,2,391,8,10,5,0,7,5,1,1,0,1,1,1,0,green,0,0,0,0,1,0,1,1,1,0,green,green
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)_____no_output_____# Step 3 - verify we've got *something*
flag_data.head()_____no_output_____# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()_____no_output_____!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc'wc' is not recognized as an internal or external command,
operable program or batch file.
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)Help on function read_csv in module pandas.io.parsers:
read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)
Read a comma-separated values (csv) file into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for
`IO Tools <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
Parameters
----------
filepath_or_buffer : str, path object, or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts either
``pathlib.Path`` or ``py._path.local.LocalPath``.
By file-like object, we refer to objects with a ``read()`` method, such as
a file handler (e.g. via builtin ``open`` function) or ``StringIO``.
sep : str, default ','
Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will
be used and automatically detect the separator by Python's builtin sniffer
tool, ``csv.Sniffer``. In addition, separators longer than 1 character and
different from ``'\s+'`` will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``.
delimiter : str, default ``None``
Alias for sep.
header : int, list of int, default 'infer'
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names
are passed the behavior is identical to ``header=0`` and column
names are inferred from the first line of the file, if column
names are passed explicitly then the behavior is identical to
``header=None``. Explicitly pass ``header=0`` to be able to
replace existing names. The header can be a list of integers that
specify row locations for a multi-index on the columns
e.g. [0,1,3]. Intervening rows that are not specified will be
skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if
``skip_blank_lines=True``, so ``header=0`` denotes the first line of
data rather than the first line of the file.
names : array-like, optional
List of column names to use. If file contains no header row, then you
should explicitly pass ``header=None``. Duplicates in this list will cause
a ``UserWarning`` to be issued.
index_col : int, sequence or bool, optional
Column to use as the row labels of the DataFrame. If a sequence is given, a
MultiIndex is used. If you have a malformed file with delimiters at the end
of each line, you might consider ``index_col=False`` to force pandas to
not use the first column as the index (row names).
usecols : list-like or callable, optional
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in `names` or
inferred from the document header row(s). For example, a valid list-like
`usecols` parameter would be ``[0, 1, 2]`` or ``['foo', 'bar', 'baz']``.
Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.
To instantiate a DataFrame from ``data`` with element order preserved use
``pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']]`` for columns
in ``['foo', 'bar']`` order or
``pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]``
for ``['bar', 'foo']`` order.
If callable, the callable function will be evaluated against the column
names, returning names where the callable function evaluates to True. An
example of a valid callable argument would be ``lambda x: x.upper() in
['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
parsing time and lower memory usage.
squeeze : bool, default False
If the parsed data only contains one column then return a Series.
prefix : str, optional
Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
mangle_dupe_cols : bool, default True
Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
'X'...'X'. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
dtype : Type name or dict of column -> type, optional
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32,
'c': 'Int64'}
Use `str` or `object` together with suitable `na_values` settings
to preserve and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
engine : {'c', 'python'}, optional
Parser engine to use. The C engine is faster while the python engine is
currently more feature-complete.
converters : dict, optional
Dict of functions for converting values in certain columns. Keys can either
be integers or column labels.
true_values : list, optional
Values to consider as True.
false_values : list, optional
Values to consider as False.
skipinitialspace : bool, default False
Skip spaces after delimiter.
skiprows : list-like, int or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int)
at the start of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise.
An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine='c').
nrows : int, optional
Number of rows of file to read. Useful for reading pieces of large files.
na_values : scalar, str, list-like, or dict, optional
Additional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted as
NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
'1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan',
'null'.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether `na_values` is passed in, the behavior is as follows:
* If `keep_default_na` is True, and `na_values` are specified, `na_values`
is appended to the default NaN values used for parsing.
* If `keep_default_na` is True, and `na_values` are not specified, only
the default NaN values are used for parsing.
* If `keep_default_na` is False, and `na_values` are specified, only
the NaN values specified `na_values` are used for parsing.
* If `keep_default_na` is False, and `na_values` are not specified, no
strings will be parsed as NaN.
Note that if `na_filter` is passed in as False, the `keep_default_na` and
`na_values` parameters will be ignored.
na_filter : bool, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : bool, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : bool, default True
If True, skip over blank lines rather than interpreting as NaN values.
parse_dates : bool or list of int or names or list of lists or dict, default False
The behavior is as follows:
* boolean. If True -> try parsing the index.
* list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
* list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
* dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call
result 'foo'
If a column or index cannot be represented as an array of datetimes,
say because of an unparseable value or a mixture of timezones, the column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use ``pd.to_datetime`` after
``pd.read_csv``. To parse an index or column with a mixture of timezones,
specify ``date_parser`` to be a partially-applied
:func:`pandas.to_datetime` with ``utc=True``. See
:ref:`io.csv.mixed_timezones` for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : bool, default False
If True and `parse_dates` is enabled, pandas will attempt to infer the
format of the datetime strings in the columns, and if it can be inferred,
switch to a faster method of parsing them. In some cases this can increase
the parsing speed by 5-10x.
keep_date_col : bool, default False
If True and `parse_dates` specifies combining multiple columns then
keep the original columns.
date_parser : function, optional
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses ``dateutil.parser.parser`` to do the
conversion. Pandas will try to call `date_parser` in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by `parse_dates` into a single array
and pass that; and 3) call `date_parser` once for each row using one or
more strings (corresponding to the columns defined by `parse_dates`) as
arguments.
dayfirst : bool, default False
DD/MM format dates, international and European format.
iterator : bool, default False
Return TextFileReader object for iteration or getting chunks with
``get_chunk()``.
chunksize : int, optional
Return TextFileReader object for iteration.
See the `IO Tools docs
<http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
for more information on ``iterator`` and ``chunksize``.
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer' and
`filepath_or_buffer` is path-like, then detect compression from the
following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no
decompression). If using 'zip', the ZIP file must contain only one data
file to be read in. Set to None for no decompression.
.. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
thousands : str, optional
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point (e.g. use ',' for European data).
lineterminator : str (length 1), optional
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted
items can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote : bool, default ``True``
When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
whether or not to interpret two consecutive quotechar elements INSIDE a
field as a single ``quotechar`` element.
escapechar : str (length 1), optional
One-character string used to escape other characters.
comment : str, optional
Indicates remainder of line should not be parsed. If found at the beginning
of a line, the line will be ignored altogether. This parameter must be a
single character. Like empty lines (as long as ``skip_blank_lines=True``),
fully commented lines are ignored by the parameter `header` but not by
`skiprows`. For example, if ``comment='#'``, parsing
``#empty\na,b,c\n1,2,3`` with ``header=0`` will result in 'a,b,c' being
treated as the header.
encoding : str, optional
Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
standard encodings
<https://docs.python.org/3/library/codecs.html#standard-encodings>`_ .
dialect : str or csv.Dialect, optional
If provided, this parameter will override values (default or not) for the
following parameters: `delimiter`, `doublequote`, `escapechar`,
`skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
tupleize_cols : bool, default False
Leave a list of tuples on columns as is (default is to convert to
a MultiIndex on the columns).
.. deprecated:: 0.21.0
This argument will be removed and will always convert to MultiIndex
error_bad_lines : bool, default True
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned.
If False, then these "bad lines" will dropped from the DataFrame that is
returned.
warn_bad_lines : bool, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each
"bad line" will be output.
delim_whitespace : bool, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``' '``) will be
used as the sep. Equivalent to setting ``sep='\s+'``. If this option
is set to True, nothing should be passed in for the ``delimiter``
parameter.
.. versionadded:: 0.18.1 support for the Python parser.
low_memory : bool, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the `dtype` parameter.
Note that the entire file is read into a single DataFrame regardless,
use the `chunksize` or `iterator` parameter to return the data in chunks.
(Only valid with C parser).
memory_map : bool, default False
If a filepath is provided for `filepath_or_buffer`, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
float_precision : str, optional
Specifies which converter the C engine should use for floating-point
values. The options are `None` for the ordinary converter,
`high` for the high-precision converter, and `round_trip` for the
round-trip converter.
Returns
-------
DataFrame or TextParser
A comma-separated values (csv) file is returned as two-dimensional
data structure with labeled axes.
See Also
--------
to_csv : Write DataFrame to a comma-separated values (csv) file.
read_csv : Read a comma-separated values (csv) file into DataFrame.
read_fwf : Read a table of fixed-width formatted lines into DataFrame.
Examples
--------
>>> pd.read_csv('data.csv') # doctest: +SKIP
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()_____no_output_____flag_data.count()_____no_output_____flag_data.isna().sum()_____no_output_____
</code>
### Yes, but what does it *mean*?
This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).
```
1. name: Name of the country concerned
2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania
3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
4. area: in thousands of square km
5. population: in round millions
6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
8. bars: Number of vertical bars in the flag
9. stripes: Number of horizontal stripes in the flag
10. colours: Number of different colours in the flag
11. red: 0 if red absent, 1 if red present in the flag
12. green: same for green
13. blue: same for blue
14. gold: same for gold (also yellow)
15. white: same for white
16. black: same for black
17. orange: same for orange (also brown)
18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
19. circles: Number of circles in the flag
20. crosses: Number of (upright) crosses
21. saltires: Number of diagonal crosses
22. quarters: Number of quartered sections
23. sunstars: Number of sun or star symbols
24. crescent: 1 if a crescent moon symbol present, else 0
25. triangle: 1 if any triangles present, 0 otherwise
26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
29. topleft: colour in the top-left corner (moving right to decide tie-breaks)
30. botright: Colour in the bottom-right corner (moving left to decide tie-breaks)
```
Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1..._____no_output_____## Steps of Loading and Exploring a Dataset:
- Find a dataset that looks interesting
- Learn what you can about it
- What's in it?
- How many rows and columns?
- What types of variables?
- Look at the raw contents of the file
- Load it into your workspace (notebook)
- Handle any challenges with headers
- Handle any problems with missing values
- Then you can start to explore the data
- Look at the summary statistics
- Look at counts of different categories
- Make some plots to look at the distribution of the data_____no_output_____## 3 ways of loading a dataset_____no_output_____### From its URL_____no_output_____
<code>
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week',
'native-country', 'income']
df = pd.read_csv(dataset_url, names=column_headers)
print(df.shape)
df.head()(32561, 15)
</code>
### From a local file_____no_output_____
<code>
from google.colab import files
uploaded = files.upload()  # opens an upload widget in Colab_____no_output_____
</code>
### Using the `!wget` command_____no_output_____
<code>
# shell command (note the leading !) - no Python import needed
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data_____no_output_____
</code>
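Once the `!wget` call above has fetched the file, the local copy can be read the same way as the URL version — a minimal sketch, assuming the download succeeded and reusing the `column_headers` list defined earlier (`df_local` is just an illustrative name):_____no_output_____
<code>
# Read the locally downloaded file; 'adult.data' is the filename from the URL above
df_local = pd.read_csv('adult.data', names=column_headers)
print(df_local.shape)_____no_output_____
</code>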
# Part 2 - Deal with Missing Values_____no_output_____## Diagnose Missing Values
Let's use the Adult Dataset from UCI. <https://github.com/ryanleeallred/datasets>_____no_output_____
<code>
df.isnull().sum()_____no_output_____
</code>
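`df.isnull().sum()` likely reports zero missing values at this point, because in this file the missing entries are encoded as the literal string `' ?'` rather than as NaN (the `na_values=[' ?']` argument used below confirms this). A minimal sketch of one way to spot such placeholder values is to inspect the value counts of a categorical column such as `workclass`:_____no_output_____
<code>
# ' ?' appears as its own category until it is declared a missing-value marker
df['workclass'].value_counts()_____no_output_____
</code>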
## Fill Missing Values_____no_output_____
<code>
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week',
'native-country', 'income']
df = pd.read_csv(dataset_url, names=column_headers, na_values=[' ?'])
print(df.shape)
df.head(20)_____no_output_____df.dtypes_____no_output_____df.iloc[14][13]_____no_output_____
</code>
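Reading with `na_values=[' ?']` converts the placeholders to NaN, but nothing has been filled yet. Below is a minimal sketch of one simple strategy — an illustrative choice, not the only option (dropping rows or imputing with the per-column mode are common alternatives):_____no_output_____
<code>
# Fill the remaining NaNs (which come from categorical columns here) with an explicit label
df_filled = df.fillna('Unknown')
print(df_filled.isnull().sum().sum())  # expect 0_____no_output_____
</code>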
# Part 3 - Explore the Dataset:_____no_output_____## Look at "Summary Statistics"_____no_output_____### Numeric_____no_output_____
<code>
df.describe()_____no_output_____
</code>
### Non-Numeric_____no_output_____
<code>
df.describe(exclude="number")_____no_output_____
</code>
## Look at Categorical Values_____no_output_____# Part 4 - Basic Visualizations (using the Pandas Library)_____no_output_____## Histogram_____no_output_____
<code>
# Pandas Histogram
df['age'].plot(kind='hist')  # e.g. the distribution of age_____no_output_____
</code>
## Density Plot (KDE)_____no_output_____
<code>
# Pandas Density Plot
df['age'].plot(kind='density')  # smoothed (KDE) view of the same distribution_____no_output_____
</code>
## Scatter Plot_____no_output_____
<code>
# Pandas Scatterplot
df.plot(kind='scatter', x='age', y='hours-per-week')_____no_output_____
</code>
| {
"repository": "JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data",
"path": "module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 87986,
"hexsha": "d04eba5c99b3b9e5f958ae5c069860531d1cd673",
"max_line_length": 1190,
"avg_line_length": 39.8306926211,
"alphanum_fraction": 0.4726888369
} |
# Notebook from MarineLasbleis/GrowYourIC
Path: notebooks/sandbox-grow.ipynb
# Let's Grow your Own Inner Core!_____no_output_____### Choose a model in the list:
- geodyn_trg.TranslationGrowthRotation()
- geodyn_static.Hemispheres()
### Choose a proxy type:
- age
- position
- phi
- theta
- growth rate
### set the parameters for the model : geodynModel.set_parameters(parameters)
### set the units : geodynModel.define_units()
### Choose a data set:
- data.SeismicFromFile(filename) # Lauren's data set
- data.RandomData(numbers_of_points)
- data.PerfectSamplingEquator(numbers_of_points)
organized on a Cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4
- has a special plot function to show streamlines: plot_c_vec(self, modelgeodyn)
- data.PerfectSamplingEquatorRadial(Nr, Ntheta)
same as above, but organized on a polar grid rather than a Cartesian grid.
### Extract the info:
- calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel)
- extract the positions as numpy arrays: extract_rtp or extract_xyz
- calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point)_____no_output_____
<code>
%matplotlib inline
# import statements
import numpy as np
import matplotlib.pyplot as plt #for figures
from mpl_toolkits.basemap import Basemap #to render maps
import math
import json #to write dict with parameters
from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data
plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures
cm = plt.cm.get_cmap('viridis')
cm2 = plt.cm.get_cmap('winter')/Users/marine/.python-eggs/GrowYourIC-0.5-py3.5.egg-tmp/GrowYourIC/data/CM2008_data.mat
</code>
## Define the geodynamical model_____no_output_____Un-comment one of the model_____no_output_____
<code>
## un-comment one of them
geodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
# geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres. _____no_output_____
</code>
Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())_____no_output_____
<code>
age_ic_dim = 1e9 #in years
rICB_dim = 1221. #in km
v_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate
print("Growth rate is {:.2e} km/years".format(v_g_dim))
v_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7)
translation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010)
time_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)
maxAge = 2.*time_translation/1e6
print("The translation recycles the inner core material in {0:.2e} million years".format(maxAge))
print("Translation velocity is {0:.2e} km/years".format(translation_velocity_dim*np.pi*1e7/1e3))
units = None #we give them already dimensionless parameters.
rICB = 1.
age_ic = 1.
omega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. #-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[
print("Rotation rate is {:.2e}".format(omega))
velocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3
velocity_center = [0., 100.]#center of the eastern hemisphere
velocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude)
exponent_growth = 1.#0.1#1
print(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6)Growth rate is 1.22e-06 km/years
The translation recycles the inner core material in 2.50e+03 million years
Translation velocity is 9.77e-07 km/years
Rotation rate is 0.00e+00
1.221e-06 0.7999999999999999 0.0
</code>
Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)
You can re-define it later if you want (or define another proxy_type2 if needed)_____no_output_____
<code>
proxy_type = "age"#"growth rate"
proxy_name = "age (Myears)" #growth rate (km/Myears)"
proxy_lim = [0, maxAge] #or None
#proxy_lim = None
fig_name = "figures/test_" #to name the figures
print(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type)
print(velocity)1.0 1.0 0.7999999999999999 0.0 1.0 age
[ -1.38918542e-01 7.87846202e-01 4.89858720e-17]
</code>
### Parameters for the geodynamical model
This will input the different parameters in the model._____no_output_____
<code>
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity,
'exponent_growth': exponent_growth,
'omega': omega,
'proxy_type': proxy_type})
geodynModel.set_parameters(parameters)
geodynModel.define_units()
param = parameters
param['vt'] = parameters['vt'].tolist() #for json serialization
# write file with parameters, readable with json, byt also human-readable
with open(fig_name+'parameters.json', 'w') as f:
json.dump(param, f)
print(parameters){'exponent_growth': 1.0, 'vt': [-0.13891854213354424, 0.7878462024097663, 4.8985871965894125e-17], 'proxy_type': 'age', 'omega': 0.0, 'tau_ic': 1.0, 'units': None, 'rICB': 1.0}
</code>
## Different data set and visualisations_____no_output_____### Perfect sampling at the equator (to visualise the flow lines)
You can add more points to get a better precision._____no_output_____
<code>
npoints = 10 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingEquator(npoints, rICB = 1.)
data_set.method = "bt_point"
proxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="age", verbose = False)
data_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy="age (Myears)")
plt.savefig(fig_name+"equatorial_plot.pdf", bbox_inches='tight')===
== Evaluate value of proxy for all points of the data set
= Geodynamic model is Translation, Rotation and Growth
= Proxy is age
= Data set is Perfect sampling in the equatorial plane
= Proxy is evaluated for bt_point
= Number of points to examine: 60
===
</code>
### Perfect sampling in the first 100km (to visualise the depth evolution)_____no_output_____
<code>
data_meshgrid = data.Equator_upperpart(10,10)
data_meshgrid.method = "bt_point"
proxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_meshgrid.extract_rtp("bottom_turning_point")
fig3, ax3 = plt.subplots(figsize=(8, 2))
X, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid)
sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm)
sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k")
ax3.set_ylim(-0, 120)
fig3.gca().invert_yaxis()
ax3.set_xlim(-180,180)
cbar = fig3.colorbar(sc)
#cbar.set_clim(0, maxAge)
cbar.set_label(proxy_name)
ax3.set_xlabel("longitude")
ax3.set_ylabel("depth below ICB (km)")
plt.savefig(fig_name+"meshgrid.pdf", bbox_inches='tight')===
== Evaluate value of proxy for all points of the data set
= Geodynamic model is Translation, Rotation and Growth
= Proxy is age
= Data set is Meshgrid at the equator between 0 and 120km depth
= Proxy is evaluated for bt_point
= Number of points to examine: 100
===
npoints = 20 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01)
data_set.method = "bt_point"
proxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_set.extract_rtp("bottom_turning_point")
X, Y, Z = data_set.mesh_TPProxy(proxy_surface)
## map
m, fig = plot_data.setting_map()
y, x = m(Y, X)
sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+"map_surface.pdf", bbox_inches='tight')===
== Evaluate value of proxy for all points of the data set
= Geodynamic model is Translation, Rotation and Growth
= Proxy is age
= Data set is Perfect sampling at the surface
= Proxy is evaluated for bt_point
= Number of points to examine: 400
===
</code>
### Random data set, in the first 100km - bottom turning point only
#### Calculate the data_____no_output_____
<code>
# random data set
data_set_random = data.RandomData(300)
data_set_random.method = "bt_point"
proxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False)
data_path = "../GrowYourIC/data/"
geodynModel.data_path = data_path
if proxy_type == "age":
# ## domain size and Vp
proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="domain_size", verbose=False)
proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="dV_V", verbose=False)_____no_output_____r, t, p = data_set_random.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set_random.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_depth.pdf", bbox_inches='tight')_____no_output_____
</code>
### Real Data set from Waszek paper_____no_output_____
<code>
## real data set
data_set = data.SeismicFromFile("../GrowYourIC/data/WD11.dat")
data_set.method = "bt_point"
proxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False)
if proxy_type == "age":
## domain size and DV/V
proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="domain_size", verbose=False)
proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="dV_V", verbose=False)_____no_output_____r, t, p = data_set.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_depth.pdf", bbox_inches='tight')_____no_output_____
</code>
| {
"repository": "MarineLasbleis/GrowYourIC",
"path": "notebooks/sandbox-grow.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 1,
"size": 245096,
"hexsha": "d050abea73e867ef35b586140e4761e69d99aa4c",
"max_line_length": 110492,
"avg_line_length": 346.6704384724,
"alphanum_fraction": 0.9116183047
} |
# Notebook from googlegenomics/datalab-examples
Path: datalab/genomics/Getting started with the Genomics API.ipynb
<!-- Copyright 2015 Google Inc. All rights reserved. -->
<!-- Licensed under the Apache License, Version 2.0 (the "License"); -->
<!-- you may not use this file except in compliance with the License. -->
<!-- You may obtain a copy of the License at -->
<!-- http://www.apache.org/licenses/LICENSE-2.0 -->
<!-- Unless required by applicable law or agreed to in writing, software -->
<!-- distributed under the License is distributed on an "AS IS" BASIS, -->
<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
<!-- See the License for the specific language governing permissions and -->
<!-- limitations under the License. -->
# Getting started with the Google Genomics API_____no_output_____In this notebook we'll cover how to make authenticated requests to the [Google Genomics API](https://cloud.google.com/genomics/reference/rest/).
----
NOTE:
* If you're new to notebooks, or want to check out additional samples, check out the full [list](../) of general notebooks.
* For additional Genomics samples, check out the full [list](./) of Genomics notebooks._____no_output_____## Setup_____no_output_____### Install Python libraries_____no_output_____We'll be using the [Google Python API client](https://github.com/google/google-api-python-client) for interacting with Genomics API. We can install this library, or any other 3rd-party Python libraries from the [Python Package Index (PyPI)](https://pypi.python.org/pypi) using the `pip` package manager.
There are [50+ Google APIs](http://api-python-client-doc.appspot.com/) that you can work against with the Google Python API Client, but we'll focus on the Genomics API in this notebook._____no_output_____
<code>
!pip install --upgrade google-api-python-clientRequirement already up-to-date: google-api-python-client in /usr/local/lib/python2.7/dist-packages
Cleaning up...
</code>
### Create an Authenticated Client_____no_output_____Next we construct a Python object that we can use to make requests.
The following snippet shows how we can authenticate using the service account on the Datalab host. For more detail about authentication from Python, see [Using OAuth 2.0 for Server to Server Applications](https://developers.google.com/api-client-library/python/auth/service-accounts)._____no_output_____
<code>
from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http)
_____no_output_____
</code>
And then we create a client for the Genomics API._____no_output_____
<code>
from apiclient.discovery import build
genomics = build('genomics', 'v1', http=http)_____no_output_____
</code>
### Send a request to the Genomics API_____no_output_____Now that we have a Python client for the Genomics API, we can access a variety of different resources. For details about each available resource, see the python client [API docs here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html).
Using our `genomics` client, we'll demonstrate fetching a Dataset resource by ID (the [1000 Genomes dataset](http://googlegenomics.readthedocs.org/en/latest/use_cases/discover_public_data/1000_genomes.html) in this case).
First, we need to construct a request object._____no_output_____
<code>
request = genomics.datasets().get(datasetId='10473108253681171589')_____no_output_____
</code>
Next, we'll send this request to the Genomics API by calling the `request.execute()` method._____no_output_____
<code>
response = request.execute()_____no_output_____
</code>
You will need to enable the Genomics API for your project if you have not done so previously. Click on [this link](https://console.developers.google.com/flows/enableapi?apiid=genomics) to enable the API in your project._____no_output_____The response object returned is simply a Python dictionary. Let's take a look at the properties returned in the response._____no_output_____
<code>
for entry in response.items():
print "%s => %s" % entryprojectId => genomics-public-data
id => 10473108253681171589
createTime => 1970-01-01T00:00:00.000Z
name => 1000 Genomes
</code>
Success! We can see the name of the specified Dataset and a few other pieces of metadata.
Accessing other Genomics API resources will follow this same set of steps. The full [list of available resources within the API is here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html). Each resource has details about the different verbs that can be applied (e.g., [Dataset methods](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/genomics_v1.datasets.html))._____no_output_____## Access Data_____no_output_____In this portion of the notebook, we implement [this same example](https://github.com/googlegenomics/getting-started-with-the-api/tree/master/python) implemented as a python script. First let's define a few constants to use within the examples that follow._____no_output_____
<code>
dataset_id = '10473108253681171589' # This is the 1000 Genomes dataset ID
sample = 'NA12872'
reference_name = '22'
reference_position = 51003835_____no_output_____
</code>
### Get read bases for a sample at a specific position_____no_output_____First find the read group set ID for the sample._____no_output_____
<code>
request = genomics.readgroupsets().search(
body={'datasetIds': [dataset_id], 'name': sample},
fields='readGroupSets(id)')
read_group_sets = request.execute().get('readGroupSets', [])
if len(read_group_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of read group sets' % sample)
read_group_set_id = read_group_sets[0]['id']_____no_output_____
</code>
Once we have the read group set ID, look up the reads at the position in which we are interested._____no_output_____
<code>
request = genomics.reads().search(
body={'readGroupSetIds': [read_group_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1,
'pageSize': 1024},
fields='alignments(alignment,alignedSequence)')
reads = request.execute().get('alignments', [])_____no_output_____
</code>
And we print out the results._____no_output_____
<code>
# Note: This is simplistic - the cigar should be considered for real code
bases = [read['alignedSequence'][
reference_position - int(read['alignment']['position']['position'])]
for read in reads]
print '%s bases on %s at %d are' % (sample, reference_name, reference_position)
from collections import Counter
for base, count in Counter(bases).items():
print '%s: %s' % (base, count)NA12872 bases on 22 at 51003835 are
C: 1
G: 13
</code>
### Get variants for a sample at a specific position_____no_output_____First find the call set ID for the sample._____no_output_____
<code>
request = genomics.callsets().search(
body={'variantSetIds': [dataset_id], 'name': sample},
fields='callSets(id)')
resp = request.execute()
call_sets = resp.get('callSets', [])
if len(call_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of call sets' % sample)
call_set_id = call_sets[0]['id']_____no_output_____
</code>
Once we have the call set ID, look up the variants that overlap the position in which we are interested._____no_output_____
<code>
request = genomics.variants().search(
body={'callSetIds': [call_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1},
fields='variants(names,referenceBases,alternateBases,calls(genotype))')
variant = request.execute().get('variants', [])[0]_____no_output_____
</code>
And we print out the results._____no_output_____
<code>
variant_name = variant['names'][0]
genotype = [variant['referenceBases'] if g == 0
else variant['alternateBases'][g - 1]
for g in variant['calls'][0]['genotype']]
print 'the called genotype is %s for %s' % (','.join(genotype), variant_name)the called genotype is G,G for rs131767
</code>
| {
"repository": "googlegenomics/datalab-examples",
"path": "datalab/genomics/Getting started with the Genomics API.ipynb",
"matched_keywords": [
"genomics"
],
"stars": 24,
"size": 13303,
"hexsha": "d0548838f2d882bd78bf4ab7a5520834e37ddf35",
"max_line_length": 457,
"avg_line_length": 27.7145833333,
"alphanum_fraction": 0.5771630459
} |
# Notebook from ventolab/HGDA
Path: immune_CD45enriched_load_detect_doublets.ipynb
<code>
import scrublet as scr
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt
import os
import sys
import scipy
def MovePlots(plotpattern, subplotdir):
os.system('mkdir -p '+str(sc.settings.figdir)+'/'+subplotdir)
os.system('mv '+str(sc.settings.figdir)+'/*'+plotpattern+'** '+str(sc.settings.figdir)+'/'+subplotdir)
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.figdir = './final-figures/'
sc.logging.print_versions()
sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
sys.executableWARNING: If you miss a compact list, please try `print_header`!
# Benjamini-Hochberg and Bonferroni FDR helper functions.
def bh(pvalues):
"""
Computes the Benjamini-Hochberg FDR correction.
Input:
* pvals - vector of p-values to correct
"""
pvalues = np.array(pvalues)
n = int(pvalues.shape[0])
new_pvalues = np.empty(n)
values = [ (pvalue, i) for i, pvalue in enumerate(pvalues) ]
values.sort()
values.reverse()
new_values = []
for i, vals in enumerate(values):
rank = n - i
pvalue, index = vals
new_values.append((n/rank) * pvalue)
for i in range(0, int(n)-1):
if new_values[i] < new_values[i+1]:
new_values[i+1] = new_values[i]
for i, vals in enumerate(values):
pvalue, index = vals
new_pvalues[index] = new_values[i]
return new_pvalues
def bonf(pvalues):
"""
Computes the Bonferroni FDR correction.
Input:
* pvals - vector of p-values to correct
"""
new_pvalues = np.array(pvalues) * len(pvalues)
new_pvalues[new_pvalues>1] = 1
return new_pvalues_____no_output_____
</code>
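As a quick illustration of what the two helpers return, here is a tiny worked example; the expected values follow directly from the definitions above (e.g. the BH-adjusted value for 0.03 is (4/3)*0.03 = 0.04):_____no_output_____
<code>
# Benjamini-Hochberg: [0.01, 0.02, 0.03, 0.5] -> approximately [0.04, 0.04, 0.04, 0.5]
print(bh([0.01, 0.02, 0.03, 0.5]))
# Bonferroni: multiply by n and cap at 1 -> [0.04, 0.08, 0.12, 1.0]
print(bonf([0.01, 0.02, 0.03, 0.5]))_____no_output_____
</code>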
Scrublet
(Courtesy of K. Polansky)
Two-step doublet score processing, mirroring the approach from Popescu et al. https://www.nature.com/articles/s41586-019-1652-y which was closely based on Pijuan-Sala et al. https://www.nature.com/articles/s41586-019-0933-9
The first step starts with some sort of doublet score, e.g. Scrublet, and ends up with a per-cell p-value (with significant values marking doublets). For each sample individually:
- run Scrublet to obtain each cell's score
- overcluster the manifold - run a basic Scanpy pipeline up to clustering, then additionally cluster each cluster separately
- compute per-cluster Scrublet scores as the median of the observed values, and use those going forward
- identify p-values:
    - compute normal distribution parameters: centered at the median of the scores, with a MAD-derived standard deviation
    - the score distribution is zero-truncated, so as per the paper only above-median values are used to compute the MAD
    - deviating slightly from the paper's exact wording, the MAD is multiplied by 1.4826 to obtain a literature-derived estimate of the normal distribution's standard deviation
- FDR-correct the p-values via Benjamini-Hochberg
- write out all this doublet info into CSVs for later use
NOTE: The second step is performed later, in a multi-sample space_____no_output_____
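As a small sanity check of the 1.4826 factor used below (for normally distributed data, the standard deviation is approximately 1.4826 times the MAD), here is a purely illustrative cell on simulated data, independent of the pipeline that follows:_____no_output_____
<code>
# For a normal distribution, 1.4826 * MAD recovers the standard deviation
x = np.random.RandomState(0).normal(loc=0.0, scale=2.0, size=100000)
mad = np.median(np.abs(x - np.median(x)))
print(1.4826 * mad, x.std())  # both should be close to 2.0_____no_output_____
</code>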
<code>
path_to_data = '/nfs/users/nfs_l/lg18/team292/lg18/gonads/data/scRNAseq/FCA/rawdata/'_____no_output_____metadata = pd.read_csv(path_to_data + 'immune_meta.csv', index_col=0)
metadata['process'].value_counts()_____no_output_____# Select process = CD45+
metadata_enriched = metadata[metadata['process'] == 'CD45+']
metadata_enriched_____no_output_____metadata_enriched['stage'] = metadata_enriched['stage'].astype('str')
plotmeta = list(metadata_enriched.columns)
plotmeta.append('sample')
print('Number of samples: ', metadata_enriched.index.size)Number of samples: 11
#there's loads of clustering going on, so set verbosity low unless you enjoy walls of text
sc.settings.verbosity = 0 # verbosity: errors (0), warnings (1), info (2), hints (3)
scorenames = ['scrublet_score','scrublet_cluster_score','zscore','bh_pval','bonf_pval']
if not os.path.exists('scrublet-scores'):
os.makedirs('scrublet-scores')
#loop over the subfolders of the rawdata folder
samples = metadata_enriched.index.to_list()
for sample in list(reversed(samples)):
print(sample)
#import data
adata_sample = sc.read_10x_mtx(path_to_data + sample + '/filtered_feature_bc_matrix/',cache=True)
adata_sample.var_names_make_unique()
#rename cells to SAMPLE_BARCODE
adata_sample.obs_names = [sample+'_'+i for i in adata_sample.obs_names]
#do some early filtering to retain meaningful cells for doublet inspection
sc.pp.filter_cells(adata_sample, min_genes=200)
sc.pp.filter_genes(adata_sample, min_cells=3)
#convert to lower to be species agnostic: human mito start with MT-, mouse with mt-
mito_genes = [name for name in adata_sample.var_names if name.lower().startswith('mt-')]
# for each cell compute fraction of counts in mito genes vs. all genes
# the `.A1` is only necessary as X is sparse (to transform to a dense array after summing)
adata_sample.obs['percent_mito'] = np.sum(
adata_sample[:, mito_genes].X, axis=1).A1 / np.sum(adata_sample.X, axis=1).A1
adata_sample = adata_sample[adata_sample.obs['percent_mito'] < 0.2, :]
#set up and run Scrublet, seeding for replicability
np.random.seed(0)
scrub = scr.Scrublet(adata_sample.X)
doublet_scores, predicted_doublets = scrub.scrub_doublets(verbose=False)
adata_sample.obs['scrublet_score'] = doublet_scores
#overcluster prep. run turbo basic scanpy pipeline
sc.pp.normalize_per_cell(adata_sample, counts_per_cell_after=1e4)
sc.pp.log1p(adata_sample)
sc.pp.highly_variable_genes(adata_sample, min_mean=0.0125, max_mean=3, min_disp=0.5)
adata_sample = adata_sample[:, adata_sample.var['highly_variable']]
sc.pp.scale(adata_sample, max_value=10)
sc.tl.pca(adata_sample, svd_solver='arpack')
sc.pp.neighbors(adata_sample)
#overclustering proper - do basic clustering first, then cluster each cluster
sc.tl.leiden(adata_sample)
adata_sample.obs['leiden'] = [str(i) for i in adata_sample.obs['leiden']]
for clus in np.unique(adata_sample.obs['leiden']):
adata_sub = adata_sample[adata_sample.obs['leiden']==clus].copy()
sc.tl.leiden(adata_sub)
adata_sub.obs['leiden'] = [clus+','+i for i in adata_sub.obs['leiden']]
adata_sample.obs.loc[adata_sub.obs_names,'leiden'] = adata_sub.obs['leiden']
#compute the cluster scores - the median of Scrublet scores per overclustered cluster
for clus in np.unique(adata_sample.obs['leiden']):
adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_cluster_score'] = \
np.median(adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_score'])
#now compute doublet p-values. figure out the median and mad (from above-median values) for the distribution
med = np.median(adata_sample.obs['scrublet_cluster_score'])
mask = adata_sample.obs['scrublet_cluster_score']>med
mad = np.median(adata_sample.obs['scrublet_cluster_score'][mask]-med)
#let's do a one-sided test. the Bertie write-up does not address this but it makes sense
zscores = (adata_sample.obs['scrublet_cluster_score'].values - med) / (1.4826 * mad)
adata_sample.obs['zscore'] = zscores
pvals = 1-scipy.stats.norm.cdf(zscores)
adata_sample.obs['bh_pval'] = bh(pvals)
adata_sample.obs['bonf_pval'] = bonf(pvals)
#create results data frame for single sample and copy stuff over from the adata object
scrublet_sample = pd.DataFrame(0, index=adata_sample.obs_names, columns=scorenames)
for score in scorenames:
scrublet_sample[score] = adata_sample.obs[score]
#write out complete sample scores
scrublet_sample.to_csv('scrublet-scores/'+sample+'.csv')FCA_GND8784459
</code>
#### End of notebook_____no_output_____
| {
"repository": "ventolab/HGDA",
"path": "immune_CD45enriched_load_detect_doublets.ipynb",
"matched_keywords": [
"Scanpy"
],
"stars": null,
"size": 109085,
"hexsha": "d0581821b89eda9e359cc4763d39dd5e859faa58",
"max_line_length": 232,
"avg_line_length": 80.2685798381,
"alphanum_fraction": 0.6977311271
} |
# Notebook from schabertrobbinger/jupyter-notebook-slides
Path: Presentation.ipynb
**Fact: Amazon.com is rife with deceptive product marketing.**_____no_output_____<img src="reviews.png">
If you squint hard enough, you can see that Warren Buffett is **not** actually the author of this book..._____no_output_____It is also easy to guess why this book has so many five star reviews:
<img src="suspiciousreview.png">_____no_output_____**Question: Can we improve on Amazon.com's ratings and review ranking algorithm?**_____no_output_____It is not clear, for example, that Amazon's ratings are meaningful at all.
By analyzing a little over two million video game reviews from Amazon.com, I concluded that the positivity bias seen in the case above is far from a rare occurrence:
<img src="ratingshistogram1.png">_____no_output_____Perhaps unsurprisingly given the above, the number of review upvotes actually appears to be anticorrelated with rating:
<img src="ratingsupvotehistogram.png">_____no_output_____**1) What should correlate well with the number of review upvotes?**_____no_output_____Since the number of upvotes is a measure of how helpful a given review was to consumers, I guessed that the length of the review text should correlate with helpfulness, because more information ("substantive content") is provided by the reviewer:
<img src="upvoteplot1.png">_____no_output_____**2) Does it make sense for Amazon.com to preferentially rank newer reviews and reviews by its anointed Vine contributors?**_____no_output_____One might worry that older reviews will necessarily garner a greater number of upvotes than newer reviews, solely because they have been around for longer. That concern does not seem to be strongly reflected in the data:
<img src="timestamphistogram.png">_____no_output_____<img src="topreviewer.png">
<img src="bestreview.png">_____no_output_____**We seem well on our way to a complementary alternative to Amazon.com's current system, even before doing any machine learning!**
**Next step: Identify key words and phrases which indicate products should be avoided.**_____no_output_____
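For readers who want to reproduce the length-versus-upvotes check described above, a minimal pandas sketch is shown below; the file name and the `review_text`/`upvotes` column names are placeholders for whatever the scraped review dataset actually uses:
<code>
import pandas as pd

# Hypothetical input: one row per review, with the review body and its helpful-vote count.
reviews = pd.read_csv("video_game_reviews.csv")  # placeholder path
reviews["review_length"] = reviews["review_text"].str.len()

# Spearman rank correlation is a reasonable choice here because upvote counts are heavily skewed.
print(reviews[["review_length", "upvotes"]].corr(method="spearman"))
</code>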
| {
"repository": "schabertrobbinger/jupyter-notebook-slides",
"path": "Presentation.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 4121,
"hexsha": "d05c3b154d279ab6af776e4e8b6de5c45205477d",
"max_line_length": 253,
"avg_line_length": 23.5485714286,
"alphanum_fraction": 0.5763164281
} |
# Notebook from alex-w/lightkurve
Path: docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb
# Plotting Target Pixel Files with Lightkurve_____no_output_____## Learning Goals
By the end of this tutorial, you will:
- Learn how to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).
- Be able to plot the target pixel file background.
- Be able to extract and plot flux from a target pixel file.
_____no_output_____## Introduction_____no_output_____The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.
Pixels around targeted stars are cut out and stored as *target pixel files* at each observing cadence. In this tutorial, we will learn how to use Lightkurve to download and understand the different photometric data stored in a target pixel file, and how to extract flux using basic aperture photometry.
It is useful to read the accompanying tutorial discussing how to use target pixel file products with Lightkurve before starting this tutorial. It is recommended that you also read the tutorial on using *Kepler* light curve products with Lightkurve, which will introduce you to some specifics on how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.
*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. These cutout images are called target pixel files, or TPFs. By combining the amount of flux in the pixels where the star appears, you can make a measurement of the amount of light from a star in that observation. The pixels chosen to include in this measurement are referred to as an *aperture*.
TPFs are typically the first port of call when studying a star with *Kepler*, *K2*, or *TESS*. They allow us to see where our data is coming from, and identify potential sources of noise or systematic trends. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally apply to *TESS* and *K2* as well._____no_output_____## Imports
This tutorial requires:
- **[Lightkurve](https://docs.lightkurve.org)** to work with TPF files.
- [**Matplotlib**](https://matplotlib.org/) for plotting._____no_output_____
<code>
import lightkurve as lk
import matplotlib.pyplot as plt
%matplotlib inline_____no_output_____
</code>
## 1. Downloading a TPF_____no_output_____A TPF contains the original imaging data from which a light curve is derived. Besides the brightness data measured by the charge-coupled device (CCD) camera, a TPF also includes post-processing information such as an estimate of the astronomical background, and a recommended pixel aperture for extracting a light curve.
First, we download a target pixel file. We will use one quarter's worth of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter)._____no_output_____
<code>
search_result = lk.search_targetpixelfile("Kepler-8", author="Kepler", quarter=4, cadence="long")
search_result_____no_output_____tpf = search_result.download()_____no_output_____
</code>
This TPF contains data for every cadence in the quarter we downloaded. Let's focus on the first cadence for now, which we can select using zero-based indexing as follows:_____no_output_____
<code>
first_cadence = tpf[0]
first_cadence_____no_output_____
</code>
## 2. Flux and Background_____no_output_____At each cadence the TPF has a number of photometry data properties. These are:
- `flux_bkg`: the astronomical background of the image.
- `flux_bkg_err`: the statistical uncertainty on the background flux.
- `flux`: the stellar flux after the background is removed.
- `flux_err`: the statistical uncertainty on the stellar flux after background removal.
These properties can be accessed via a TPF object as follows:_____no_output_____
<code>
first_cadence.flux.value_____no_output_____
</code>
And you can plot the data as follows:_____no_output_____
<code>
first_cadence.plot(column='flux');_____no_output_____
</code>
Alternatively, if you are working directly with a FITS file, you can access the data in extension 1 (for example, `first_cadence.hdu[1].data['FLUX']`). Note that you can find all of the details on the structure and contents of TPF files in Section 2.3.2 of the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf)._____no_output_____When plotting data using the `plot()` function, what you are seeing in the TPF is the flux *after* the background has been removed. This background flux typically consists of [zodiacal light](https://en.wikipedia.org/wiki/Zodiacal_light) or earthshine (especially in *TESS* observations). The background is typically smooth and changes on scales much larger than a single TPF. In *Kepler*, the background is estimated for the CCD as a whole, before being extracted from each TPF in that CCD. You can learn more about background removal in Section 4.2 of the [*Kepler* Data Processing Handbook](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf)._____no_output_____Now, let's compare the background to the background-subtracted flux to get a sense of scale. We can do this using the `plot()` function's `column` keyword. By default the function plots the flux, but we can change this to plot the background, as well as other data such as the error on each pixel's flux._____no_output_____
<code>
fig, axes = plt.subplots(2,2, figsize=(16,16))
first_cadence.plot(ax=axes[0,0], column='FLUX')
first_cadence.plot(ax=axes[0,1], column='FLUX_BKG')
first_cadence.plot(ax=axes[1,0], column='FLUX_ERR')
first_cadence.plot(ax=axes[1,1], column='FLUX_BKG_ERR');_____no_output_____
</code>
From looking at the color scale on both plots, you may see that the background flux is very low compared to the total flux emitted by a star. This is expected — stars are bright! But these small background corrections become important when looking at the very small scale changes caused by planets or stellar oscillations. Understanding the background is an important part of astronomy with *Kepler*, *K2*, and *TESS*._____no_output_____If the background is particularly bright and you want to see what the TPF looks like with it included, passing the `bkg=True` argument to the `plot()` method will show the TPF with the flux added on top of the background, representing the total flux recorded by the spacecraft._____no_output_____
<code>
first_cadence.plot(bkg=True);_____no_output_____
</code>
In this case, the background is low and the star is bright, so it doesn't appear to make much of a difference._____no_output_____## 3. Apertures_____no_output_____As part of the data processing done by the *Kepler* pipeline, each TPF includes a recommended *optimal aperture mask*. This aperture mask is optimized to ensure that the stellar signal has a high signal-to-noise ratio, with minimal contamination from the background._____no_output_____The optimal aperture is stored in the TPF as the `pipeline_mask` property. We can have a look at it by calling it here:_____no_output_____
<code>
first_cadence.pipeline_mask_____no_output_____
</code>
As you can see, it is a Boolean array detailing which pixels are included. We can plot this aperture over the top of our TPF using the `plot()` function, and passing in the mask to the `aperture_mask` keyword. This will highlight the pixels included in the aperture mask using red hatched lines._____no_output_____
<code>
first_cadence.plot(aperture_mask=first_cadence.pipeline_mask);_____no_output_____
</code>
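For comparison, here is the same plot with a simple threshold mask in place of the pipeline aperture (an illustrative sketch; the cutoff value is arbitrary):
<code>
# Any Boolean array with the same shape as a single cadence image can act as an aperture.
custom_mask = first_cadence.flux[0].value > 200
first_cadence.plot(aperture_mask=custom_mask);
</code>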
You don't necessarily have to pass in the `pipeline_mask` to the `plot()` function; it can be any mask you create yourself, provided it is the right shape. An accompanying tutorial explains how to create such custom apertures, and goes into aperture photometry in more detail. For specifics on the selection of *Kepler*'s optimal apertures, read the [*Kepler* Data Processing Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf), Section 7, *Finding Optimal Apertures in Kepler Data*._____no_output_____## 4. Simple Aperture Photometry_____no_output_____Finally, let's learn how to perform simple aperture photometry (SAP) using the provided optimal aperture in `pipeline_mask` and the TPF._____no_output_____Using the full TPF for all cadences in the quarter, we can perform aperture photometry using the `to_lightcurve()` method as follows:_____no_output_____
<code>
lc = tpf.to_lightcurve()_____no_output_____
</code>
This method returns a `LightCurve` object which details the flux and flux centroid position at each cadence:_____no_output_____
<code>
lc_____no_output_____
</code>
Note that this [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) object has fewer data columns than in light curves downloaded directly from MAST. This is because we are extracting our light curve directly from the TPF using minimal processing, whereas light curves created using the official pipeline include more processing and more columns.
We can visualize the light curve as follows:_____no_output_____
<code>
lc.plot();_____no_output_____
</code>
This light curve is similar to the SAP light curve we previously encountered in the light curve tutorial._____no_output_____### Note
The background flux can be plotted in a similar way, using the [`get_bkg_lightcurve()`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html#lightkurve.targetpixelfile.KeplerTargetPixelFile.get_bkg_lightcurve) method. This does not require an aperture, but instead sums the flux in the TPF's `FLUX_BKG` column at each timestamp. _____no_output_____
<code>
bkg = tpf.get_bkg_lightcurve()
bkg.plot();_____no_output_____
</code>
Inspecting the background in this way is useful to identify signals which appear to be present in the background rather than in the astronomical object under study._____no_output_____---_____no_output_____## Exercises_____no_output_____Some stars, such as the planet-hosting star Kepler-10, have been observed both with *Kepler* and *TESS*. In this exercise, download and plot both the *TESS* and *Kepler* TPFs, along with the optimal apertures. You can do this by either selecting the TPFs from the list returned by [`search_targetpixelfile()`](https://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html), or by using the `mission` keyword argument when searching.
Both *Kepler* and *TESS* produce target pixel file data products, but these can look different across the two missions. *TESS* is focused on brighter stars and has larger pixels, so a star that might occupy many pixels in *Kepler* may only occupy a few in *TESS*.
How do light curves extracted from both of them compare?_____no_output_____
<code>
#datalist = lk.search_targetpixelfile(...)
_____no_output_____#soln:
datalist = lk.search_targetpixelfile("Kepler-10")
datalist_____no_output_____kep = datalist[6].download()
tes = datalist[15].download()_____no_output_____fig, axes = plt.subplots(1, 2, figsize=(14,6))
kep.plot(ax=axes[0], aperture_mask=kep.pipeline_mask, scale='log')
tes.plot(ax=axes[1], aperture_mask=tes.pipeline_mask)
fig.tight_layout();_____no_output_____lc_kep = kep.to_lightcurve()
lc_tes = tes.to_lightcurve()_____no_output_____fig, axes = plt.subplots(1, 2, figsize=(14,6), sharey=True)
lc_kep.flatten().plot(ax=axes[0], c='k', alpha=.8)
lc_tes.flatten().plot(ax=axes[1], c='k', alpha=.8);_____no_output_____
</code>
If you plot the light curves for both missions side by side, you will see a stark difference. The *Kepler* data has a much smaller scatter, and repeating transits are visible. This is because *Kepler*'s pixels were smaller, and so could achieve a higher precision on fainter stars. *TESS* has larger pixels and therefore focuses on brighter stars. For stars like Kepler-10, it would be hard to detect a planet using *TESS* data alone._____no_output_____## About this Notebook_____no_output_____**Authors:** Oliver Hall ([email protected]), Geert Barentsen
**Updated On**: 2020-09-15_____no_output_____## Citing Lightkurve and Astropy
If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard._____no_output_____lk.show_citation_instructions()_____no_output_____<img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>
_____no_output_____
| {
"repository": "alex-w/lightkurve",
"path": "docs/source/tutorials/1-getting-started/plotting-target-pixel-files.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 29167,
"hexsha": "d05c7ff06fb2d6c08d620f353884cf397338097e",
"max_line_length": 719,
"avg_line_length": 30.2562240664,
"alphanum_fraction": 0.6140501251
} |
# Notebook from eunicenjuguna/Python4Bioinformatics2020
Path: Notebooks/00.ipynb
# Python For Bioinformatics
Introduction to Python for Bioinformatics - available at https://github.com/kipkurui/Python4Bioinformatics.
<small><small><i>
## Attribution
These tutorials are an adaptation of the Introduction to Python for Maths by [Andreas Ernst](http://users.monash.edu.au/~andreas), available from https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git. The original version was written by Rajath Kumar and is available at https://github.com/rajathkumarmp/Python-Lectures.
These notes have been greatly amended and updated by [Caleb Kibet](https://twitter.com/calkibet) for the MSc in Bioinformatics and Molecular Biology at Pwani University, sponsored by EANBiT.
</small></small></i>
# Quick Introduction to Jupyter Notebooks
Throughout this course, we will be using Jupyter Notebooks. Although the HPC you will be using will have Jupyter set up, these notes are provided in case you want to set it up on your own computer.
## Introduction
The Jupyter Notebook is an interactive computing environment that enables users to author notebooks, which contain a complete and self-contained record of a computation and can easily be shared with others. The notebooks may contain:
* Live code
* Interactive widgets
* Plots
* Narrative text
* Equations
* Images
* Video
It is good to note that "Jupyter" is a loose acronym meaning Julia, Python, and R; the primary languages supported by Jupyter.
The notebook allows a computational researcher to create reproducible documentation of their research. As bioinformatics is data-centric, the use of Jupyter Notebooks increases research transparency, hence promoting open science.
## First Steps
### Installation
1. [Download Miniconda](https://www.anaconda.com/download/) for your specific OS to your home directory
- Linux: `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh`
- Mac: `curl -O https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh`
2. Run:
- `bash Miniconda3-latest-Linux-x86_64.sh`
- `bash Miniconda3-latest-MacOSX-x86_64.sh`
3. Follow all the prompts: if unsure, accept defaults
4. Close and re-open your terminal
5. If the installation is successful, you should see a list of installed packages with
- `conda list`
If the command cannot be found, you can add the Miniconda `bin` directory to your PATH using:
`export PATH=~/miniconda3/bin:$PATH` (use `~/anaconda3/bin` instead if you installed the full Anaconda distribution)
For reproducible analysis, you can [create a conda environment](https://conda.io/docs/user-guide/tasks/manage-environments.html) with all the Python packages you used.
`conda create --name bioinf python jupyter`
To activate the conda environment:
`source activate bioinf`
Having set up the conda environment, you can install any package you need using conda or pip, for example:
`conda install jupyter`
`conda install -c conda-forge jupyterlab`
or by using pip
`pip3 install jupyter`
Then you can quickly launch it using:
`jupyter notebook` or `jupyter lab`
NB: We will use JupyterLab for the training.
A Jupyter notebook is made up of many cells. Each cell can contain Python code. You can execute a cell by clicking on it and pressing `Shift-Enter` (run and move to the next cell) or `Ctrl-Enter` (run without moving to the next cell).
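For example, a first code cell might contain nothing more than the following (a minimal illustration):
<code>
# Run this cell with Shift-Enter
greeting = "Hello, bioinformatics!"
print(greeting)
</code>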
### Further help
To learn more about Jupyter notebooks, check [the official introduction](http://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb) and [some useful Jupyter Tricks](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/).
Book: http://www.ict.ru.ac.za/Resources/cspw/thinkcspy3/thinkcspy3.pdf
# Python for Bioinformatics
## Introduction
Python is a modern, robust, high-level programming language. It is straightforward to pick up even if you are entirely new to programming.
Python, similar to other languages like Matlab or R, is interpreted and hence runs slowly compared to C++, Fortran or Java. However, writing programs in Python is very quick. Python has an extensive collection of libraries for everything from scientific computing to web services. It caters for object-oriented and functional programming, with a module system that allows large and complex applications to be developed in Python.
These lectures are using Jupyter notebooks which mix Python code with documentation. The python notebooks can be run on a web server or stand-alone on a computer.
## Contents
This course is broken up into a number of notebooks (lectures).
### Session 1
* [01](01.ipynb) Basic data types and operations (numbers, strings)
* [02](02.ipynb) String manipulation
### Session 2
* [03](03.ipynb) Data structures: Lists and Tuples
* [04](04.ipynb) Data structures (continued): dictionaries
### Session 3
* [05](05.ipynb) Control statements: if, for, while, try statements
* [06](06.ipynb) Functions
* [07](07.ipynb) Files, Scripting and Modules
### Session 4
* [08](08.ipynb) Data Analysis and plotting with Pandas
* [09](09.ipynb) Reproducible Bioinformatics Research
* [10](10.ipynb) Introduction to Biopython
This is a tutorial style introduction to Python. For a quick reminder/summary of Python syntax, the following [Quick Reference Card](http://www.cs.put.poznan.pl/csobaniec/software/python/py-qrc.html) may be useful. A longer and more detailed tutorial style introduction to python is available from the python site at: https://docs.python.org/3/tutorial/.
## How to learn from this resource?
Download all the notebooks from [Python4Bioinformatics](https://github.com/kipkurui/Python4Bioinformatics2019). The easiest way to do that is to clone the GitHub repository to your working directory using any of the following commands:
git clone https://github.com/kipkurui/Python4Bioinformatics2019.git
or
wget https://github.com/kipkurui/Python4Bioinformatics2019/archive/master.zip
unzip master.zip
rm master.zip
## How to Contribute
To contribute, fork the repository, make some updates and send me a pull request.
Alternatively, you can open an issue.
## License
This work is licensed under the Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/.
_____no_output_____
| {
"repository": "eunicenjuguna/Python4Bioinformatics2020",
"path": "Notebooks/00.ipynb",
"matched_keywords": [
"BioPython",
"bioinformatics",
"biology"
],
"stars": null,
"size": 7905,
"hexsha": "d05d5137720080c10b6473a0594d04cafd439fa1",
"max_line_length": 434,
"avg_line_length": 47.3353293413,
"alphanum_fraction": 0.6564199873
} |
# Notebook from sreramk1/sentiment-analysis
Path: sentiment_analysis_experiment/Sentiment_analysis_experiment_1.ipynb
<code>
import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf
tf.config.run_functions_eagerly(False)
#tfds.disable_progress_bar()_____no_output_____tf.version.VERSION_____no_output_____import pandas as pd _____no_output_____dataset = pd.read_csv("/content/drive/MyDrive/sentiment-dataset/airline_sentiment_analysis.csv")_____no_output_____print (dataset[:10])
print (dataset[len(dataset) - 10:]) Unnamed: 0 ... text
0 1 ... @VirginAmerica plus you've added commercials t...
1 3 ... @VirginAmerica it's really aggressive to blast...
2 4 ... @VirginAmerica and it's a really big bad thing...
3 5 ... @VirginAmerica seriously would pay $30 a fligh...
4 6 ... @VirginAmerica yes, nearly every time I fly VX...
5 8 ... @virginamerica Well, I didn't…but NOW I DO! :-D
6 9 ... @VirginAmerica it was amazing, and arrived an ...
7 11 ... @VirginAmerica I <3 pretty graphics. so muc...
8 12 ... @VirginAmerica This is such a great deal! Alre...
9 13 ... @VirginAmerica @virginmedia I'm flying your #f...
[10 rows x 3 columns]
Unnamed: 0 ... text
11531 14627 ... @AmericanAir Flight Cancelled Flightled, can't...
11532 14628 ... Thank you. “@AmericanAir: @jlhalldc Customer R...
11533 14629 ... @AmericanAir How do I change my flight if the ...
11534 14630 ... @AmericanAir Thanks! He is.
11535 14631 ... @AmericanAir thx for nothing on getting us out...
11536 14633 ... @AmericanAir my flight was Cancelled Flightled...
11537 14634 ... @AmericanAir right on cue with the delays👌
11538 14635 ... @AmericanAir thank you we got on a different f...
11539 14636 ... @AmericanAir leaving over 20 minutes Late Flig...
11540 14638 ... @AmericanAir you have my money, you change my ...
[10 rows x 3 columns]
def process(txt):
    # Strip Twitter @-mentions (e.g. @VirginAmerica) so only the message text remains
    return ' '.join(word for word in txt.split(' ') if not word.startswith('@'))
process(" word1 word2 word3 @word4 word5 word6")_____no_output_____dataset_processed = pd.DataFrame.copy(dataset, deep=True)
dataset_processed['text'] = dataset['text'].apply(process)
print(dataset_processed[:3])
print(dataset_processed[len(dataset_processed) - 3:])
Unnamed: 0 ... text
0 1 ... plus you've added commercials to the experienc...
1 3 ... it's really aggressive to blast obnoxious "ent...
2 4 ... and it's a really big bad thing about it
[3 rows x 3 columns]
Unnamed: 0 ... text
11538 14635 ... thank you we got on a different flight to Chic...
11539 14636 ... leaving over 20 minutes Late Flight. No warnin...
11540 14638 ... you have my money, you change my flight, and d...
[3 rows x 3 columns]
from sklearn.model_selection import train_test_split_____no_output_____def process_label(label):
if label == "negative":
return 0
elif label == "positive":
return 1
raise Exception("unrecognized label")_____no_output_____dataset_processed['airline_sentiment'] = dataset_processed['airline_sentiment'].apply(process_label)_____no_output_____dataset_train, dataset_test = train_test_split(dataset_processed, test_size = 0.2)_____no_output_____dataset_train[100:125]_____no_output_____len(dataset_train)_____no_output_____BUFFER_SIZE = 10000
BATCH_SIZE = 64_____no_output_____dataset_train_text_tf = tf.convert_to_tensor(dataset_train['text'], dtype=tf.string)
dataset_train_label_tf = tf.convert_to_tensor(dataset_train['airline_sentiment'], dtype=tf.float32)
dataset_test_text_tf = tf.convert_to_tensor(dataset_test['text'], dtype=tf.string)
dataset_test_lable_tf = tf.convert_to_tensor(dataset_test['airline_sentiment'], dtype=tf.float32)
dataset_train_tf = tf.data.Dataset.from_tensor_slices((dataset_train_text_tf, dataset_train_label_tf))
dataset_test_tf = tf.data.Dataset.from_tensor_slices((dataset_test_text_tf, dataset_test_lable_tf))
_____no_output_____count = 10
i = 0
for ele in dataset_train_tf.as_numpy_iterator():
if i >= count:
break
print (ele)
i += 1(b'why must you always delay my Late Flight night Orlando flights? \xf0\x9f\x92\x94', 0.0)
(b'So appreciated!', 1.0)
(b'thanks, keep up the good work', 1.0)
(b"we never received that $15 credit for inoperable tv's on our SFO > JFK flight 2 weeks ago. never got an email...", 0.0)
(b'what response? Is our flight out of Montrose Cancelled Flightled or not?', 0.0)
(b"so you don't have a pilot now for #clt \xe2\x9c\x88 #ord for at least another hour. Why on earth would you board the plane? Makes no sense!", 0.0)
(b"LUV Ya Too!!!! I will sing a song for y'all when I finally get on that plane back to Nashville!!! #LOVESOUTHWESTAIR", 1.0)
(b"she's the type of person that can make a customers day! I fly 100+ times a year & she's one of the top flight attendants I've had!", 1.0)
(b"really not acceptable. Just informed plane won't start. Chartering bus to take passengers to jfk.", 0.0)
(b'Also, been on hold for 30 minutes with your "customer service" to find out when my new flight is scheduled bc your site SUCKS', 0.0)
train_dataset_batched_tf = dataset_train_tf.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset_batched_tf = dataset_test_tf.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)_____no_output_____count = 1
i = 0
for ele in train_dataset_batched_tf.as_numpy_iterator():
if i >= count:
break
print (ele)
i += 1(array([b"Fine. Would you have them call me? I left a message, was told it would be 2 hours for a call. Haven't heard anything yet.",
b"thanks! Y'all have some of the best customer service left in the industry.",
b"it's been 2 hrs of wait on the phone a) worst customer services b) trying to know where my suitcase Is and way MORE \xf0\x9f\x98\xa4#ANGRY",
b"I cheated on you, and I'm sorry. I'll never do it again. has given my wife and I the worst start to a honeymoon ever",
b'a $100 - totaled. Not happy. Not at all.',
b'is gettin fancy! #Mint #LieFlat Nice work on the menu #LobsterMac #BloodyMary #JetSetter http://t.co/zf5wjgtXzT',
b"when trying to check-in online, it says to call...now I've been on hold for 2 hours...what to do?",
b'.@USAirways trying to get a partner PNR, and have spent more than 1 hour on hold. I know its snowing somewhere, but this is awful',
b'not an issue but I think training & information would help. Great ppl but service needs to switch from individual to group better',
b'worst experience with you. Cancelled Flightled flight, no voucher and no luggage because "ramp was broken." No other ramps in Charlotte??',
b"I fly normally. This doesn't happen to me with them. I'll let your flyers provide their own feedback. Thank you.",
b'I guess the Kit Kat looks tasty... not going near that "sandwich."',
b"Thank y'all for being an amazing airline who knows how to treat their customers. you guys rock!",
b"yet again you disappoint. Sitting at IAD for UA3728 for 3.5hrs and you can't seem to know why the plane hasn't left Albany. #Fail",
b'once again flying AA4285, once again 60+ min delay because of mechanical issues. Perhaps you should consider maintenance?',
b'great. Looking forward to your response to my DM then',
b'...be found when he checks in. Now no info. Please DM me and help me fix this NOW. (3/3)',
b"Hey, first time flyer next week - excited! But I'm having a hard time getting my flights added to my Elevate account. Help?",
b'gate agents are now working with everyone to resolve connecting flight issues which is my concern',
b'Rebooked for tomorrow morning. Never been here - not sure what I can see before tomorrow morning!',
b'this is not a fair set up. I payed for a full seat. I should get access to a full seat. http://t.co/SbA0ARicyq',
b"customers aren't dumb. These revenue based programs will hurt everyone. Not gonna save money like you think",
b'thank you for always have the most amazing customer service! Bring on The Disney Princess Half Marathon',
b'I did see that! Working on picking up a trip or two as we type.',
b'no my concerns were not addressed',
b'please help! No bags, no way to get through to customer service since 8AM this morning! Help!!',
b' Yes. Dale at Baggage office was wonderful. But not everyone is on the same page down there... We had a 6 hour wait!',
b'appreciate it!!',
b"Issue is JFK. Pilot explained once JFK reopens we can get scheduled back there, but why can't we divert to LGA? Closer than ACY!",
b"by far the worst airline in history. I'll never ever fly your garbage again",
b"ok it's now been 7 months waiting to hear from airline. I gave them quite a bit more than the 30 days requested! Terrible service",
b"passengers seated, crew ready #WheresThePilot? Flt1088 from ORD. Hope he isn't at the bar.",
b'worst airline ever! Staff is nasty, wifi down bags are delayed due to weather?? And now the belt is broken. Selling UAL stock in AM',
b"that's what you have said for years, you are losing customers!!!!!!",
b"nope I gave up - maybe they'll deliver it",
b'And now the flight Flight Booking Problems site is totally down. Folks, what is the problem?',
b"only thing confusing me is why I lost priority boarding? I'm a mileage plus card member \xf0\x9f\x98\x94",
b'all right, but can you give me an email to write to ?',
b'thanks for the show! \xf0\x9f\x91\x8d',
b'it took ages for one snapchat story to load. one. ONE. I will demolish you',
b"we're home, you guys recovered, now we can laugh about it and the extra day in barbados. Will you open Cuba soon?",
b'negative. Done wasting time with amateurs at customer service. Thanks for at least offering.',
b'- thanks. She submitted a damaged bag complaint online...is there anything else we can do? #goodcustomerservice',
b'The delay is nothing but the personnel being so combative up to the point of saying "what\'s the hury, the plane is not leaving',
b'Thanks, she did her best. Staying the night in Dallas, new trial to Detroit via Atlanta tomorrow, assuming no Cancelled Flightlations.',
b'how can I get travel question answered quickly... Online and calling not helping with this busy day',
b'Not even on the bag status...will take actions against this company is incredible how irresponsible are with the costumer',
b'now maintenance issues with flight 5639 and more issues with passengers that will miss connections needing to get off',
b'Aww Thanks AA..DFW was on GMA up here this AM..so i understand ..Btw A.A is my Airline when im able to trv..Love you guys.:)',
b'is non existent and I will take this as far as needed.Why hide behind a corporate logo? Provide a number #tcf #useless #amateur',
b"Thanks for the reminder of a few older flights I'd taken and the easy access to add points to my new JB account! Awesome service.",
b'I Cancelled Flighted my flight. I really don\xe2\x80\x99t need this much trouble.',
b"it wasn't a delay so much as a straight Cancelled Flightlation. Weather wasn't an issue either.",
b"flt. 4567 departure time has changed five times in the last 20 minutes. Why don't you figure out a solution and announce once?",
b'I am signed up for notifications. This is the first trip I was not updated on. Not sure why this happened.',
b"Hi there, looks like my connection is delayed too so I'll make it. Thanks!",
b"How best to talk with an agent to reschedule Cancelled Flighted flight? No one answers at AA. Know it's busy, but need help.Thanks",
b"Thanks, both airlines said that it is located at AA Detroit. Also was informed that it flew with AA, which shouldn't matter.",
b'I have a flight from omaha to chicago (en route to NYC) and they are seating me and my partner separate, please fix this res# ILC0HP',
b"We're having 2 grandbabies in 2 weeks -- will travel to DC for the births. Thank you for the reasonable fares! See you Saturday!",
b'what maintenance? The flight landed from Jamaica, has to go through security then get to term 3 then cleaned then board',
b'Hidden City forces me into crappy seat even though exit row is available on the first leg. Your support cannot fix. :-(',
b'#BQONPA flight #1641 delayed from POS-MIA, missed #2214 MIA- ATL need seat on last flight to ATL.',
b"customer service failure aside, one would think you guys would care about inaccurate manifests. I'm sure TSA would."],
dtype=object), array([0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1.,
0., 0., 0., 0., 1., 0., 1., 0., 1., 0., 0., 0., 0.], dtype=float32))
print(dataset_train_tf)
print(dataset_test_tf)<TensorSliceDataset shapes: ((), ()), types: (tf.string, tf.float32)>
<TensorSliceDataset shapes: ((), ()), types: (tf.string, tf.float32)>
#VOCAB_SIZE = 1000
encoder = tf.keras.layers.TextVectorization()
#max_tokens=VOCAB_SIZE)
encoder.adapt(train_dataset_batched_tf.map(lambda text, label: text))_____no_output_____count_0 = len(train_dataset_batched_tf)
count = 0
for ds in train_dataset_batched_tf:
count += len(ds[0])
print(len(ds[0]))
count_____no_output_____encoder("hello world HELLO WORLD")[:].numpy()_____no_output_____import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])_____no_output_____vocab = np.array(encoder.get_vocabulary())
vocab[100:150]_____no_output_____for example, label in dataset_train_tf.take(1):
print('texts: ', example.numpy())
print()
print('labels: ', label.numpy())texts: b'why must you always delay my Late Flight night Orlando flights? \xf0\x9f\x92\x94'
labels: 0.0
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])_____no_output_____model.summary()Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text_vectorization (TextVect (None, None) 0
_________________________________________________________________
embedding (Embedding) (None, None, 64) 770176
_________________________________________________________________
bidirectional (Bidirectional (None, 128) 66048
_________________________________________________________________
dense (Dense) (None, 64) 8256
_________________________________________________________________
dense_1 (Dense) (None, 32) 2080
_________________________________________________________________
dense_2 (Dense) (None, 16) 528
_________________________________________________________________
dense_3 (Dense) (None, 8) 136
_________________________________________________________________
dense_4 (Dense) (None, 1) 9
=================================================================
Total params: 847,233
Trainable params: 847,233
Non-trainable params: 0
_________________________________________________________________
encoder("hello world. This is great").numpy()_____no_output_____model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)_____no_output_____history = model.fit(train_dataset_batched_tf, epochs=10,
validation_data=test_dataset_batched_tf,
validation_steps=30)Epoch 1/10
145/145 [==============================] - 20s 87ms/step - loss: 0.6000 - accuracy: 0.7973 - val_loss: 0.4479 - val_accuracy: 0.7917
Epoch 2/10
145/145 [==============================] - 10s 71ms/step - loss: 0.4006 - accuracy: 0.7973 - val_loss: 0.3862 - val_accuracy: 0.7917
Epoch 3/10
145/145 [==============================] - 10s 71ms/step - loss: 0.3055 - accuracy: 0.7973 - val_loss: 0.3105 - val_accuracy: 0.7917
Epoch 4/10
145/145 [==============================] - 10s 72ms/step - loss: 0.2383 - accuracy: 0.7973 - val_loss: 0.2937 - val_accuracy: 0.7917
Epoch 5/10
145/145 [==============================] - 10s 72ms/step - loss: 0.1840 - accuracy: 0.8438 - val_loss: 0.2728 - val_accuracy: 0.8823
Epoch 6/10
145/145 [==============================] - 10s 71ms/step - loss: 0.1276 - accuracy: 0.9469 - val_loss: 0.2823 - val_accuracy: 0.9141
Epoch 7/10
145/145 [==============================] - 10s 71ms/step - loss: 0.0915 - accuracy: 0.9742 - val_loss: 0.3264 - val_accuracy: 0.9115
Epoch 8/10
145/145 [==============================] - 10s 69ms/step - loss: 0.0703 - accuracy: 0.9816 - val_loss: 0.3343 - val_accuracy: 0.9130
Epoch 9/10
145/145 [==============================] - 10s 70ms/step - loss: 0.0537 - accuracy: 0.9874 - val_loss: 0.3435 - val_accuracy: 0.9146
Epoch 10/10
145/145 [==============================] - 10s 70ms/step - loss: 0.0408 - accuracy: 0.9900 - val_loss: 0.3936 - val_accuracy: 0.9172
test_loss, test_acc = model.evaluate(test_dataset_batched_tf)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)37/37 [==============================] - 1s 16ms/step - loss: 0.3950 - accuracy: 0.9181
Test Loss: 0.3949778378009796
Test Accuracy: 0.9181463718414307
_____no_output_____
sample_text = ('good it\'s great')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model.predict(np.array([sample_text]))
print(predictions)[[2.4241579]]
[[-3.679005]]
[[1.5192201]]
[[-1.4085932]]
[[-0.5680525]]
[[1.4155738]]
[[-3.8368037]]
[[0.36423916]]
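# Note (illustrative addition): the final Dense(1) layer has no activation and the loss
# uses from_logits=True, so model.predict() returns raw logits, not probabilities.
# To read a prediction as a probability, pass it through a sigmoid, e.g.:
print(tf.sigmoid(predictions).numpy())  # values in [0, 1]; > 0.5 leans positive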
_____no_output_____model.set_weights_____no_output_____encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()_____no_output_____encoder_new.get_config()_____no_output_____encoder_new.adapt(np.array([['hell']], dtype=np.object), batch_size=None)_____no_output_____encoder_new.set_weights(encoder.get_weights())_____no_output_____encoder("hello world").numpy()_____no_output_____encoder_new("hello world").numpy()_____no_output_____model2 = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])_____no_output_____model2.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)_____no_output_____layers = []
for layer in model.layers:
layers.append(layer.get_weights())_____no_output_____i = 0
for layer in model2.layers:
# if i == 0:
# i += 1
# continue
print(layer.get_weights()[0].dtype)
layer.set_weights(layers[i])
i += 1object
float32
float32
float32
float32
float32
float32
float32
sample_text = ('good it\'s great')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model2.predict(np.array([sample_text]))
print(predictions)[[2.4241579]]
[[-3.679005]]
[[1.5192201]]
[[-1.4085932]]
[[-0.5680525]]
[[1.4155738]]
[[-3.8368037]]
[[0.36423916]]
import json
_____no_output_____class NdarrayEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
if isinstance(obj, bytes):
return obj.decode('utf-8')
print(obj)
return json.JSONEncoder().default(self, obj)_____no_output_____layersInList = []
for layer in model.layers:
layersInList.append(layer.get_weights())_____no_output_____weightsInJson = json.dumps(layersInList, cls=NdarrayEncoder)
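# Note (illustrative): NdarrayEncoder above converts each layer's ndarray weights to nested
# lists (and byte-string vocabulary entries to str), so json.dumps can serialize the whole
# weight list to plain text for writing to disk and reloading later.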
_____no_output_____with open("weights.json", "w") as json_file:
json_file.write(weightsInJson)_____no_output_____with open("weights.json", "r") as json_file_r:
weightsInListRead = json_file_r.read()_____no_output_____weightsReadData = json.loads(weightsInListRead)_____no_output_____# def isIterable(obj):
# if hasattr(obj, '__iter__') and hasattr(obj, '__next__') and hasattr('__getitem__'):
# return True
# return False
def convertStringToBytesInObject(convertableObj):
if isinstance(convertableObj, list):
i = 0
for item in convertableObj:
if isinstance(item, str):
convertableObj[i] = item.encode()
elif isinstance(item, list):
convertStringToBytesInObject(item)
i += 1
else:
print(convertableObj)
raise Exception(" expected to be iterable ")
_____no_output_____isIterable([])_____no_output_____# convertStringToBytesInObject(weightsReadData)_____no_output_____encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()_____no_output_____encoder_new.get_config()_____no_output_____encoder_new.adapt(np.array([['hell']], dtype=np.object), batch_size=None)_____no_output_____encoder_new.set_weights(weightsReadData[0])_____no_output_____encoder_new("hello world").numpy()_____no_output_____encoder("hello world").numpy()_____no_output__________no_output_____model3 = tf.keras.Sequential([
encoder_new,
tf.keras.layers.Embedding(
input_dim=len(encoder_new.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])_____no_output_____model3.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)_____no_output_____layers2 = []
for layerWeights in weightsReadData:
layers2.append(layerWeights)
print(len(layerWeights))1
1
6
2
2
2
2
2
def convertToNdarray(obj):
if isinstance(obj, list):
return np.asarray([convertToNdarray(o) for o in obj])
else:
return obj
_____no_output_____layers2 = convertToNdarray(layers2)/usr/local/lib/python3.7/dist-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return array(a, dtype, copy=False, order=order)
layers2 = [np.array(layer, dtype=object) for layer in layers2]_____no_output_____i = 0
for layer in model3.layers:
# if i == 0:
# i += 1
# continue
print(layer.get_weights()[0].dtype)
layer.set_weights(layers2[i])
i += 1object
float32
float32
float32
float32
float32
float32
float32
sample_text = ('good it\'s great')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model3.predict(np.array([sample_text]))
print(predictions)[[2.4241579]]
[[-3.679005]]
[[1.5192201]]
[[-1.4085932]]
[[-0.5680525]]
[[1.4155738]]
[[-3.8368037]]
[[0.36423916]]
_____no_output__________no_output__________no_output__________no_output__________no_output_____len(weightsReadData)_____no_output_____len(model3.layers)_____no_output_____len(weightsReadData[2])_____no_output_____len(model3.layers[2].get_weights())_____no_output_____len(model2.layers[2].get_weights())_____no_output_____hw = b'hello world'_____no_output_____json.dumps(hw.decode('utf-8'))_____no_output__________no_output__________no_output__________no_output__________no_output__________no_output__________no_output_____tf.keras.layers.serialize(encoder)_____no_output_____encoder.get_weights()_____no_output_____encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()
_____no_output_____encoder_new.get_config()_____no_output_____encoder_new.adapt([['hell']], batch_size=None)_____no_output_____encoder_new.set_weights(encoder.get_weights())_____no_output_____# encoder_new.set_vocabulary(encoder.get_vocabulary())_____no_output_____encoder("hello world").numpy()_____no_output_____encoder_new("hello world").numpy()_____no_output_____dataset_train_batched_text = np.array_split(dataset_train['text'],len(dataset_train['text'])/BATCH_SIZE)
dataset_train_batched_class = np.array_split(dataset_train['airline_sentiment'], len(dataset_train['airline_sentiment'])/BATCH_SIZE)
dataset_test_batched_text = np.array_split(dataset_test['text'],len(dataset_test['text'])/BATCH_SIZE)
dataset_test_batched_class = np.array_split(dataset_test['airline_sentiment'], len(dataset_test['airline_sentiment'])/BATCH_SIZE)
_____no_output_____print (len(dataset_train))
print (len(dataset_test))
print (" ------------------------ ")
print (len(dataset_train_batched_text))
print (len(dataset_train_batched_class))
print (len(dataset_train_batched_text[len(dataset_train_batched_text)- 1]))
print (len(dataset_train_batched_class[len(dataset_train_batched_text)- 1]))
print (" ------------------------ ")
print (len(dataset_test_batched_text))
print (len(dataset_test_batched_class))
print (len(dataset_test_batched_text[len(dataset_test_batched_text)- 1]))
print (len(dataset_test_batched_class[len(dataset_test_batched_class)- 1]))9232
2309
------------------------
144
144
64
64
------------------------
36
36
64
64
dataset_test_batched_text_tmp = np.asarray(dataset_test_batched_text, dtype=object)
dataset_test_batched_class_tmp = np.asarray(dataset_test_batched_class, dtype=object)
dataset_train_batched_text_tmp = np.asarray(dataset_train_batched_text, dtype=object)
dataset_train_batched_class_tmp = np.asarray(dataset_train_batched_class, dtype=object)
np_dataset_test_batched_text = []
np_dataset_test_batched_class = []
np_dataset_train_batched_text = []
np_dataset_train_batched_class = []
for itr in dataset_test_batched_text_tmp:
np_dataset_test_batched_text.append(itr.to_numpy())
for itr in dataset_test_batched_class_tmp:
np_dataset_test_batched_class.append(itr.to_numpy())
for itr in dataset_train_batched_text_tmp:
np_dataset_train_batched_text.append(itr.to_numpy())
for itr in dataset_train_batched_class_tmp:
np_dataset_train_batched_class.append(itr.to_numpy())
np_dataset_test_batched_text = np.asarray(np_dataset_test_batched_text, dtype=object)
np_dataset_test_batched_class = np.asarray(np_dataset_test_batched_class, dtype=object)
np_dataset_train_batched_text = np.asarray(np_dataset_train_batched_text, dtype=object)
np_dataset_train_batched_class = np.asarray(np_dataset_train_batched_class, dtype=object)_____no_output_____np_dataset_test_batched_text[len(np_dataset_test_batched_text)- 1][0]_____no_output_____tf_dataset_test_batched_text = tf.data.Dataset.from_tensor_slices(np_dataset_test_batched_text)
tf_dataset_test_batched_text
_____no_output_____VOCAB_SIZE = 1000
encoder = tf.keras.layers.TextVectorization()
#max_tokens=VOCAB_SIZE)
encoder.adapt(np_dataset_train_batched_text)_____no_output_____import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
_____no_output_____dataset_2, info = tfds.load('imdb_reviews', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset_2['train'], dataset_2['test']
train_dataset.element_spec_____no_output__________no_output__________no_output_____for example, label in train_dataset.take(1):
print('text: ', example.numpy())
print('label: ', label.numpy())
text: b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it."
label: 0
BUFFER_SIZE = 10000
BATCH_SIZE = 64
_____no_output_____train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset = test_dataset.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
_____no_output_____train_dataset.as_numpy_iterator()_____no_output_____for example, label in train_dataset.take(1):
print('texts: ', example.numpy()[:3])
print()
print('labels: ', label.numpy()[:])
texts: [b"Eleven different Film Makers from different parts of the world are assembled in this film to present their views and ideas about the WTC attack. This is one of the best effort you will see in any Film. Films like this are rarely made and appreciated. This film tries to touch every possible core of WTC. Here are some of the most important stories from the film that makes this film so unique.<br /><br />There is the story from Samira Makhmalbaf (Iran) where somewhere in Iran people are preparing for the attacks from America. There a teacher is trying to educate her students by informing them about Innocent People being killed in WTC massacre. Then comes a story from Youssef Chahine (Egypt) where a Film Maker comes across face-to-face conversation with a Dead Soldier in the WTC attack and a Dead Hard Core Terrorist who was involved in WTC attack. Then we see a story from Idrissa Ouedraogo (Burkina Faso) where a group of Five Innocent children's sees Osama Bin Laden and plans to kidnap him and win the reward money from America. Then we see the story from Alejandro Gozalez Inarritu (Mexico) where you see a Black Screen and slowly you see the real footage of WTC buildings coming down. And the people who are stuck in the building are jumping out of it to save their lives. The other most important story is from Mira Nair (India) where a mother is struggling to get respect for her Dead Son whose name is falsely trapped in WTC massacre! After September 11 attack, Our heart beat automatically starts pumping if we hear two names anywhere in the world.. First is World Trade Centre and the second is Osama! This film totally changes our perception and makes a strong point by claiming something more to it.<br /><br />I will definitely recommend this movie to everyone who loves to have such kinds of Home DVD Collection. Definitely worth every penny you spend. But please don't expect anything more apart from Films in this DVD. There is of course Filmographies of the Film Makers but No Extra Features."
b"Interesting to read comments by viewers regarding Omega Code... many of the overwhelmingly positive comments were lifted almost word for word from TBN broadcasts... the movie looks as if it were made to go directly to video, to be stocked besides the three-part rapture series that was done by some other religious group in the 70s.. dont remember it? You wont remember this one either in a year or two. This is the first movie I have ever seen where it was implied that it was your religious duty to go to it and buy as many tickets as possible to save souls... very shameful... this just goes to show that if you are a televangelist's son, you too can play high-roller Hollywood producer with lil ole ladies tithe money..."
b"While this film certainly does possess the stench of a bad film, it's surprisingly watchable on several levels. First, for old movie fans, it's interesting to see the leading role played by Dean Jagger (no relation to Mick). While Jagger later went on to a very respectable role as a supporting actor (even garnering the Oscar in this category for 12 O'CLOCK HIGH), here his performance is truly unique since he actually has a full head of hair (I never saw him this way before) and because he was by far the worst actor in the film. This film just goes to show that if an actor cannot act in his earlier films doesn't mean he can't eventually learn to be a great actor. Another good example of this phenomenon is Paul Newman, whose first movie (THE SILVER CHALICE) is considered one of the worst films of the 1950s.<br /><br />A second reason to watch the film is the shear cheesiness of it all. The writing is bad, the acting is bad and the special effects are bad. For example, when Jagger and an unnamed Cambodian are wading through the water, it's obvious they are really just walking in place and the background is poorly projected behind them. Plus, once they leave the water, their costumes are 100% dry!!! Horrid continuity and mindlessly bad dialog abounds throughout the film--so much so that it's hard to imagine why they didn't ask Bela Lugosi or George Zucco to star in the film--since both of them starred in many grade-z horror films. In many ways, this would be a perfect example for a film class on how NOT to make a film.<br /><br />So, while giving it a 3 is probably a bit over-generous, it's fun to laugh at and short so it's worth a look for bad film fans."]
labels: [1 0 0 0 0 1 1 0 1 0 0 0 0 1 1 1 1 1 0 1 0 1 1 1 1 0 1 1 1 0 1 0 1 0 1 0 1
1 1 0 0 1 0 0 1 0 1 0 1 0 1 1 0 1 1 0 1 1 1 0 0 1 1 1]
VOCAB_SIZE = 1000
encoder = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=VOCAB_SIZE)
encoder.adapt(train_dataset.map(lambda text, label: text))_____no_output_____vocab = np.array(encoder.get_vocabulary())
vocab[:]_____no_output_____encoded_example = encoder(example)[:3].numpy()
encoded_example_____no_output_____for n in range(3):
print("Original: ", example[n].numpy())
print("Round-trip: ", " ".join(vocab[encoded_example[n]]))
print()
Original: b"Eleven different Film Makers from different parts of the world are assembled in this film to present their views and ideas about the WTC attack. This is one of the best effort you will see in any Film. Films like this are rarely made and appreciated. This film tries to touch every possible core of WTC. Here are some of the most important stories from the film that makes this film so unique.<br /><br />There is the story from Samira Makhmalbaf (Iran) where somewhere in Iran people are preparing for the attacks from America. There a teacher is trying to educate her students by informing them about Innocent People being killed in WTC massacre. Then comes a story from Youssef Chahine (Egypt) where a Film Maker comes across face-to-face conversation with a Dead Soldier in the WTC attack and a Dead Hard Core Terrorist who was involved in WTC attack. Then we see a story from Idrissa Ouedraogo (Burkina Faso) where a group of Five Innocent children's sees Osama Bin Laden and plans to kidnap him and win the reward money from America. Then we see the story from Alejandro Gozalez Inarritu (Mexico) where you see a Black Screen and slowly you see the real footage of WTC buildings coming down. And the people who are stuck in the building are jumping out of it to save their lives. The other most important story is from Mira Nair (India) where a mother is struggling to get respect for her Dead Son whose name is falsely trapped in WTC massacre! After September 11 attack, Our heart beat automatically starts pumping if we hear two names anywhere in the world.. First is World Trade Centre and the second is Osama! This film totally changes our perception and makes a strong point by claiming something more to it.<br /><br />I will definitely recommend this movie to everyone who loves to have such kinds of Home DVD Collection. Definitely worth every penny you spend. But please don't expect anything more apart from Films in this DVD. There is of course Filmographies of the Film Makers but No Extra Features."
Round-trip: [UNK] different film [UNK] from different parts of the world are [UNK] in this film to present their [UNK] and ideas about the [UNK] [UNK] this is one of the best effort you will see in any film films like this are [UNK] made and [UNK] this film tries to [UNK] every possible [UNK] of [UNK] here are some of the most important stories from the film that makes this film so [UNK] br there is the story from [UNK] [UNK] [UNK] where [UNK] in [UNK] people are [UNK] for the [UNK] from america there a [UNK] is trying to [UNK] her [UNK] by [UNK] them about [UNK] people being killed in [UNK] [UNK] then comes a story from [UNK] [UNK] [UNK] where a film [UNK] comes across [UNK] [UNK] with a dead [UNK] in the [UNK] [UNK] and a dead hard [UNK] [UNK] who was involved in [UNK] [UNK] then we see a story from [UNK] [UNK] [UNK] [UNK] where a group of five [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] and [UNK] to [UNK] him and [UNK] the [UNK] money from america then we see the story from [UNK] [UNK] [UNK] [UNK] where you see a black screen and [UNK] you see the real footage of [UNK] [UNK] coming down and the people who are [UNK] in the [UNK] are [UNK] out of it to save their lives the other most important story is from [UNK] [UNK] [UNK] where a mother is [UNK] to get [UNK] for her dead son whose name is [UNK] [UNK] in [UNK] [UNK] after [UNK] [UNK] [UNK] our heart [UNK] [UNK] starts [UNK] if we hear two [UNK] [UNK] in the world first is world [UNK] [UNK] and the second is [UNK] this film totally [UNK] our [UNK] and makes a strong point by [UNK] something more to itbr br i will definitely recommend this movie to everyone who [UNK] to have such [UNK] of home dvd [UNK] definitely worth every [UNK] you [UNK] but please dont expect anything more apart from films in this dvd there is of course [UNK] of the film [UNK] but no [UNK] features
Original: b"Interesting to read comments by viewers regarding Omega Code... many of the overwhelmingly positive comments were lifted almost word for word from TBN broadcasts... the movie looks as if it were made to go directly to video, to be stocked besides the three-part rapture series that was done by some other religious group in the 70s.. dont remember it? You wont remember this one either in a year or two. This is the first movie I have ever seen where it was implied that it was your religious duty to go to it and buy as many tickets as possible to save souls... very shameful... this just goes to show that if you are a televangelist's son, you too can play high-roller Hollywood producer with lil ole ladies tithe money..."
Round-trip: interesting to read comments by viewers [UNK] [UNK] [UNK] many of the [UNK] [UNK] comments were [UNK] almost word for word from [UNK] [UNK] the movie looks as if it were made to go [UNK] to video to be [UNK] [UNK] the [UNK] [UNK] series that was done by some other [UNK] group in the 70s dont remember it you wont remember this one either in a year or two this is the first movie i have ever seen where it was [UNK] that it was your [UNK] [UNK] to go to it and buy as many [UNK] as possible to save [UNK] very [UNK] this just goes to show that if you are a [UNK] son you too can play [UNK] hollywood [UNK] with [UNK] [UNK] [UNK] [UNK] money
Original: b"While this film certainly does possess the stench of a bad film, it's surprisingly watchable on several levels. First, for old movie fans, it's interesting to see the leading role played by Dean Jagger (no relation to Mick). While Jagger later went on to a very respectable role as a supporting actor (even garnering the Oscar in this category for 12 O'CLOCK HIGH), here his performance is truly unique since he actually has a full head of hair (I never saw him this way before) and because he was by far the worst actor in the film. This film just goes to show that if an actor cannot act in his earlier films doesn't mean he can't eventually learn to be a great actor. Another good example of this phenomenon is Paul Newman, whose first movie (THE SILVER CHALICE) is considered one of the worst films of the 1950s.<br /><br />A second reason to watch the film is the shear cheesiness of it all. The writing is bad, the acting is bad and the special effects are bad. For example, when Jagger and an unnamed Cambodian are wading through the water, it's obvious they are really just walking in place and the background is poorly projected behind them. Plus, once they leave the water, their costumes are 100% dry!!! Horrid continuity and mindlessly bad dialog abounds throughout the film--so much so that it's hard to imagine why they didn't ask Bela Lugosi or George Zucco to star in the film--since both of them starred in many grade-z horror films. In many ways, this would be a perfect example for a film class on how NOT to make a film.<br /><br />So, while giving it a 3 is probably a bit over-generous, it's fun to laugh at and short so it's worth a look for bad film fans."
Round-trip: while this film certainly does [UNK] the [UNK] of a bad film its [UNK] [UNK] on several [UNK] first for old movie fans its interesting to see the leading role played by [UNK] [UNK] no [UNK] to [UNK] while [UNK] later went on to a very [UNK] role as a supporting actor even [UNK] the oscar in this [UNK] for [UNK] [UNK] high here his performance is truly unique since he actually has a full head of [UNK] i never saw him this way before and because he was by far the worst actor in the film this film just goes to show that if an actor cannot act in his earlier films doesnt mean he cant eventually learn to be a great actor another good example of this [UNK] is paul [UNK] whose first movie the [UNK] [UNK] is [UNK] one of the worst films of the [UNK] br a second reason to watch the film is the [UNK] [UNK] of it all the writing is bad the acting is bad and the special effects are bad for example when [UNK] and an [UNK] [UNK] are [UNK] through the [UNK] its obvious they are really just [UNK] in place and the background is poorly [UNK] behind them plus once they leave the [UNK] their [UNK] are [UNK] [UNK] [UNK] [UNK] and [UNK] bad dialog [UNK] throughout the [UNK] much so that its hard to imagine why they didnt ask [UNK] [UNK] or george [UNK] to star in the [UNK] both of them [UNK] in many [UNK] horror films in many ways this would be a perfect example for a film class on how not to make a filmbr br so while giving it a 3 is probably a bit [UNK] its fun to laugh at and short so its worth a look for bad film fans
_____no_output_____import os
model2 = None
print(os.listdir('/content/drive/MyDrive/sentiment/'))
if len(os.listdir('/content/drive/MyDrive/sentiment/')) == 0:
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
print ("created model")
else:
model2 = tf.keras.models.load_model ("/content/drive/MyDrive/sentiment/")
print ("loaded model")['variables', 'assets', 'saved_model.pb', 'keras_metadata.pb']
if model2 is not None:
model = model2_____no_output_____print([layer.supports_masking for layer in model.layers])
[False, True, True, True, True]
# predict on a sample text without padding.
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model.predict(np.array([sample_text]))
print(predictions[0])
WARNING:tensorflow:6 out of the last 10 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fa5d69c5320> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
# predict on a sample text with padding
padding = "the " * 2000
predictions = model.predict(np.array([sample_text, padding]))
print(predictions[0])
[0.5011921]
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)
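# note: the final Dense layer above uses a sigmoid activation, so the loss is
# built with from_logits=False; if the last layer returned raw logits instead
# (no activation), from_logits=True would be the matching setting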
_____no_output_____history = model.fit(train_dataset, epochs=5,
validation_data=test_dataset,
validation_steps=30)
Epoch 1/5
391/391 [==============================] - 112s 285ms/step - loss: 0.6542 - accuracy: 0.6130 - val_loss: 0.5631 - val_accuracy: 0.7479
Epoch 2/5
391/391 [==============================] - 110s 279ms/step - loss: 0.4339 - accuracy: 0.8212 - val_loss: 0.4028 - val_accuracy: 0.8385
Epoch 3/5
391/391 [==============================] - 110s 280ms/step - loss: 0.3584 - accuracy: 0.8520 - val_loss: 0.3456 - val_accuracy: 0.8615
Epoch 4/5
391/391 [==============================] - 108s 274ms/step - loss: 0.3338 - accuracy: 0.8620 - val_loss: 0.3362 - val_accuracy: 0.8615
Epoch 5/5
391/391 [==============================] - 107s 271ms/step - loss: 0.3217 - accuracy: 0.8693 - val_loss: 0.3322 - val_accuracy: 0.8604
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
391/391 [==============================] - 56s 143ms/step - loss: 0.3299 - accuracy: 0.8616
Test Loss: 0.3298826515674591
Test Accuracy: 0.8615999817848206
# predict on a sample text without padding.
sample_text = ('good is great')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad equals very bad. Worse')
predictions = model.predict(np.array([sample_text]))
print(predictions)
[[1.9038882]]
[[-1.7162021]]
x = tfds.as_numpy(test_dataset)_____no_output_____for ele in train_dataset.as_numpy_iterator():
print (ele)
print ("---------------------")_____no_output_____?plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
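# plot_graphs is assumed to be defined earlier in the notebook (as in the
# TensorFlow text-classification tutorial this section follows); if it is not,
# a minimal helper along these lines would work -- define it before the calls above:
def plot_graphs(history, metric):
    # plot the training curve and the matching validation curve for one metric
    plt.plot(history.history[metric])
    plt.plot(history.history['val_' + metric])
    plt.xlabel('Epochs')
    plt.ylabel(metric)
    plt.legend([metric, 'val_' + metric])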
_____no_output_____m = tf.keras.metrics.Accuracy()
m.update_state([[0], [2], [3], [4]], [[0], [2], [3], [4]])
m.result().numpy()_____no_output_____import copy
vicab2 = copy.deepcopy(vocab)_____no_output_____vicab2.sort()_____no_output_____vicab2_____no_output_____
lst = []
def func(text, label):
lst.append([text, label])
return text, label
test_dataset.map(func)_____no_output_____lst_____no_output_____for ele in test_dataset.as_numpy_iterator():
print (ele)_____no_output_____tf.keras.models.save_model(model=model, filepath="/content/drive/MyDrive/sentiment/")WARNING:absl:Found untraced functions such as lstm_cell_4_layer_call_fn, lstm_cell_4_layer_call_and_return_conditional_losses, lstm_cell_5_layer_call_fn, lstm_cell_5_layer_call_and_return_conditional_losses, lstm_cell_4_layer_call_fn while saving (showing 5 of 10). These functions will not be directly callable after loading.
tf.saved_model.save(obj=model, export_dir="/content/drive/MyDrive/sentiment")WARNING:absl:Found untraced functions such as lstm_cell_19_layer_call_fn, lstm_cell_19_layer_call_and_return_conditional_losses, lstm_cell_20_layer_call_fn, lstm_cell_20_layer_call_and_return_conditional_losses, lstm_cell_19_layer_call_fn while saving (showing 5 of 10). These functions will not be directly callable after loading.
model.save("/content/drive/MyDrive/sentiment/model", save_format="tf")WARNING:absl:Found untraced functions such as lstm_cell_19_layer_call_fn, lstm_cell_19_layer_call_and_return_conditional_losses, lstm_cell_20_layer_call_fn, lstm_cell_20_layer_call_and_return_conditional_losses, lstm_cell_19_layer_call_fn while saving (showing 5 of 10). These functions will not be directly callable after loading.
for layer in model.layers: print(layer.get_config(), layer.get_weights())_____no_output__________no_output__________no_output_____input_array = np.random.randint(len(encoder.get_vocabulary()), size=(3, 1))
model_temp = tf.keras.Sequential()
model_temp.add(encoder)
model_temp.add(tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True))
model_temp.compile('rmsprop', 'mse')
# output_array = model_temp.predict("hello world this is great!")
# print(output_array.shape)
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model_temp.predict(np.array([sample_text]))
print(len(predictions))
print(len(predictions[0]))
print(len(predictions[0][0]))
# The model will take as input an integer matrix of size (batch,
# input_length), and the largest integer (i.e. word index) in the input
# should be no larger than 999 (vocabulary size).
# Now model.output_shape is (None, 10, 64), where `None` is the batch
# dimension.
# input_array = np.random.randint(900, size=(3, 10))
# model_temp.compile('rmsprop', 'mse')
# output_array = model_temp.predict(input_array)
# print(output_array.shape)
1
19
64
print (output_array[0][0])[ 0.0051492 -0.01514421 0.0233087 -0.03882884 0.01593846]
input_array[0][0]_____no_output__________no_output_____
</code>
| {
"repository": "sreramk1/sentiment-analysis",
"path": "sentiment_analysis_experiment/Sentiment_analysis_experiment_1.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 233193,
"hexsha": "d05d5ecb85d3bd62468cd9d2a1e419df37668a99",
"max_line_length": 26278,
"avg_line_length": 60.8859007833,
"alphanum_fraction": 0.6008370749
} |
# Notebook from pritishyuvraj/profit-from-stock
Path: ranking_stocks_by_category.ipynb
<code>
import yfinance as yf
import pandas as pd
import csv_____no_output_____# Address to folders
stock_info_directory = "/Users/pyuvraj/CCPP/data_for_profit_from_stock/all_stocks_historical_prices/stocks"
ranked_growth_stocks = stock_info_directory + "/ranked_stock_prices.csv"_____no_output_____msft = yf.Ticker("MSFT")_____no_output_____print(msft)yfinance.Ticker object <MSFT>
dir(msft)_____no_output_____msft.recommendations_____no_output_____msft.info_____no_output_____msft.recommendations[:-5]_____no_output_____all_growth_stocks = pd.read_csv(ranked_growth_stocks)_____no_output_____print(all_growth_stocks) stock_name stock_growth
0 CEI.csv -3.779877e+02
1 INPX.csv -3.609980e+02
2 CHFS.csv -3.583176e+02
3 TRNX.csv -3.518619e+02
4 SLS.csv -3.515691e+02
... ... ...
4464 TT.csv 7.229298e+04
4465 WEI.csv 1.167866e+05
4466 OSW.csv 1.195446e+05
4467 JPT.csv 2.849168e+05
4468 RCP.csv 1.263946e+06
[4469 rows x 2 columns]
only_positive_growth_stocks = all_growth_stocks.drop(all_growth_stocks[all_growth_stocks.stock_growth < 1].index)_____no_output_____print(only_positive_growth_stocks) stock_name stock_growth
2176 LARK.csv 1.035300e+00
2177 EHI.csv 1.042683e+00
2178 TVE.csv 1.047389e+00
2179 INBKL.csv 1.099508e+00
2180 GYC.csv 1.166316e+00
... ... ...
4464 TT.csv 7.229298e+04
4465 WEI.csv 1.167866e+05
4466 OSW.csv 1.195446e+05
4467 JPT.csv 2.849168e+05
4468 RCP.csv 1.263946e+06
[2293 rows x 2 columns]
stock_category = []
for index, row in only_positive_growth_stocks.iterrows():
# if index > 2181: break
print(row.stock_name, row.stock_growth)
    stock_name = row.stock_name[:-4]  # strip the '.csv' extension to get the ticker symbol
print(stock_name)
try:
stock_object = yf.Ticker(stock_name)
print(stock_object.info['sector'])
stock_category.append([row.stock_name, row.stock_growth, stock_object.info['sector']])
    except Exception:
        # skip tickers that yfinance cannot resolve or that have no 'sector' field
        pass
print(stock_category)
with open(stock_info_directory + "/category_wise_ranked_growth_stocks.csv", "w", newline="") as f:
writer = csv.writer(f)
writer.writerows(stock_category)LARK.csv 1.0352997848863992
LARK
Financial Services
EHI.csv 1.0426829974356742
EHI
Financial Services
TVE.csv 1.0473885985452114
TVE
INBKL.csv 1.099507602004044
INBKL
GYC.csv 1.1663162969851673
GYC
CVX.csv 1.1675958503527184
CVX
Energy
TGH.csv 1.1684064211146443
TGH
Industrials
SYF.csv 1.1927655935143535
SYF
Financial Services
MSD.csv 1.1978174803030346
MSD
Financial Services
USIO.csv 1.2177710564793074
USIO
Technology
KMI.csv 1.221811767553656
KMI
Energy
PFO.csv 1.2277742529973477
PFO
Financial Services
FLEX.csv 1.2381421209592425
FLEX
Technology
ADM.csv 1.257204445971973
ADM
Consumer Defensive
PMO.csv 1.2598701507659789
PMO
Financial Services
BGX.csv 1.275758242318708
BGX
Financial Services
HTA.csv 1.279409338649769
HTA
Real Estate
MXL.csv 1.2933283435100371
MXL
Technology
SGU.csv 1.328476855761373
SGU
Energy
CTAA.csv 1.3313899856090712
CTAA
AJX.csv 1.3482907745138863
AJX
Real Estate
GLU.csv 1.3588154428615962
GLU
Financial Services
EAB.csv 1.3678261511041927
EAB
SMFG.csv 1.3738055696216591
SMFG
Financial Services
PBB.csv 1.4117822623141905
PBB
HYI.csv 1.4702286475225967
HYI
Financial Services
BWA.csv 1.4709704827327528
BWA
Consumer Cyclical
NWS.csv 1.5317006141021183
NWS
OLEM.csv 1.5809145672754248
OLEM
NXGN.csv 1.5965166619039481
NXGN
Healthcare
JPS.csv 1.6098280899558604
JPS
Financial Services
KIQ.csv 1.6163687112554788
KIQ
Industrials
USAS.csv 1.6601623396012766
USAS
Basic Materials
JAZZ.csv 1.6618074974285353
JAZZ
Healthcare
HTH.csv 1.6733352255538887
HTH
Financial Services
ASRVP.csv 1.6944173128846307
ASRVP
TGP.csv 1.7004465933874788
TGP
Energy
AEGN.csv 1.705463819824796
AEGN
HYT.csv 1.7276540136824692
HYT
Financial Services
WSBC.csv 1.7375871952388842
WSBC
Financial Services
FSB.csv 1.7440115580680278
FSB
DSX.csv 1.7718203315905114
DSX
Industrials
BOKFL.csv 1.7762719274390166
BOKFL
MUA.csv 1.8006499954243322
MUA
Financial Services
DTJ.csv 1.8174229651440128
DTJ
DTUS.csv 1.8500431302977864
DTUS
ZNH.csv 1.8504595287165413
ZNH
Industrials
SYPR.csv 1.8517480568668088
SYPR
Consumer Cyclical
INBK.csv 1.905453818745552
INBK
Financial Services
ABM.csv 1.905716340376496
ABM
Industrials
BZM.csv 1.93808411495979
BZM
WIW.csv 1.9448591210455528
WIW
Financial Services
BP.csv 1.9752660604813173
BP
Energy
ELU.csv 1.9937522349980312
ELU
CPIX.csv 2.003631820325481
CPIX
Healthcare
WLKP.csv 2.0809908930822503
WLKP
Basic Materials
FLC.csv 2.081027812214068
FLC
Financial Services
NID.csv 2.1275706656805373
NID
Financial Services
CDXC.csv 2.1516285732767604
CDXC
Healthcare
GGT.csv 2.1664004003042443
GGT
Financial Services
ELSE.csv 2.1819045677503235
ELSE
Technology
BHC.csv 2.219508999154135
BHC
Healthcare
AG.csv 2.223349479072735
AG
Basic Materials
NUV.csv 2.2598609753968413
NUV
Financial Services
GFY.csv 2.2805118391649177
GFY
SBGI.csv 2.300359269549091
SBGI
Communication Services
JHS.csv 2.3107076895852727
JHS
Financial Services
SIMO.csv 2.325167562614656
SIMO
Technology
ORRF.csv 2.386917857287916
ORRF
Financial Services
DEX.csv 2.3893860138119694
DEX
Financial Services
JRI.csv 2.4332198261799918
JRI
Financial Services
TVC.csv 2.4825240069897307
TVC
GMTA.csv 2.493279896726143
GMTA
PMM.csv 2.520446361006881
PMM
Financial Services
WNC.csv 2.589482651015356
WNC
Industrials
PAM.csv 2.6032464579540147
PAM
Utilities
SMCI.csv 2.6193105114686333
SMCI
Technology
VMI.csv 2.6357695076086616
VMI
Industrials
TPCO.csv 2.6388057480580045
TPCO
OXM.csv 2.643381322347416
OXM
Consumer Cyclical
SUNS.csv 2.6518871600156957
SUNS
Financial Services
RFP.csv 2.6734368779166573
RFP
Basic Materials
PINC.csv 2.7176534039384372
PINC
Healthcare
AAT.csv 2.7278768399630757
AAT
Real Estate
MNE.csv 2.7798844136521232
MNE
JHI.csv 2.799214683909629
JHI
Financial Services
PFD.csv 2.8167885424502384
PFD
Financial Services
PICO.csv 2.824212262037337
PICO
SENEB.csv 2.848365553902454
SENEB
Consumer Defensive
PARR.csv 2.8581830098622985
PARR
Energy
PKX.csv 2.9225857585648285
PKX
Basic Materials
LMHB.csv 2.928692836923898
LMHB
FRD.csv 2.9297101215122154
FRD
Basic Materials
FMBI.csv 3.0002742139367893
FMBI
Financial Services
HTLD.csv 3.015795144351089
HTLD
Industrials
NWSA.csv 3.0649344301948864
NWSA
Communication Services
UNB.csv 3.1251745031191795
UNB
Financial Services
GSK.csv 3.1301353764570816
GSK
Healthcare
BTA.csv 3.2099803584940303
BTA
Financial Services
DCOM.csv 3.2237295899710268
DCOM
Financial Services
UMPQ.csv 3.2249928030922788
UMPQ
Financial Services
RJN.csv 3.2572768315335696
RJN
PIM.csv 3.257343585563268
PIM
Financial Services
UCI.csv 3.264420397702024
UCI
SVT.csv 3.3102342993436142
SVT
Industrials
SJI.csv 3.3332323854005548
SJI
Utilities
GBAB.csv 3.3789640532545424
GBAB
Financial Services
BCM.csv 3.4027291794944503
BCM
LFC.csv 3.4713902432956534
LFC
Financial Services
DISCA.csv 3.484430064210372
DISCA
Communication Services
TZOO.csv 3.4891760824864235
TZOO
Consumer Cyclical
WVVIP.csv 3.506149754844746
WVVIP
Consumer Defensive
CHY.csv 3.5230147962566014
CHY
Financial Services
FSLR.csv 3.542697720413223
FSLR
Technology
DSL.csv 3.589690885992276
DSL
Financial Services
RQI.csv 3.6567923691928734
RQI
Financial Services
INSI.csv 3.6836479351954567
INSI
Financial Services
CYCCP.csv 3.72564150168985
CYCCP
Healthcare
LDP.csv 3.736844012379184
LDP
Financial Services
ARDC.csv 3.750583191007493
ARDC
Financial Services
IMKTA.csv 3.755412243985468
IMKTA
Consumer Defensive
KOP.csv 3.7579318768424006
KOP
Basic Materials
EXFO.csv 3.761356406345893
EXFO
Technology
PRCP.csv 3.8110632389703927
PRCP
HTGC.csv 3.817950584657721
HTGC
Financial Services
POWL.csv 3.8279555961117686
POWL
Industrials
DLHC.csv 3.881554836279896
DLHC
Industrials
BTT.csv 3.9396466467651168
BTT
Financial Services
PBCTP.csv 3.9553775650213066
PBCTP
Financial Services
PBF.csv 3.987280890262863
PBF
Energy
NTCT.csv 3.989045084141134
NTCT
Technology
CRK.csv 4.020263865120649
CRK
Energy
MPV.csv 4.043496164283688
MPV
Financial Services
GRX.csv 4.074433015805406
GRX
Financial Services
VLY.csv 4.155023321829038
VLY
Financial Services
NIQ.csv 4.170793275495002
NIQ
Financial Services
UEC.csv 4.227227377074058
UEC
Energy
UHS.csv 4.235639812762925
UHS
Healthcare
CIK.csv 4.248757853722679
CIK
Financial Services
NZF.csv 4.260581639283637
NZF
Financial Services
IBKC.csv 4.277852751432004
IBKC
AMS.csv 4.341598322954482
AMS
Healthcare
NEN.csv 4.348670444120109
NEN
Real Estate
CNTY.csv 4.358350626043077
CNTY
Consumer Cyclical
OFC.csv 4.381531672870423
OFC
Real Estate
AVK.csv 4.40429861884991
AVK
Financial Services
APAM.csv 4.421733313833292
APAM
Financial Services
VIV.csv 4.436193526238763
VIV
Communication Services
NHS.csv 4.44754161308774
NHS
Financial Services
ETX.csv 4.551747543187697
ETX
Financial Services
PDT.csv 4.5768939064377605
PDT
Financial Services
TNAV.csv 4.60436602049295
TNAV
SCD.csv 4.608498683622844
SCD
Financial Services
DMB.csv 4.618282454904291
DMB
Financial Services
GBLI.csv 4.631373576815909
GBLI
Financial Services
GGZ.csv 4.635417561082144
GGZ
Financial Services
BNS.csv 4.672158808163892
BNS
Financial Services
TR.csv 4.701266584625041
TR
Consumer Defensive
CHRW.csv 4.710972341545434
CHRW
Industrials
VRAY.csv 4.737820976391141
VRAY
Healthcare
DLTR.csv 4.854123450467885
DLTR
Consumer Defensive
PSX.csv 4.884613933903744
PSX
Energy
PNFP.csv 4.960472077803022
PNFP
Financial Services
NXQ.csv 4.971206684444355
NXQ
Financial Services
TFII.csv 5.0033925861350514
TFII
Industrials
SNFCA.csv 5.015829969299882
SNFCA
Financial Services
HTBK.csv 5.021812840671141
HTBK
Financial Services
DRUA.csv 5.026280058285726
DRUA
ACH.csv 5.05030381768302
ACH
Basic Materials
WIA.csv 5.129418188397389
WIA
Financial Services
FORR.csv 5.162912587998352
FORR
Industrials
SSB.csv 5.171189757704322
SSB
Financial Services
JGH.csv 5.183511222388411
JGH
Financial Services
HSTM.csv 5.2073039758847335
HSTM
Healthcare
OLP.csv 5.220889978212906
OLP
Real Estate
UCIB.csv 5.239036023341546
UCIB
FORK.csv 5.243069361461348
FORK
CTHR.csv 5.247231019513375
CTHR
Consumer Cyclical
VBF.csv 5.269588126870099
VBF
Financial Services
CUBE.csv 5.318697276139549
CUBE
Real Estate
PRGX.csv 5.349438767679182
PRGX
ERF.csv 5.35886004711972
ERF
Energy
FUND.csv 5.406978584165662
FUND
Financial Services
WEA.csv 5.422683195311691
WEA
Financial Services
RTIX.csv 5.425732501685809
RTIX
NVG.csv 5.4436192603139695
NVG
</code>
| {
"repository": "pritishyuvraj/profit-from-stock",
"path": "ranking_stocks_by_category.ipynb",
"matched_keywords": [
"bwa"
],
"stars": null,
"size": 75566,
"hexsha": "d05d88b1114f7fdd1c97490c839464700e6b5c5e",
"max_line_length": 2016,
"avg_line_length": 28.4724943482,
"alphanum_fraction": 0.5119101183
} |
# Notebook from Jaume-JCI/hpsearch
Path: nbs/examples/complex_dummy_experiment_manager.ipynb
<code>
#hide
#default_exp examples.complex_dummy_experiment_manager
from nbdev.showdoc import *
from block_types.utils.nbdev_utils import nbdev_setup, TestRunner
nbdev_setup ()
tst = TestRunner (targets=['dummy'])_____no_output_____
</code>
# Complex Dummy Experiment Manager
> Dummy experiment manager with features that allow additional functionality_____no_output_____
<code>
#export
from hpsearch.examples.dummy_experiment_manager import DummyExperimentManager, FakeModel
import hpsearch
import os
import shutil
import os
import hpsearch.examples.dummy_experiment_manager as dummy_em
from hpsearch.visualization import plot_utils _____no_output_____#for tests
import pytest
from block_types.utils.nbdev_utils import md_____no_output_____
</code>
## ComplexDummyExperimentManager_____no_output_____
<code>
#export
class ComplexDummyExperimentManager (DummyExperimentManager):
def __init__ (self, model_file_name='model_weights.pk', **kwargs):
super().__init__ (model_file_name=model_file_name, **kwargs)
self.raise_error_if_run = False
def run_experiment (self, parameters={}, path_results='./results'):
# useful for testing: in some cases the experiment manager should not call run_experiment
if self.raise_error_if_run:
raise RuntimeError ('run_experiment should not be called')
# extract hyper-parameters used by our model. All the parameters have default values if they are not passed.
offset = parameters.get('offset', 0.5) # default value: 0.5
rate = parameters.get('rate', 0.01) # default value: 0.01
epochs = parameters.get('epochs', 10) # default value: 10
noise = parameters.get('noise', 0.0)
if parameters.get('actual_epochs') is not None:
epochs = parameters.get('actual_epochs')
# other parameters that do not form part of our experiment definition
# changing the values of these other parameters, does not make the ID of the experiment change
verbose = parameters.get('verbose', True)
# build model with given hyper-parameters
model = FakeModel (offset=offset, rate=rate, epochs=epochs, noise = noise, verbose=verbose)
# load training, validation and test data (fake step)
model.load_data()
# start from previous experiment if indicated by parameters
path_results_previous_experiment = parameters.get('prev_path_results')
if path_results_previous_experiment is not None:
model.load_model_and_history (path_results_previous_experiment)
# fit model with training data
model.fit ()
# save model weights and evolution of accuracy metric across epochs
model.save_model_and_history(path_results)
# simulate ctrl-c
if parameters.get ('halt', False):
raise KeyboardInterrupt ('stopped')
# evaluate model with validation and test data
validation_accuracy, test_accuracy = model.score()
# store model
self.model = model
# the function returns a dictionary with keys corresponding to the names of each metric.
# We return result on validation and test set in this example
dict_results = dict (validation_accuracy = validation_accuracy,
test_accuracy = test_accuracy)
return dict_results
_____no_output_____
</code>
### Usage_____no_output_____
<code>
#exports tests.examples.test_complex_dummy_experiment_manager
def test_complex_dummy_experiment_manager ():
#em = generate_data ('complex_dummy_experiment_manager')
md (
'''
Extend previous experiment by using a larger number of epochs
        We see how to create an experiment that is the same as a previous experiment,
only increasing the number of epochs.
1.a. For test purposes, we first run the full number of epochs, 30, take note of the accuracy,
and remove the experiment
'''
)
em = ComplexDummyExperimentManager (path_experiments='test_complex_dummy_experiment_manager',
verbose=0)
em.create_experiment_and_run (parameters = {'epochs': 30});
reference_accuracy = em.model.accuracy
reference_weight = em.model.weight
from hpsearch.config.hpconfig import get_path_experiments
import os
import pandas as pd
path_experiments = get_path_experiments ()
print (f'experiments folders: {os.listdir(f"{path_experiments}/experiments")}\n')
experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
print ('csv data')
display (experiments_data)
md ('we plot the history')
from hpsearch.visualization.experiment_visualization import plot_multiple_histories
plot_multiple_histories ([0], run_number=0, op='max', backend='matplotlib', metrics='validation_accuracy')
md ('1.b. Now we run two experiments: ')
md ('We run the first experiment with 20 epochs:')
# a.- remove previous experiment
em.remove_previous_experiments()
# b.- create first experiment with epochs=20
em.create_experiment_and_run (parameters = {'epochs': 20});
print (f'experiments folders: {os.listdir(f"{path_experiments}/experiments")}\n')
experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
print ('csv data')
display(experiments_data)
print (f'weight: {em.model.weight}, accuracy: {em.model.accuracy}')
md ('We run the second experiment resumes from the previous one and increases the epochs to 30')
# 4.- create second experiment with epochs=10
em.create_experiment_and_run (parameters = {'epochs': 30},
other_parameters={'prev_epoch': True,
'name_epoch': 'epochs',
'previous_model_file_name': 'model_weights.pk'});
experiments_data = pd.read_pickle (f'{path_experiments}/experiments_data.pk')
print ('csv data')
display(experiments_data)
new_accuracy = em.model.accuracy
new_weight = em.model.weight
assert new_weight==reference_weight
assert new_accuracy==reference_accuracy
print (f'weight: {new_weight}, accuracy: {new_accuracy}')
md ('We plot the history')
plot_multiple_histories ([1], run_number=0, op='max', backend='matplotlib', metrics='validation_accuracy')
em.remove_previous_experiments()_____no_output_____tst.run (test_complex_dummy_experiment_manager, tag='dummy')running test_complex_dummy_experiment_manager
</code>
## Running experiments and removing experiments_____no_output_____
<code>
# export
def run_multiple_experiments (**kwargs):
dummy_em.run_multiple_experiments (EM=ComplexDummyExperimentManager, **kwargs)
def remove_previous_experiments ():
dummy_em.remove_previous_experiments (EM=ComplexDummyExperimentManager)_____no_output_____#export
def generate_data (name_folder):
em = ComplexDummyExperimentManager (path_experiments=f'test_{name_folder}', verbose=0)
em.remove_previous_experiments ()
run_multiple_experiments (em=em, nruns=5, noise=0.1, verbose=False)
return em_____no_output_____
</code>
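A rough usage sketch, assuming the defaults used above (the folder name `my_test` is only illustrative): `generate_data` builds a throw-away experiment manager and runs a few noisy experiments, and `remove_previous_experiments` cleans the folder up afterwards.

```Python
em = generate_data ('my_test')                                        # 5 runs under test_my_test/
run_multiple_experiments (em=em, nruns=3, noise=0.05, verbose=False)  # a few more runs
em.remove_previous_experiments ()                                     # clean up the test folder
```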
| {
"repository": "Jaume-JCI/hpsearch",
"path": "nbs/examples/complex_dummy_experiment_manager.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 155060,
"hexsha": "d060eba8647218bcd5795351a84e4d41e7367041",
"max_line_length": 65596,
"avg_line_length": 197.2773536896,
"alphanum_fraction": 0.8945891913
} |
# Notebook from ValRCS/RCS_Data_Analysis_Python_2019_July
Path: Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb
<h1><center>Introductory Data Analysis Workflow</center></h1>
_____no_output_____
https://xkcd.com/2054_____no_output_____# An example machine learning notebook
* Original Notebook by [Randal S. Olson](http://www.randalolson.com/)
* Supported by [Jason H. Moore](http://www.epistasis.org/)
* [University of Pennsylvania Institute for Bioinformatics](http://upibi.org/)
* Adapted for LU Py-Sem 2018 by [Valdis Saulespurens]([email protected])_____no_output_____**You can also [execute the code in this notebook on Binder](https://mybinder.org/v2/gh/ValRCS/RigaComm_DataAnalysis/master) - no local installation required.**_____no_output_____
<code>
# text 17.04.2019
import datetime
print(datetime.datetime.now())
print('hello')2019-06-13 16:12:23.662194
hello
</code>
## Table of contents
1. [Introduction](#Introduction)
2. [License](#License)
3. [Required libraries](#Required-libraries)
4. [The problem domain](#The-problem-domain)
5. [Step 1: Answering the question](#Step-1:-Answering-the-question)
6. [Step 2: Checking the data](#Step-2:-Checking-the-data)
7. [Step 3: Tidying the data](#Step-3:-Tidying-the-data)
- [Bonus: Testing our data](#Bonus:-Testing-our-data)
8. [Step 4: Exploratory analysis](#Step-4:-Exploratory-analysis)
9. [Step 5: Classification](#Step-5:-Classification)
- [Cross-validation](#Cross-validation)
- [Parameter tuning](#Parameter-tuning)
10. [Step 6: Reproducibility](#Step-6:-Reproducibility)
11. [Conclusions](#Conclusions)
12. [Further reading](#Further-reading)
13. [Acknowledgements](#Acknowledgements)_____no_output_____## Introduction
[[ go back to the top ]](#Table-of-contents)
In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.
In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade. Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.
In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.
In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.
I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.
**This notebook is intended to be a public resource. As such, if you see any glaring inaccuracies or if a critical topic is missing, please feel free to point it out or (preferably) submit a pull request to improve the notebook.**_____no_output_____## License
[[ go back to the top ]](#Table-of-contents)
Please see the [repository README file](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects#license) for the licenses and usage terms for the instructional material and code in this notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible._____no_output_____## Required libraries
[[ go back to the top ]](#Table-of-contents)
If you don't have Python on your computer, you can use the [Anaconda Python distribution](http://continuum.io/downloads) to install most of the Python packages you need. Anaconda provides a simple double-click installer for your convenience.
This notebook uses several Python packages that come standard with the Anaconda Python distribution. The primary libraries that we'll be using are:
* **NumPy**: Provides a fast numerical array structure and helper functions.
* **pandas**: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.
* **scikit-learn**: The essential Machine Learning package in Python.
* **matplotlib**: Basic plotting library in Python; most other Python plotting libraries are built on top of it.
* **Seaborn**: Advanced statistical plotting library.
* **watermark**: A Jupyter Notebook extension for printing timestamps, version numbers, and hardware information.
**Note:** I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution._____no_output_____## The problem domain
[[ go back to the top ]](#Table-of-contents)
For the purposes of this exercise, let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.
We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.
<img src="img/petal_sepal.jpg" />
We've been given a [data set](https://github.com/ValRCS/RCS_Data_Analysis_Python/blob/master/data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers:
### *Iris setosa*
<img src="img/iris_setosa.jpg" />
### *Iris versicolor*
<img src="img/iris_versicolor.jpg" />
### *Iris virginica*
<img src="img/iris_virginica.jpg" />
The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.
**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes._____no_output_____## Step 1: Answering the question
[[ go back to the top ]](#Table-of-contents)
The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.
>Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?
We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.
Petal and sepal are both leaf-like parts of the flower (Latvian: ziedlapiņa).

>Did you define the metric for success before beginning?
Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.
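As a quick sanity check of what that number means, accuracy is just the fraction of matching predictions; scikit-learn's `accuracy_score` computes it directly (the labels below are made up purely for illustration):

```Python
from sklearn.metrics import accuracy_score

true_species = ['Iris-setosa', 'Iris-virginica', 'Iris-versicolor', 'Iris-setosa']
predicted_species = ['Iris-setosa', 'Iris-virginica', 'Iris-setosa', 'Iris-setosa']
print(accuracy_score(true_species, predicted_species))  # 0.75, i.e. 3 of 4 correct
```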
>Did you understand the context for the question and the scientific or business application?
We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.
>Did you record the experimental design?
Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology. The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.
>Did you consider whether the question could be answered with the available data?
The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.
<hr />
Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.
**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it._____no_output_____## Step 2: Checking the data
[[ go back to the top ]](#Table-of-contents)
The next step is to look at the data we're working with. Even curated data sets from the government can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.
Generally, we're looking to answer the following questions:
* Is there anything wrong with the data?
* Are there any quirks with the data?
* Do I need to fix or remove any of the data?
Let's start by reading the data into a pandas DataFrame._____no_output_____
<code>
import pandas as pd_____no_output_____
iris_data = pd.read_csv('../data/iris-data.csv')
_____no_output_____# Resources for loading data from nonlocal sources
# Pandas Can generally handle most common formats
# https://pandas.pydata.org/pandas-docs/stable/io.html
# SQL https://stackoverflow.com/questions/39149243/how-do-i-connect-to-a-sql-server-database-with-python
# NoSQL MongoDB https://realpython.com/introduction-to-mongodb-and-python/
# Apache Hadoop: https://dzone.com/articles/how-to-get-hadoop-data-into-a-python-model
# Apache Spark: https://www.datacamp.com/community/tutorials/apache-spark-python
# Data Scraping / Crawling libraries : https://elitedatascience.com/python-web-scraping-libraries Big Topic in itself
# Most data resources have some form of Python API / Library _____no_output_____iris_data.head()_____no_output_____
</code>
We're in luck! The data seems to be in a usable format.
The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.
Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.
**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.
We can tell pandas to automatically identify missing values if it knows our missing value marker._____no_output_____
<code>
iris_data.shape_____no_output_____iris_data.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
sepal_length_cm 150 non-null float64
sepal_width_cm 150 non-null float64
petal_length_cm 150 non-null float64
petal_width_cm 145 non-null float64
class 150 non-null object
dtypes: float64(4), object(1)
memory usage: 5.9+ KB
iris_data.describe()_____no_output_____iris_data = pd.read_csv('../data/iris-data.csv', na_values=['NA', 'N/A'])_____no_output_____
</code>
Voilà! Now pandas knows to treat rows with 'NA' as missing values._____no_output_____Next, it's always a good idea to look at the distribution of our data — especially the outliers.
Let's start by printing out some summary statistics about the data set._____no_output_____
<code>
iris_data.describe()_____no_output_____
</code>
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.
If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.
Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it._____no_output_____
<code>
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb_____no_output_____
</code>
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot matrix for the combination of each variable. They make for an efficient tool to look for errors in our data.
We can even have the plotting package color each entry by its class to look for trends within the classes._____no_output_____
<code>
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:140: RuntimeWarning: Degrees of freedom <= 0 for slice
keepdims=keepdims)
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:132: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
</code>
From the scatterplot matrix, we can already see some issues with the data set:
1. There are five classes when there should only be three, meaning there were some coding errors.
2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
3. We had to drop those rows with missing values.
In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step..._____no_output_____## Step 3: Tidying the data
### GIGO principle
[[ go back to the top ]](#Table-of-contents)
Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.
Let's walk through the issues one-by-one.
>There are five classes when there should only be three, meaning there were some coding errors.
After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.
Let's use the DataFrame to fix these errors._____no_output_____
<code>
iris_data['class'].unique()_____no_output_____# Copy and Replace
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'].unique()
_____no_output_____# So we take a row where a specific column('class' here) matches our bad values
# and change them to good values
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()_____no_output_____iris_data.tail()_____no_output_____iris_data[98:103]_____no_output_____
</code>
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.
>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)
In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened._____no_output_____
<code>
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa')]
smallpetals_____no_output_____iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()_____no_output_____# This line drops any 'Iris-setosa' rows with a separal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
_____no_output_____
</code>
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.
The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows._____no_output_____
<code>
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]_____no_output_____
</code>
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.
After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them._____no_output_____
<code>
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()_____no_output_____iris_data['sepal_length_cm'].hist()_____no_output_____# Here we fix the wrong units
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0),
'sepal_length_cm'] *= 100.0
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
;_____no_output_____iris_data['sepal_length_cm'].hist()_____no_output_____
</code>
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.
>We had to drop those rows with missing values.
Let's take a look at the rows with missing values:_____no_output_____
<code>
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]_____no_output_____
</code>
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.
One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.
Let's see if we can do that here._____no_output_____
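For reference, the generic pandas idiom for mean imputation of a single column is shown below; in the next cells we will apply a class-specific variant of the same idea, since petal widths differ between species (this snippet is a sketch, not meant to replace that step):

```Python
# fill missing values in a column with that column's overall mean (generic pattern)
iris_data['petal_width_cm'] = iris_data['petal_width_cm'].fillna(
    iris_data['petal_width_cm'].mean())
```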
<code>
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
_____no_output_____
</code>
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width._____no_output_____
<code>
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()_____no_output_____average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)0.24999999999999997
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]_____no_output_____iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]_____no_output_____
</code>
Great! Now we've recovered those rows and no longer have missing data in our data set.
**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call:
iris_data.dropna(inplace=True)
After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on._____no_output_____
<code>
iris_data.to_json('../data/iris-clean.json')_____no_output_____iris_data.to_csv('../data/iris-data-clean.csv', index=False)
_____no_output_____cleanedframe = iris_data.dropna()_____no_output_____iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')_____no_output_____
</code>
Now, let's take a look at the scatterplot matrix now that we've tidied the data._____no_output_____
<code>
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')_____no_output_____import scipy.stats as stats_____no_output_____iris_data = pd.read_csv('../data/iris-data.csv')_____no_output_____iris_data.columns.unique()_____no_output_____stats.entropy(iris_data_clean['sepal_length_cm'])_____no_output_____iris_data.columns[:-1]_____no_output_____# we go through list of column names except last one and get entropy
# for data (without missing values) in each column
for col in iris_data.columns[:-1]:
print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))Entropy for: sepal_length_cm 4.96909746125432
Entropy for: sepal_width_cm 5.000701325982732
Entropy for: petal_length_cm 4.888113822938816
Entropy for: petal_width_cm 4.754264731532864
</code>
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.
The general takeaways here should be:
* Make sure your data is encoded properly
* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range
* Deal with missing data in one way or another: replace it if you can or drop it
* Never tidy your data manually because that is not easily reproducible
* Use code as a record of how you tidied your data
* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct_____no_output_____## Bonus: Testing our data
[[ go back to the top ]](#Table-of-contents)
At SciPy 2015, I was exposed to a great idea: We should test our data. Just how we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.
We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,
```Python
assert 1 == 2
```
will raise an `AssertionError` and stop execution of the notebook because the assertion failed.
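If you want the failure itself to explain what went wrong, `assert` also accepts a message. A small sketch (not one of the original checks below), assuming the `iris_data_clean` frame loaded earlier:

```python
# The optional second operand of assert is shown when the check fails.
n_classes = len(iris_data_clean['class'].unique())
assert n_classes == 3, 'Expected 3 Iris classes, found {}'.format(n_classes)
```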
Let's test a few things that we know about our data set now._____no_output_____
<code>
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3_____no_output_____# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5_____no_output_____# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0_____no_output_____# We know that our data set should have no missing measurements
assert len(iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]) == 0_____no_output_____
</code>
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage._____no_output_____### Data Cleanup & Wrangling > 80% time spent in Data Science_____no_output_____## Step 4: Exploratory analysis
[[ go back to the top ]](#Table-of-contents)
Now after spending entirely too much time tidying our data, we can start analyzing it!
Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:
* How is my data distributed?
* Are there any correlations in my data?
* Are there any confounding factors that explain these correlations?
This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.
Let's return to that scatterplot matrix that we used earlier._____no_output_____
<code>
sb.pairplot(iris_data_clean)
;_____no_output_____
</code>
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.
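If you want to probe that claim numerically rather than visually, a normality test per measurement column is one option. This is only a sketch (it is not part of the original notebook) and assumes the `iris_data_clean` frame from above:

```python
# D'Agostino-Pearson normality test per column; small p-values suggest departures from normality.
from scipy import stats

for col in ['sepal_length_cm', 'sepal_width_cm', 'petal_length_cm', 'petal_width_cm']:
    statistic, p_value = stats.normaltest(iris_data_clean[col])
    print('{}: p = {:.4f}'.format(col, p_value))
```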
There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color code the data by the class again to see if that clears things up._____no_output_____
<code>
sb.pairplot(iris_data_clean, hue='class')
;_____no_output_____
</code>
Sure enough, the strange distribution of the petal measurements exists because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.
Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.
There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.
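We can put numbers on those correlations directly from the data frame; a quick sketch (not in the original notebook):

```python
# Pairwise Pearson correlations between the four measurement columns.
iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
                 'petal_length_cm', 'petal_width_cm']].corr()
```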
We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scale the box according to the density of the data.
<code>
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)_____no_output_____
</code>
Enough flirting with the data. Let's get to modeling._____no_output_____## Step 5: Classification
[[ go back to the top ]](#Table-of-contents)
Wow, all this work and we *still* haven't modeled the data!
As tiresome as it can be, tidying and exploring our data are vital components of any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.
Remember: **Bad data leads to bad models.** Always check your data first.
<hr />
Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.
A **training set** is a random subset of the data that we use to train our models.
A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforeseen data.
Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.
Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.
Let's set up our data first._____no_output_____
<code>
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]_____no_output_____all_labels[:5]_____no_output_____type(all_inputs)_____no_output_____all_labels[:5]_____no_output_____type(all_labels)_____no_output_____
</code>
Now our data is ready to be split._____no_output_____
<code>
from sklearn.model_selection import train_test_split_____no_output_____all_inputs[:3]_____no_output_____iris_data_clean.head(3)_____no_output_____all_labels[:3]_____no_output_____# Here we split our data into training and testing data
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)_____no_output_____training_inputs[:5]_____no_output_____testing_inputs[:5]_____no_output_____testing_classes[:5]_____no_output_____training_classes[:5]_____no_output_____
</code>
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.
Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.
Here's an example decision tree classifier:
<img src="img/iris_dtc.png" />
Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.
The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.
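To see that scale-invariance in action, here is a minimal sketch (not part of the original notebook) that rescales one feature by a factor of 1,000 and checks that a decision tree's cross-validated accuracy stays essentially the same. It assumes the `all_inputs` and `all_labels` arrays defined above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

scaled_inputs = all_inputs.copy()
scaled_inputs[:, 0] *= 1000.0  # pretend sepal length were measured in micrometers

tree_clf = DecisionTreeClassifier(random_state=1)
print(cross_val_score(tree_clf, all_inputs, all_labels, cv=10).mean())
print(cross_val_score(tree_clf, scaled_inputs, all_labels, cv=10).mean())
```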
There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier._____no_output_____
<code>
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)_____no_output_____150*0.25_____no_output_____len(testing_inputs)_____no_output_____37/38_____no_output_____from sklearn import svm
svm_classifier = svm.SVC(gamma = 'scale')_____no_output_____svm_classifier.fit(training_inputs, training_classes)_____no_output_____svm_classifier.score(testing_inputs, testing_classes)_____no_output_____svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)_____no_output_____
</code>
Heck yeah! Our model achieves 97% classification accuracy without much effort.
However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:_____no_output_____
<code>
import matplotlib.pyplot as plt_____no_output_____# here we randomly split the data 1000 times into different training and test sets
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;_____no_output_____100/38_____no_output_____
</code>
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before.
### Cross-validation
[[ go back to the top ]](#Table-of-contents)
This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and use the rest as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.
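As a concrete check of that description, here is a small sketch (not part of the original notebook, and assuming the `all_inputs`/`all_labels` arrays defined earlier) verifying that every row lands in a testing fold exactly once:

```python
from collections import Counter
from sklearn.model_selection import StratifiedKFold

appearances = Counter()
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(all_inputs, all_labels):
    appearances.update(test_idx)

# Every row index appears in a testing fold exactly once across the 10 splits
print(len(appearances) == len(all_labels), set(appearances.values()) == {1})
```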
10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:
(each square is an entry in our data set)_____no_output_____
<code>
# new text_____no_output_____import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)_____no_output_____
</code>
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)
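To confirm the stratification claim, a quick sketch (again, not in the original notebook) that counts the classes inside each testing fold; the proportions should be roughly equal across folds:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

for fold, (train_idx, test_idx) in enumerate(StratifiedKFold(n_splits=10).split(all_inputs, all_labels)):
    classes, counts = np.unique(all_labels[test_idx], return_counts=True)
    print('Fold {}: {}'.format(fold, dict(zip(classes, counts))))
```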
We can perform 10-fold cross-validation on our model with the following code:_____no_output_____
<code>
from sklearn.model_selection import cross_val_score_____no_output_____from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;_____no_output_____len(all_inputs.T[1])_____no_output_____print("Entropy for: ", stats.entropy(all_inputs.T[1]))Entropy for: 4.994187360273029
# go through each column of the numpy feature array
# and print its entropy
def printEntropy(npdata):
for i, col in enumerate(npdata.T):
print("Entropy for column:", i, stats.entropy(col))_____no_output_____printEntropy(all_inputs)Entropy for column: 0 4.9947332367061925
Entropy for column: 1 4.994187360273029
Entropy for column: 2 4.88306851089088
Entropy for column: 3 4.76945055275522
</code>
Now we have a much more consistent rating of our classifier's general classification accuracy.
### Parameter tuning
[[ go back to the top ]](#Table-of-contents)
Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:_____no_output_____
<code>
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;_____no_output_____
</code>
the classification accuracy falls tremendously.
Therefore, we need to find a systematic method to discover the best parameters for our model and data set.
The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.
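Conceptually, Grid Search is just an exhaustive loop over parameter combinations with cross-validation inside. A minimal hand-rolled sketch of that idea (scikit-learn's `GridSearchCV`, used below, is the real implementation), assuming the `all_inputs`/`all_labels` arrays from above:

```python
from itertools import product

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

best_score, best_params = 0.0, None
for max_depth, max_features in product([1, 2, 3, 4, 5], [1, 2, 3, 4]):
    clf = DecisionTreeClassifier(max_depth=max_depth, max_features=max_features)
    score = np.mean(cross_val_score(clf, all_inputs, all_labels, cv=10))
    if score > best_score:
        best_score, best_params = score, {'max_depth': max_depth, 'max_features': max_features}

print('Best score: {}'.format(best_score))
print('Best parameters: {}'.format(best_params))
```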
Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want._____no_output_____
<code>
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))Best score: 0.9664429530201343
Best parameters: {'max_depth': 3, 'max_features': 2}
</code>
Now let's visualize the grid search to see how the parameters interact._____no_output_____
<code>
grid_search.cv_results_['mean_test_score']_____no_output_____grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Reds', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
;_____no_output_____
</code>
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a one-off decision.
`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)
Let's go ahead and use a broad grid search to find the best settings for a handful of parameters._____no_output_____
<code>
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_depth': 3, 'max_features': 3, 'splitter': 'best'}
</code>
Now we can take the best classifier from the Grid Search and use that:_____no_output_____
<code>
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier_____no_output_____
</code>
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:_____no_output_____
<code>
import sklearn.tree as tree
# Export the fitted tree in GraphViz .dot format; the file can then be rendered
# with the GraphViz command-line tool, e.g. `dot -Tpng iris_dtc.dot -o iris_dtc.png`
with open('iris_dtc.dot', 'w') as out_file:
    out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)_____no_output_____
</code>
<img src="img/iris_dtc.png" />_____no_output_____(This classifier may look familiar from earlier in the notebook.)
Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data._____no_output_____
<code>
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black')
;_____no_output_____
</code>
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?
We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.
**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.
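To make the bagging idea concrete, here is a minimal hand-rolled sketch (an illustration only, not how scikit-learn's `RandomForestClassifier` is implemented), assuming the `training_inputs`/`testing_inputs` split created earlier:

```python
from collections import Counter

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
trees = []
for _ in range(10):
    # Bootstrap sample: draw training rows with replacement
    idx = rng.choice(len(training_inputs), size=len(training_inputs), replace=True)
    trees.append(DecisionTreeClassifier(max_features=2).fit(training_inputs[idx], training_classes[idx]))

# Each tree votes; the majority label wins for every testing row
votes = np.array([t.predict(testing_inputs) for t in trees])
majority = np.array([Counter(column).most_common(1)[0][0] for column in votes.T])
print('Bagged accuracy: {}'.format((majority == testing_classes).mean()))
```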
Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**
Let's see if a Random Forest classifier works better here.
The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier._____no_output_____
<code>
from sklearn.ensemble import RandomForestClassifier_____no_output_____from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_features': 3, 'n_estimators': 25}
</code>
Now we can compare their performance:_____no_output_____
<code>
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
both_df = rf_df.append(dt_df)
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black')
;_____no_output_____
</code>
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there's hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set._____no_output_____## Step 6: Reproducibility
[[ go back to the top ]](#Table-of-contents)
Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.
Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.
Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.
[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:_____no_output_____
<code>
!pip install watermarkRequirement already satisfied: watermark in c:\programdata\anaconda3\lib\site-packages (1.8.1)
Requirement already satisfied: ipython in c:\programdata\anaconda3\lib\site-packages (from watermark) (7.4.0)
Requirement already satisfied: jedi>=0.10 in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (0.13.3)
Requirement already satisfied: backcall in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (0.1.0)
Requirement already satisfied: pickleshare in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (0.7.5)
Requirement already satisfied: setuptools>=18.5 in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (40.8.0)
Requirement already satisfied: colorama; sys_platform == "win32" in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (0.4.1)
Requirement already satisfied: decorator in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (4.4.0)
Requirement already satisfied: pygments in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (2.3.1)
Requirement already satisfied: prompt-toolkit<2.1.0,>=2.0.0 in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (2.0.9)
Requirement already satisfied: traitlets>=4.2 in c:\programdata\anaconda3\lib\site-packages (from ipython->watermark) (4.3.2)
Requirement already satisfied: parso>=0.3.0 in c:\programdata\anaconda3\lib\site-packages (from jedi>=0.10->ipython->watermark) (0.3.4)
Requirement already satisfied: six>=1.9.0 in c:\programdata\anaconda3\lib\site-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython->watermark) (1.12.0)
Requirement already satisfied: wcwidth in c:\programdata\anaconda3\lib\site-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython->watermark) (0.1.7)
Requirement already satisfied: ipython-genutils in c:\programdata\anaconda3\lib\site-packages (from traitlets>=4.2->ipython->watermark) (0.2.0)
%load_ext watermarkThe watermark extension is already loaded. To reload it, use:
%reload_ext watermark
pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.24.2
pytest: 4.3.1
pip: 19.0.3
setuptools: 40.8.0
Cython: 0.29.6
numpy: 1.16.2
scipy: 1.2.1
pyarrow: None
xarray: None
IPython: 7.4.0
sphinx: 1.8.5
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: 1.2.1
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.0.3
openpyxl: 2.6.1
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.5
lxml.etree: 4.3.2
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.3.1
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
%watermark -a 'RCS_April_2019' -nmv --packages numpy,pandas,sklearn,matplotlib,seabornRCS_April_2019 Wed Apr 17 2019
CPython 3.7.3
IPython 7.4.0
numpy 1.16.2
pandas 0.24.2
sklearn 0.20.3
matplotlib 3.0.3
seaborn 0.9.0
compiler : MSC v.1915 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
CPU cores : 12
interpreter: 64bit
</code>
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline._____no_output_____
<code>
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))[4.6 3.4 1.4 0.3] --> Iris-setosa (Actual: Iris-setosa)
[5.9 3. 4.2 1.5] --> Iris-versicolor (Actual: Iris-versicolor)
[7.2 3. 5.8 1.6] --> Iris-virginica (Actual: Iris-virginica)
[6.7 2.5 5.8 1.8] --> Iris-virginica (Actual: Iris-virginica)
[6.7 3.3 5.7 2.5] --> Iris-virginica (Actual: Iris-virginica)
[4.9 3.1 1.5 0.25] --> Iris-setosa (Actual: Iris-setosa)
[6.3 3.4 5.6 2.4] --> Iris-virginica (Actual: Iris-virginica)
[5.1 3.3 1.7 0.5] --> Iris-setosa (Actual: Iris-setosa)
[4.9 2.4 3.3 1. ] --> Iris-versicolor (Actual: Iris-versicolor)
[6.3 3.3 4.7 1.6] --> Iris-versicolor (Actual: Iris-versicolor)
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
def processData(filename):
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv(filename)
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
return rf_classifier_scores_____no_output_____myscores = processData('../data/iris-data-clean.csv')[5.1 3.7 1.5 0.4] --> Iris-setosa (Actual: Iris-setosa)
[5.8 2.7 4.1 1. ] --> Iris-versicolor (Actual: Iris-versicolor)
[5.7 3. 1.1 0.1] --> Iris-setosa (Actual: Iris-setosa)
[5.9 3. 5.1 1.8] --> Iris-virginica (Actual: Iris-virginica)
[5.4 3.4 1.7 0.2] --> Iris-setosa (Actual: Iris-setosa)
[4.7 3.2 1.6 0.2] --> Iris-setosa (Actual: Iris-setosa)
[5.4 3. 4.5 1.5] --> Iris-versicolor (Actual: Iris-versicolor)
[5.7 4.4 1.5 0.4] --> Iris-setosa (Actual: Iris-setosa)
[5. 3.2 1.2 0.2] --> Iris-setosa (Actual: Iris-setosa)
[7.2 3. 5.8 1.6] --> Iris-virginica (Actual: Iris-virginica)
myscores_____no_output_____
</code>
There we have it: We have a complete and reproducible Machine Learning pipeline to demo to our company's Head of Data. We've met the success criteria that we set from the beginning (>90% accuracy), and our pipeline is flexible enough to handle new inputs or flowers when that data set is ready. Not bad for our first week on the job!_____no_output_____## Conclusions
[[ go back to the top ]](#Table-of-contents)
I hope you found this example notebook useful for your own work and learned at least one new trick by reading through it. If you spot an error or would like to suggest an improvement, you can:
* [Submit an issue](https://github.com/ValRCS/LU-pysem/issues) on GitHub
* Fork the [notebook repository](https://github.com/ValRCS/LU-pysem), make the fix/addition yourself, then send over a pull request_____no_output_____## Further reading
[[ go back to the top ]](#Table-of-contents)
This notebook covers a broad variety of topics but skips over many of the specifics. If you're looking to dive deeper into a particular topic, here's some recommended reading.
**Data Science**: William Chen compiled a [list of free books](http://www.wzchen.com/data-science-books/) for newcomers to Data Science, ranging from the basics of R & Python to Machine Learning to interviews and advice from prominent data scientists.
**Machine Learning**: /r/MachineLearning has a useful [Wiki page](https://www.reddit.com/r/MachineLearning/wiki/index) containing links to online courses, books, data sets, etc. for Machine Learning. There's also a [curated list](https://github.com/josephmisiti/awesome-machine-learning) of Machine Learning frameworks, libraries, and software sorted by language.
**Unit testing**: Dive Into Python 3 has a [great walkthrough](http://www.diveintopython3.net/unit-testing.html) of unit testing in Python, how it works, and how it should be used.
**pandas** has [several tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) covering its myriad features.
**scikit-learn** has a [bunch of tutorials](http://scikit-learn.org/stable/tutorial/index.html) for those looking to learn Machine Learning in Python. Andreas Mueller's [scikit-learn workshop materials](https://github.com/amueller/scipy_2015_sklearn_tutorial) are top-notch and freely available.
**matplotlib** has many [books, videos, and tutorials](http://matplotlib.org/resources/index.html) to teach plotting in Python.
**Seaborn** has a [basic tutorial](http://stanford.edu/~mwaskom/software/seaborn/tutorial.html) covering most of the statistical plotting features._____no_output_____## Acknowledgements
[[ go back to the top ]](#Table-of-contents)
Many thanks to [Andreas Mueller](http://amueller.github.io/) for some of his [examples](https://github.com/amueller/scipy_2015_sklearn_tutorial) in the Machine Learning section. I drew inspiration from several of his excellent examples.
The photo of a flower with annotations of the petal and sepal was taken by [Eric Guinther](https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg).
The photos of the various *Iris* flower types were taken by [Ken Walker](http://www.signa.org/index.pl?Display+Iris-setosa+2) and [Barry Glick](http://www.signa.org/index.pl?Display+Iris-virginica+3)._____no_output_____## Further questions?
Feel free to contact [Valdis Saulespurens] (email: [email protected])_____no_output_____
| {
"repository": "ValRCS/RCS_Data_Analysis_Python_2019_July",
"path": "Irises_ML_Intro/Irises Data Analysis Workflow_06_2019.ipynb",
"matched_keywords": [
"bioinformatics"
],
"stars": 1,
"size": 823130,
"hexsha": "d06137aa460ed001913396642f28d2945f230d06",
"max_line_length": 142296,
"avg_line_length": 199.7888349515,
"alphanum_fraction": 0.8948756576
} |
# Notebook from siddsrivastava/Image-captionin
Path: 2_Training.ipynb
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will train your CNN-RNN model.
You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.
This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:
- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
- the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Training Setup
- [Step 2](#step2): Train your Model
- [Step 3](#step3): (Optional) Validate your Model_____no_output_____<a id='step1'></a>
## Step 1: Training Setup
In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.
You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**.
### Task #1
Begin by setting the following variables:
- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
- `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
- `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file.
- `embed_size` - the dimensionality of the image and word embeddings.
- `hidden_size` - the number of features in the hidden state of the RNN decoder.
- `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)
- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.
- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
### Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** I used a pretrained Resnet152 network as the encoder to extract image features (a deep CNN). Other architectures such as VGG16 are also used in the literature, but Resnet152 is claimed to mitigate the vanishing-gradient problem. I'm currently using 2 LSTM layers in the decoder (training already takes a lot of time); in the future I will experiment with more layers.
`vocab_threshold` is 6; I tried 9 (which gives a smaller vocabulary), but training seemed to converge faster with 6. Many papers suggest a `batch_size` of 64 or 128, so I went with 64. `embed_size` and `hidden_size` are both 512. I consulted several blogs and well-known papers such as "Show, Attend and Tell" (Xu et al.), although I am not currently using attention.
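For reference, here is a minimal, hypothetical sketch of what such an encoder-decoder pair might look like. The project's actual `model.py` is not included in this notebook, so the code below is only an assumption consistent with the choices described above (a frozen, pretrained ResNet-152 encoder with a trainable embedding layer, and a multi-layer LSTM decoder):

```python
import torch
import torch.nn as nn
import torchvision.models as models


class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        super(EncoderCNN, self).__init__()
        resnet = models.resnet152(pretrained=True)
        for param in resnet.parameters():
            param.requires_grad_(False)         # keep the pretrained backbone frozen
        modules = list(resnet.children())[:-1]  # drop the final classification layer
        self.resnet = nn.Sequential(*modules)
        self.embed = nn.Linear(resnet.fc.in_features, embed_size)  # trainable embedding layer

    def forward(self, images):
        features = self.resnet(images)
        features = features.view(features.size(0), -1)
        return self.embed(features)


class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=2):
        super(DecoderRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Teacher forcing: prepend the image feature to the embedded caption tokens
        embeddings = self.embed(captions[:, :-1])
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.linear(hiddens)
```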
### (Optional) Task #2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
### Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** I left the transform at its provided value. Empirically, these parameter values have worked well in my past projects.
### Task #3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```
params = list(decoder.parameters()) + list(encoder.embed.parameters())
```
### Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** Since the ResNet encoder is pretrained, I trained only the embedding layer of the encoder and all layers of the decoder. The pretrained ResNet is already well suited for feature extraction, so only the remaining parts of the architecture need to be trained.
### Task #4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
### Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I used the Adam optimizer, since in similar past projects it gave me better performance than SGD. I have found Adam to outperform vanilla SGD in almost all cases, which aligns with intuition.
<code>
import nltk
nltk.download('punkt')[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 64 # batch size
vocab_threshold = 6 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 3 # number of training epochs
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9,0.999), eps=1e-8)
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=1.07s)
creating index...
</code>
<a id='step2'></a>
## Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue.
It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
### A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.
However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset._____no_output_____
<code>
import torch.utils.data as data
import numpy as np
import os
import requests
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()Epoch [1/3], Step [100/6471], Loss: 4.2137, Perplexity: 67.6088
Epoch [1/3], Step [200/6471], Loss: 3.9313, Perplexity: 50.97528
Epoch [1/3], Step [300/6471], Loss: 3.5978, Perplexity: 36.5175
Epoch [1/3], Step [400/6471], Loss: 3.6794, Perplexity: 39.6219
Epoch [1/3], Step [500/6471], Loss: 3.0714, Perplexity: 21.5712
Epoch [1/3], Step [600/6471], Loss: 3.2012, Perplexity: 24.5617
Epoch [1/3], Step [700/6471], Loss: 3.2718, Perplexity: 26.35966
Epoch [1/3], Step [800/6471], Loss: 3.3748, Perplexity: 29.2185
Epoch [1/3], Step [900/6471], Loss: 3.1745, Perplexity: 23.9146
Epoch [1/3], Step [1000/6471], Loss: 3.2627, Perplexity: 26.1206
Epoch [1/3], Step [1100/6471], Loss: 2.8865, Perplexity: 17.9312
Epoch [1/3], Step [1200/6471], Loss: 2.9421, Perplexity: 18.9562
Epoch [1/3], Step [1300/6471], Loss: 2.7139, Perplexity: 15.0875
Epoch [1/3], Step [1400/6471], Loss: 2.6474, Perplexity: 14.1176
Epoch [1/3], Step [1500/6471], Loss: 2.6901, Perplexity: 14.7331
Epoch [1/3], Step [1600/6471], Loss: 2.6551, Perplexity: 14.2267
Epoch [1/3], Step [1700/6471], Loss: 2.9028, Perplexity: 18.2242
Epoch [1/3], Step [1800/6471], Loss: 2.5633, Perplexity: 12.9791
Epoch [1/3], Step [1900/6471], Loss: 2.7250, Perplexity: 15.2564
Epoch [1/3], Step [2000/6471], Loss: 2.5907, Perplexity: 13.3396
Epoch [1/3], Step [2100/6471], Loss: 2.7079, Perplexity: 14.9985
Epoch [1/3], Step [2200/6471], Loss: 2.5242, Perplexity: 12.4809
Epoch [1/3], Step [2300/6471], Loss: 2.5016, Perplexity: 12.2019
Epoch [1/3], Step [2400/6471], Loss: 2.6168, Perplexity: 13.6915
Epoch [1/3], Step [2500/6471], Loss: 2.6548, Perplexity: 14.2225
Epoch [1/3], Step [2600/6471], Loss: 2.4738, Perplexity: 11.8673
Epoch [1/3], Step [2700/6471], Loss: 2.4797, Perplexity: 11.9380
Epoch [1/3], Step [2800/6471], Loss: 2.6574, Perplexity: 14.2598
Epoch [1/3], Step [2900/6471], Loss: 2.3054, Perplexity: 10.0281
Epoch [1/3], Step [3000/6471], Loss: 2.5392, Perplexity: 12.6694
Epoch [1/3], Step [3100/6471], Loss: 2.6166, Perplexity: 13.6890
Epoch [1/3], Step [3200/6471], Loss: 2.2275, Perplexity: 9.27642
Epoch [1/3], Step [3300/6471], Loss: 2.5271, Perplexity: 12.5177
Epoch [1/3], Step [3400/6471], Loss: 2.3050, Perplexity: 10.0246
Epoch [1/3], Step [3500/6471], Loss: 2.0236, Perplexity: 7.56542
Epoch [1/3], Step [3600/6471], Loss: 2.1614, Perplexity: 8.68294
Epoch [1/3], Step [3700/6471], Loss: 2.3635, Perplexity: 10.6284
Epoch [1/3], Step [3800/6471], Loss: 2.3958, Perplexity: 10.9773
Epoch [1/3], Step [3900/6471], Loss: 2.1591, Perplexity: 8.66344
Epoch [1/3], Step [4000/6471], Loss: 2.3267, Perplexity: 10.2446
Epoch [1/3], Step [4100/6471], Loss: 3.1127, Perplexity: 22.4825
Epoch [1/3], Step [4200/6471], Loss: 2.3359, Perplexity: 10.3392
Epoch [1/3], Step [4300/6471], Loss: 2.3215, Perplexity: 10.1912
Epoch [1/3], Step [4400/6471], Loss: 2.2369, Perplexity: 9.36462
Epoch [1/3], Step [4500/6471], Loss: 2.2770, Perplexity: 9.74746
Epoch [1/3], Step [4600/6471], Loss: 2.2351, Perplexity: 9.34757
Epoch [1/3], Step [4700/6471], Loss: 2.2890, Perplexity: 9.86499
Epoch [1/3], Step [4800/6471], Loss: 2.2736, Perplexity: 9.713991
Epoch [1/3], Step [4900/6471], Loss: 2.5273, Perplexity: 12.5202
Epoch [1/3], Step [5000/6471], Loss: 2.1436, Perplexity: 8.52971
Epoch [1/3], Step [5100/6471], Loss: 2.2414, Perplexity: 9.40672
Epoch [1/3], Step [5200/6471], Loss: 2.3917, Perplexity: 10.9318
Epoch [1/3], Step [5300/6471], Loss: 2.2926, Perplexity: 9.90097
Epoch [1/3], Step [5400/6471], Loss: 2.0861, Perplexity: 8.05366
Epoch [1/3], Step [5500/6471], Loss: 2.0797, Perplexity: 8.00241
Epoch [1/3], Step [5600/6471], Loss: 2.5135, Perplexity: 12.3480
Epoch [1/3], Step [5700/6471], Loss: 2.0843, Perplexity: 8.03936
Epoch [1/3], Step [5800/6471], Loss: 2.4332, Perplexity: 11.3950
Epoch [1/3], Step [5900/6471], Loss: 2.0920, Perplexity: 8.10140
Epoch [1/3], Step [6000/6471], Loss: 2.3367, Perplexity: 10.3468
Epoch [1/3], Step [6100/6471], Loss: 2.9598, Perplexity: 19.2937
Epoch [1/3], Step [6200/6471], Loss: 2.0285, Perplexity: 7.60297
Epoch [1/3], Step [6300/6471], Loss: 2.6213, Perplexity: 13.7538
Epoch [1/3], Step [6400/6471], Loss: 2.0924, Perplexity: 8.10440
Epoch [2/3], Step [100/6471], Loss: 2.1729, Perplexity: 8.783715
Epoch [2/3], Step [200/6471], Loss: 2.1168, Perplexity: 8.30481
Epoch [2/3], Step [300/6471], Loss: 2.2427, Perplexity: 9.41848
Epoch [2/3], Step [400/6471], Loss: 2.5073, Perplexity: 12.2721
Epoch [2/3], Step [500/6471], Loss: 2.1942, Perplexity: 8.97323
Epoch [2/3], Step [600/6471], Loss: 2.2852, Perplexity: 9.82738
Epoch [2/3], Step [700/6471], Loss: 2.0216, Perplexity: 7.55076
Epoch [2/3], Step [800/6471], Loss: 2.0080, Perplexity: 7.44841
Epoch [2/3], Step [900/6471], Loss: 2.6213, Perplexity: 13.7540
Epoch [2/3], Step [1000/6471], Loss: 2.2098, Perplexity: 9.1141
Epoch [2/3], Step [1100/6471], Loss: 2.3376, Perplexity: 10.3568
Epoch [2/3], Step [1200/6471], Loss: 2.1687, Perplexity: 8.74662
Epoch [2/3], Step [1300/6471], Loss: 2.4215, Perplexity: 11.2623
Epoch [2/3], Step [1400/6471], Loss: 2.2622, Perplexity: 9.60387
Epoch [2/3], Step [1500/6471], Loss: 2.0793, Perplexity: 7.99915
Epoch [2/3], Step [1600/6471], Loss: 3.0006, Perplexity: 20.0976
Epoch [2/3], Step [1700/6471], Loss: 2.1184, Perplexity: 8.31816
Epoch [2/3], Step [1800/6471], Loss: 2.0555, Perplexity: 7.81114
Epoch [2/3], Step [1900/6471], Loss: 2.4132, Perplexity: 11.1696
Epoch [2/3], Step [2000/6471], Loss: 2.4320, Perplexity: 11.3817
Epoch [2/3], Step [2100/6471], Loss: 2.6297, Perplexity: 13.8692
Epoch [2/3], Step [2200/6471], Loss: 2.2170, Perplexity: 9.18001
Epoch [2/3], Step [2300/6471], Loss: 2.1038, Perplexity: 8.19712
Epoch [2/3], Step [2400/6471], Loss: 2.0491, Perplexity: 7.76052
Epoch [2/3], Step [2500/6471], Loss: 1.9645, Perplexity: 7.13170
Epoch [2/3], Step [2600/6471], Loss: 2.3801, Perplexity: 10.8063
Epoch [2/3], Step [2700/6471], Loss: 2.3220, Perplexity: 10.1963
Epoch [2/3], Step [2800/6471], Loss: 2.0542, Perplexity: 7.80050
Epoch [2/3], Step [2900/6471], Loss: 1.9378, Perplexity: 6.94348
Epoch [2/3], Step [3000/6471], Loss: 1.9138, Perplexity: 6.77860
Epoch [2/3], Step [3100/6471], Loss: 2.2314, Perplexity: 9.31325
Epoch [2/3], Step [3200/6471], Loss: 2.1790, Perplexity: 8.83758
Epoch [2/3], Step [3300/6471], Loss: 2.7974, Perplexity: 16.4013
Epoch [2/3], Step [3400/6471], Loss: 2.2902, Perplexity: 9.87657
Epoch [2/3], Step [3500/6471], Loss: 2.0739, Perplexity: 7.95541
Epoch [2/3], Step [3600/6471], Loss: 2.4700, Perplexity: 11.8226
Epoch [2/3], Step [3700/6471], Loss: 2.0761, Perplexity: 7.97370
Epoch [2/3], Step [3800/6471], Loss: 2.0085, Perplexity: 7.45224
Epoch [2/3], Step [3900/6471], Loss: 2.0280, Perplexity: 7.59929
Epoch [2/3], Step [4000/6471], Loss: 2.0487, Perplexity: 7.75750
Epoch [2/3], Step [4100/6471], Loss: 2.0105, Perplexity: 7.46732
Epoch [2/3], Step [4200/6471], Loss: 2.3099, Perplexity: 10.0733
Epoch [2/3], Step [4300/6471], Loss: 1.8471, Perplexity: 6.34158
Epoch [2/3], Step [4400/6471], Loss: 1.9144, Perplexity: 6.78305
Epoch [2/3], Step [4500/6471], Loss: 2.3026, Perplexity: 10.0001
Epoch [2/3], Step [4600/6471], Loss: 2.0366, Perplexity: 7.66411
Epoch [2/3], Step [4700/6471], Loss: 2.4918, Perplexity: 12.0830
Epoch [2/3], Step [4800/6471], Loss: 2.0035, Perplexity: 7.41520
Epoch [2/3], Step [4900/6471], Loss: 2.0007, Perplexity: 7.39395
Epoch [2/3], Step [5000/6471], Loss: 2.0057, Perplexity: 7.43157
Epoch [2/3], Step [5100/6471], Loss: 2.0654, Perplexity: 7.88811
Epoch [2/3], Step [5200/6471], Loss: 1.8834, Perplexity: 6.57597
Epoch [2/3], Step [5300/6471], Loss: 1.9578, Perplexity: 7.08400
Epoch [2/3], Step [5400/6471], Loss: 2.1135, Perplexity: 8.27759
Epoch [2/3], Step [5500/6471], Loss: 1.9813, Perplexity: 7.25206
Epoch [2/3], Step [5600/6471], Loss: 2.1926, Perplexity: 8.95865
Epoch [2/3], Step [5700/6471], Loss: 2.2927, Perplexity: 9.90207
Epoch [2/3], Step [5800/6471], Loss: 2.3188, Perplexity: 10.1636
Epoch [2/3], Step [5900/6471], Loss: 1.9937, Perplexity: 7.34238
Epoch [2/3], Step [6000/6471], Loss: 1.8804, Perplexity: 6.55632
Epoch [2/3], Step [6100/6471], Loss: 1.8708, Perplexity: 6.49346
Epoch [2/3], Step [6200/6471], Loss: 1.9785, Perplexity: 7.23204
Epoch [2/3], Step [6300/6471], Loss: 2.1267, Perplexity: 8.38739
Epoch [2/3], Step [6400/6471], Loss: 1.8215, Perplexity: 6.18116
Epoch [3/3], Step [100/6471], Loss: 1.9881, Perplexity: 7.301406
Epoch [3/3], Step [200/6471], Loss: 2.2102, Perplexity: 9.11727
Epoch [3/3], Step [300/6471], Loss: 1.9104, Perplexity: 6.75575
Epoch [3/3], Step [400/6471], Loss: 1.8180, Perplexity: 6.15938
Epoch [3/3], Step [500/6471], Loss: 2.5038, Perplexity: 12.2288
Epoch [3/3], Step [600/6471], Loss: 2.0724, Perplexity: 7.94375
Epoch [3/3], Step [700/6471], Loss: 2.0264, Perplexity: 7.58681
Epoch [3/3], Step [800/6471], Loss: 1.9343, Perplexity: 6.91936
Epoch [3/3], Step [900/6471], Loss: 1.9347, Perplexity: 6.92228
Epoch [3/3], Step [1000/6471], Loss: 2.6768, Perplexity: 14.5382
Epoch [3/3], Step [1100/6471], Loss: 2.1302, Perplexity: 8.41696
Epoch [3/3], Step [1200/6471], Loss: 1.9754, Perplexity: 7.20958
Epoch [3/3], Step [1300/6471], Loss: 2.0288, Perplexity: 7.60478
Epoch [3/3], Step [1400/6471], Loss: 2.1273, Perplexity: 8.39242
Epoch [3/3], Step [1500/6471], Loss: 2.6294, Perplexity: 13.8661
Epoch [3/3], Step [1600/6471], Loss: 2.6716, Perplexity: 14.4634
Epoch [3/3], Step [1700/6471], Loss: 1.8720, Perplexity: 6.50130
Epoch [3/3], Step [1800/6471], Loss: 2.3521, Perplexity: 10.5080
Epoch [3/3], Step [1900/6471], Loss: 2.0034, Perplexity: 7.41405
Epoch [3/3], Step [2000/6471], Loss: 2.0006, Perplexity: 7.39337
Epoch [3/3], Step [2100/6471], Loss: 2.0902, Perplexity: 8.08620
Epoch [3/3], Step [2200/6471], Loss: 3.3483, Perplexity: 28.4533
Epoch [3/3], Step [2300/6471], Loss: 2.0799, Perplexity: 8.00390
Epoch [3/3], Step [2400/6471], Loss: 2.1215, Perplexity: 8.34411
Epoch [3/3], Step [2500/6471], Loss: 1.9870, Perplexity: 7.29389
Epoch [3/3], Step [2600/6471], Loss: 2.1111, Perplexity: 8.25726
Epoch [3/3], Step [2700/6471], Loss: 1.8926, Perplexity: 6.63631
Epoch [3/3], Step [2800/6471], Loss: 2.0022, Perplexity: 7.40557
Epoch [3/3], Step [2900/6471], Loss: 1.9249, Perplexity: 6.85467
Epoch [3/3], Step [3000/6471], Loss: 1.8835, Perplexity: 6.57626
Epoch [3/3], Step [3100/6471], Loss: 2.0569, Perplexity: 7.82189
Epoch [3/3], Step [3200/6471], Loss: 1.8780, Perplexity: 6.54040
Epoch [3/3], Step [3300/6471], Loss: 2.3703, Perplexity: 10.7010
Epoch [3/3], Step [3400/6471], Loss: 1.9703, Perplexity: 7.17267
Epoch [3/3], Step [3500/6471], Loss: 1.9115, Perplexity: 6.76300
Epoch [3/3], Step [3600/6471], Loss: 2.2174, Perplexity: 9.18364
Epoch [3/3], Step [3700/6471], Loss: 2.4291, Perplexity: 11.3490
Epoch [3/3], Step [3800/6471], Loss: 2.3135, Perplexity: 10.1093
Epoch [3/3], Step [3900/6471], Loss: 1.9082, Perplexity: 6.74124
Epoch [3/3], Step [4000/6471], Loss: 1.9494, Perplexity: 7.02424
Epoch [3/3], Step [4100/6471], Loss: 1.8795, Perplexity: 6.55057
Epoch [3/3], Step [4200/6471], Loss: 2.0943, Perplexity: 8.12024
Epoch [3/3], Step [4300/6471], Loss: 1.9174, Perplexity: 6.80361
Epoch [3/3], Step [4400/6471], Loss: 1.8159, Perplexity: 6.14634
Epoch [3/3], Step [4500/6471], Loss: 2.1579, Perplexity: 8.65335
Epoch [3/3], Step [4600/6471], Loss: 2.0022, Perplexity: 7.40562
Epoch [3/3], Step [4700/6471], Loss: 2.0300, Perplexity: 7.61381
Epoch [3/3], Step [4800/6471], Loss: 1.9009, Perplexity: 6.69223
Epoch [3/3], Step [4900/6471], Loss: 2.4837, Perplexity: 11.9857
Epoch [3/3], Step [5000/6471], Loss: 2.0528, Perplexity: 7.79005
Epoch [3/3], Step [5100/6471], Loss: 1.9514, Perplexity: 7.03869
Epoch [3/3], Step [5200/6471], Loss: 1.8162, Perplexity: 6.14836
Epoch [3/3], Step [5300/6471], Loss: 2.0564, Perplexity: 7.81761
Epoch [3/3], Step [5400/6471], Loss: 1.8345, Perplexity: 6.26224
Epoch [3/3], Step [5500/6471], Loss: 2.2075, Perplexity: 9.09278
Epoch [3/3], Step [5600/6471], Loss: 1.8813, Perplexity: 6.56204
Epoch [3/3], Step [5700/6471], Loss: 1.8286, Perplexity: 6.22503
Epoch [3/3], Step [5800/6471], Loss: 1.8301, Perplexity: 6.23444
Epoch [3/3], Step [5900/6471], Loss: 1.9318, Perplexity: 6.90176
Epoch [3/3], Step [6000/6471], Loss: 1.9549, Perplexity: 7.06348
Epoch [3/3], Step [6100/6471], Loss: 1.9326, Perplexity: 6.90775
Epoch [3/3], Step [6200/6471], Loss: 2.0268, Perplexity: 7.58943
Epoch [3/3], Step [6300/6471], Loss: 1.8465, Perplexity: 6.33754
Epoch [3/3], Step [6400/6471], Loss: 1.9052, Perplexity: 6.72096
Epoch [3/3], Step [6471/6471], Loss: 2.0248, Perplexity: 7.57506
</code>
<a id='step3'></a>
## Step 3: (Optional) Validate your Model
To assess potential overfitting, one approach is to measure performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.
If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:
- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and
- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.
The suggested approach to validating your model involves creating a JSON file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/#download) for the COCO dataset._____no_output_____
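As a rough sketch of the scoring step: assuming you have already built a `predictions` dict mapping validation image ids to generated caption strings with your `sample` method (that variable is a placeholder, not part of the starter code), the captions can be compared against the COCO reference captions with a corpus BLEU score, for example:

```
from pycocotools.coco import COCO
from nltk.translate.bleu_score import corpus_bleu

# predictions = {image_id: 'a generated caption', ...}  # hypothetical output of DecoderRNN.sample()
coco = COCO('/opt/cocoapi/annotations/captions_val2014.json')

references, hypotheses = [], []
for img_id, caption in predictions.items():
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    references.append([ann['caption'].lower().split() for ann in anns])  # several human references per image
    hypotheses.append(caption.lower().split())

print('corpus BLEU-4: %.3f' % corpus_bleu(references, hypotheses))
```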
<code>
# (Optional) TODO: Validate your model._____no_output_____
</code>
| {
"repository": "siddsrivastava/Image-captionin",
"path": "2_Training.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 10,
"size": 36626,
"hexsha": "d061eb51c53566b0ba0e39649cbe53175a51de39",
"max_line_length": 734,
"avg_line_length": 61.2474916388,
"alphanum_fraction": 0.6164746355
} |
# Notebook from pyladiesams/graphdatabases-gqlalchemy-beginner-mar2022
Path: solutions/gqlalchemy-solutions.ipynb
# 💡 Solutions
Before trying out these solutions, please run the [gqlalchemy-workshop notebook](../workshop/gqlalchemy-workshop.ipynb) to import all the data. This solutions manual is here to help you out, but it is recommended that you first try solving the exercises yourself.
## Exercise 1
**Find out how many genres there are in the database.**
The correct Cypher query is:
```
MATCH (g:Genre)
RETURN count(g) AS num_of_genres;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:_____no_output_____
<code>
from gqlalchemy import match
total_genres = (
match()
.node(labels="Genre", variable="g")
.return_({"count(g)": "num_of_genres"})
.execute()
)
results = list(total_genres)
for result in results:
print(result["num_of_genres"])22084
</code>
## Exercise 2
**Find out to how many genres movie 'Matrix, The (1999)' belongs to.**
The correct Cypher query is:
```
MATCH (:Movie {title: 'Matrix, The (1999)'})-[:OF_GENRE]->(g:Genre)
RETURN count(g) AS num_of_genres;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:
_____no_output_____
<code>
matrix = (
match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g")
.where("m.title", "=", "Matrix, The (1999)")
.return_({"count(g)": "num_of_genres"})
.execute()
)
results = list(matrix)
for result in results:
print(result["num_of_genres"])3
</code>
## Exercise 3
**Find out the title of the movies that the user with `id` 1 rated.**
The correct Cypher query is:
```
MATCH (:User {id: 1})-[:RATED]->(m:Movie)
RETURN m.title;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:_____no_output_____
<code>
movies = (
match()
.node(labels="User", variable="u")
.to("RATED")
.node(labels="Movie", variable="m")
.where("u.id", "=", 1)
.return_({"m.title": "movie"})
.execute()
)
results = list(movies)
for result in results:
print(result["movie"])Toy Story (1995)
Grumpier Old Men (1995)
Heat (1995)
Seven (a.k.a. Se7en) (1995)
Usual Suspects, The (1995)
From Dusk Till Dawn (1996)
Bottle Rocket (1996)
Braveheart (1995)
Rob Roy (1995)
Canadian Bacon (1995)
Desperado (1995)
Billy Madison (1995)
Clerks (1994)
Dumb & Dumber (Dumb and Dumber) (1994)
Ed Wood (1994)
Star Wars: Episode IV - A New Hope (1977)
Pulp Fiction (1994)
Stargate (1994)
Tommy Boy (1995)
Clear and Present Danger (1994)
Forrest Gump (1994)
Jungle Book, The (1994)
Mask, The (1994)
Blown Away (1994)
Dazed and Confused (1993)
Fugitive, The (1993)
Jurassic Park (1993)
Mrs. Doubtfire (1993)
Schindler's List (1993)
So I Married an Axe Murderer (1993)
Three Musketeers, The (1993)
Tombstone (1993)
Dances with Wolves (1990)
Batman (1989)
Silence of the Lambs, The (1991)
Pinocchio (1940)
Fargo (1996)
Mission: Impossible (1996)
James and the Giant Peach (1996)
Space Jam (1996)
Rock, The (1996)
Twister (1996)
Independence Day (a.k.a. ID4) (1996)
She's the One (1996)
Wizard of Oz, The (1939)
Citizen Kane (1941)
Adventures of Robin Hood, The (1938)
Ghost and Mrs. Muir, The (1947)
Mr. Smith Goes to Washington (1939)
Escape to Witch Mountain (1975)
Winnie the Pooh and the Blustery Day (1968)
Three Caballeros, The (1945)
Sword in the Stone, The (1963)
Dumbo (1941)
Pete's Dragon (1977)
Bedknobs and Broomsticks (1971)
Alice in Wonderland (1951)
That Thing You Do! (1996)
Ghost and the Darkness, The (1996)
Swingers (1996)
Willy Wonka & the Chocolate Factory (1971)
Monty Python's Life of Brian (1979)
Reservoir Dogs (1992)
Platoon (1986)
Basic Instinct (1992)
E.T. the Extra-Terrestrial (1982)
Abyss, The (1989)
Monty Python and the Holy Grail (1975)
Star Wars: Episode V - The Empire Strikes Back (1980)
Princess Bride, The (1987)
Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
Clockwork Orange, A (1971)
Apocalypse Now (1979)
Star Wars: Episode VI - Return of the Jedi (1983)
Goodfellas (1990)
Alien (1979)
Psycho (1960)
Blues Brothers, The (1980)
Full Metal Jacket (1987)
Henry V (1989)
Quiet Man, The (1952)
Terminator, The (1984)
Duck Soup (1933)
Shining, The (1980)
Groundhog Day (1993)
Back to the Future (1985)
Highlander (1986)
Young Frankenstein (1974)
Fantasia (1940)
Indiana Jones and the Last Crusade (1989)
Pink Floyd: The Wall (1982)
Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922)
Batman Returns (1992)
Sneakers (1992)
Last of the Mohicans, The (1992)
McHale's Navy (1997)
Best Men (1997)
Grosse Pointe Blank (1997)
Austin Powers: International Man of Mystery (1997)
Con Air (1997)
Face/Off (1997)
Men in Black (a.k.a. MIB) (1997)
Conan the Barbarian (1982)
L.A. Confidential (1997)
Kiss the Girls (1997)
Game, The (1997)
I Know What You Did Last Summer (1997)
Starship Troopers (1997)
Big Lebowski, The (1998)
Wedding Singer, The (1998)
Welcome to Woop-Woop (1997)
Newton Boys, The (1998)
Wild Things (1998)
Small Soldiers (1998)
All Quiet on the Western Front (1930)
Rocky (1976)
Labyrinth (1986)
Lethal Weapon (1987)
Goonies, The (1985)
Back to the Future Part III (1990)
Bambi (1942)
Saving Private Ryan (1998)
Black Cauldron, The (1985)
Flight of the Navigator (1986)
Great Mouse Detective, The (1986)
Honey, I Shrunk the Kids (1989)
Negotiator, The (1998)
Jungle Book, The (1967)
Rescuers, The (1977)
Return to Oz (1985)
Rocketeer, The (1991)
Sleeping Beauty (1959)
Song of the South (1946)
Tron (1982)
Indiana Jones and the Temple of Doom (1984)
Lord of the Rings, The (1978)
Charlotte's Web (1973)
Secret of NIMH, The (1982)
American Tail, An (1986)
Legend (1985)
NeverEnding Story, The (1984)
Beetlejuice (1988)
Willow (1988)
Toys (1992)
Few Good Men, A (1992)
Rush Hour (1998)
Edward Scissorhands (1990)
American History X (1998)
I Still Know What You Did Last Summer (1998)
Enemy of the State (1998)
King Kong (1933)
Very Bad Things (1998)
Psycho (1998)
Rushmore (1998)
Romancing the Stone (1984)
Young Sherlock Holmes (1985)
Thin Red Line, The (1998)
Howard the Duck (1986)
Texas Chainsaw Massacre, The (1974)
Crocodile Dundee (1986)
¡Three Amigos! (1986)
20 Dates (1998)
Office Space (1999)
Logan's Run (1976)
Planet of the Apes (1968)
Lock, Stock & Two Smoking Barrels (1998)
Matrix, The (1999)
Go (1999)
SLC Punk! (1998)
Dick Tracy (1990)
Mummy, The (1999)
Star Wars: Episode I - The Phantom Menace (1999)
Superman (1978)
Superman II (1980)
Dracula (1931)
Frankenstein (1931)
Wolf Man, The (1941)
Rocky Horror Picture Show, The (1975)
Run Lola Run (Lola rennt) (1998)
South Park: Bigger, Longer and Uncut (1999)
Ghostbusters (a.k.a. Ghost Busters) (1984)
Iron Giant, The (1999)
Big (1988)
13th Warrior, The (1999)
American Beauty (1999)
Excalibur (1981)
Gulliver's Travels (1939)
Total Recall (1990)
Dirty Dozen, The (1967)
Goldfinger (1964)
From Russia with Love (1963)
Dr. No (1962)
Fight Club (1999)
RoboCop (1987)
Who Framed Roger Rabbit? (1988)
Live and Let Die (1973)
Thunderball (1965)
Being John Malkovich (1999)
Spaceballs (1987)
Robin Hood (1973)
Dogma (1999)
Messenger: The Story of Joan of Arc, The (1999)
Longest Day, The (1962)
Green Mile, The (1999)
Easy Rider (1969)
Talented Mr. Ripley, The (1999)
Encino Man (1992)
Sister Act (1992)
Wayne's World (1992)
Scream 3 (2000)
JFK (1991)
Teenage Mutant Ninja Turtles II: The Secret of the Ooze (1991)
Teenage Mutant Ninja Turtles III (1993)
Red Dawn (1984)
Good Morning, Vietnam (1987)
Grumpy Old Men (1993)
Ladyhawke (1985)
Hook (1991)
Predator (1987)
Gladiator (2000)
Road Trip (2000)
Man with the Golden Gun, The (1974)
Blazing Saddles (1974)
Mad Max (1979)
Road Warrior, The (Mad Max 2) (1981)
Shaft (1971)
Big Trouble in Little China (1986)
Shaft (2000)
X-Men (2000)
What About Bob? (1991)
Transformers: The Movie (1986)
M*A*S*H (a.k.a. MASH) (1970)
</code>
## Exercise 4
**List 15 movies of 'Documentary' and 'Comedy' genres and sort them by title descending.**
The correct Cypher query is:
```
MATCH (m:Movie)-[:OF_GENRE]->(:Genre {name: "Documentary"})
MATCH (m)-[:OF_GENRE]->(:Genre {name: "Comedy"})
RETURN m.title
ORDER BY m.title DESC
LIMIT 15;
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:_____no_output_____
<code>
movies = (
match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g1")
.where("g1.name", "=", "Documentary")
.match()
.node(labels="Movie", variable="m")
.to("OF_GENRE")
.node(labels="Genre", variable="g2")
.where("g2.name", "=", "Comedy")
.return_({"m.title": "movie"})
.order_by("m.title DESC")
.limit(15)
.execute()
)
results = list(movies)
for result in results:
print(result["movie"])What the #$*! Do We Know!? (a.k.a. What the Bleep Do We Know!?) (2004)
Union: The Business Behind Getting High, The (2007)
Super Size Me (2004)
Super High Me (2007)
Secret Policeman's Other Ball, The (1982)
Richard Pryor Live on the Sunset Strip (1982)
Religulous (2008)
Paper Heart (2009)
Original Kings of Comedy, The (2000)
Merci Patron ! (2016)
Martin Lawrence Live: Runteldat (2002)
Kevin Hart: Laugh at My Pain (2011)
Jeff Ross Roasts Criminals: Live at Brazos County Jail (2015)
Jackass: The Movie (2002)
Jackass Number Two (2006)
</code>
## Exercise 5
**Find out the minimum rating of the 'Star Wars: Episode I - The Phantom Menace (1999)' movie.**
The correct Cypher query is:
```
MATCH (:User)-[r:RATED]->(:Movie {title: 'Star Wars: Episode I - The Phantom Menace (1999)'})
RETURN min(r.rating);
```
You can try it out in Memgraph Lab at `localhost:3000`.
With GQLAlchemy's query builder, the solution is:_____no_output_____
<code>
rating = (
match()
.node(labels="User")
.to("RATED", variable="r")
.node(labels="Movie", variable="m")
.where("m.title", "=", "Star Wars: Episode I - The Phantom Menace (1999)")
.return_({"min(r.rating)": "min_rating"})
.execute()
)
results = list(rating)
for result in results:
print(result["min_rating"])0.5
</code>
And that's it! If you have any issues with this notebook, feel free to open an issue on the [GitHub repository](https://github.com/pyladiesams/graphdbs-gqlalchemy-beginner-mar2022), or [join the Discord server](https://discord.gg/memgraph) and get your answer instantly. If you are interested in the Cypher query language and want to learn more, sign up for the free [Cypher Email Course](https://memgraph.com/learn-cypher-query-language)._____no_output_____
| {
"repository": "pyladiesams/graphdatabases-gqlalchemy-beginner-mar2022",
"path": "solutions/gqlalchemy-solutions.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 4,
"size": 16541,
"hexsha": "d062ad7c364d8195bd0661953d59fa2e49a6751f",
"max_line_length": 445,
"avg_line_length": 30.2948717949,
"alphanum_fraction": 0.5175624207
} |
# Notebook from MichielStock/SelectedTopicsOptimization
Path: Chapters/06.MinimumSpanningTrees/Chapter6.ipynb
# Minimum spanning trees
*Selected Topics in Mathematical Optimization*
**Michiel Stock** ([email]([email protected]))
_____no_output_____
<code>
import matplotlib.pyplot as plt
%matplotlib inline
from minimumspanningtrees import red, green, blue, orange, yellow_____no_output_____
</code>
## Graphs in python
Consider the following example graph:
_____no_output_____This graph can be represented using an *adjacency list*. We do this using a `dict`. Every vertex is a key with the adjacent vertices given as a `set` containing tuples `(weight, neighbor)`. The weight is first because this makes it easy to compare the weights of two edges. Note that for every ingoing edge there is also an outgoing edge: this is an undirected graph._____no_output_____
<code>
graph = {
'A' : set([(2, 'B'), (3, 'D')]),
'B' : set([(2, 'A'), (1, 'C'), (2, 'E')]),
'C' : set([(1, 'B'), (2, 'D'), (1, 'E')]),
'D' : set([(2, 'C'), (3, 'A'), (3, 'E')]),
'E' : set([(2, 'B'), (1, 'C'), (3, 'D')])
}_____no_output_____
</code>
Sometimes we will use an *edge list*, i.e. a list of (weighted) edges. This is often a more compact way of storing a graph. The edge list is given below. Note that, again, every edge appears twice: both the in- and the outgoing edge are included._____no_output_____
<code>
edges = [
(2, 'B', 'A'),
(3, 'D', 'A'),
(2, 'C', 'D'),
(3, 'A', 'D'),
(3, 'E', 'D'),
(2, 'B', 'E'),
(3, 'D', 'E'),
(1, 'C', 'E'),
(2, 'E', 'B'),
(2, 'A', 'B'),
(1, 'C', 'B'),
(1, 'E', 'C'),
(1, 'B', 'C'),
(2, 'D', 'C')]_____no_output_____
</code>
We can easily turn one representation into the other (with a time complexity proportional to the number of edges) using the provided functions `edges_to_adj_list` and `adj_list_to_edges`._____no_output_____
<code>
from minimumspanningtrees import edges_to_adj_list, adj_list_to_edges_____no_output_____adj_list_to_edges(graph)_____no_output_____edges_to_adj_list(edges)_____no_output_____
</code>
## Disjoint-set data structure
Implementing an algorithm for finding the minimum spanning tree is fairly straightforward. The only bottleneck is that the algorithm requires a disjoint-set data structure to keep track of a set partitioned into a number of disjoint subsets.
For example, consider the following initial set of eight elements.

We decide to group elements A, B and C together in a subset and F and G in another subset.

The disjoint-set data structure supports the following operations:
- **Find**: check which subset an element is in. It is typically used to check whether two objects are in the same subset;
- **Union**: merge two subsets into a single subset.
A Python implementation of a disjoint-set is available using a union-set forest. A simple example will make everything clear!_____no_output_____
<code>
from union_set_forest import USF
animals = ['mouse', 'bat', 'robin', 'trout', 'seagull', 'hummingbird',
'salmon', 'goldfish', 'hippopotamus', 'whale', 'sparrow']
union_set_forest = USF(animals)
# group mammals together
union_set_forest.union('mouse', 'bat')
union_set_forest.union('mouse', 'hippopotamus')
union_set_forest.union('whale', 'bat')
# group birds together
union_set_forest.union('robin', 'seagull')
union_set_forest.union('seagull', 'sparrow')
union_set_forest.union('seagull', 'hummingbird')
union_set_forest.union('robin', 'hummingbird')
# group fishes together
union_set_forest.union('goldfish', 'salmon')
union_set_forest.union('trout', 'salmon')_____no_output_____# mouse and whale in same subset?
print(union_set_forest.find('mouse') == union_set_forest.find('whale'))_____no_output_____# robin and salmon in the same subset?
print(union_set_forest.find('robin') == union_set_forest.find('salmon'))_____no_output_____
</code>
## Heap queue
A heap queue can be used to find the minimum of a changing list without having to re-sort the list after every update._____no_output_____
<code>
from heapq import heapify, heappop, heappush
heap = [(5, 'A'), (3, 'B'), (2, 'C'), (7, 'D')]
heapify(heap) # turn into a heap
print(heap)_____no_output_____# return the item with the lowest value while retaining the heap property
print(heappop(heap))_____no_output_____print(heap)_____no_output_____# add a new item and retain the heap property
heappush(heap, (4, 'E'))
print(heap)_____no_output_____
</code>
## Prim's algorithm
Prim's algorithm starts with a single vertex and adds $|V|-1$ edges to it, always taking the next edge with minimal weight that connects a vertex on the MST to a vertex not yet in the MST._____no_output_____
<code>
def prim(vertices, edges, start):
"""
Prim's algorithm for finding a minimum spanning tree.
Inputs :
- vertices : a set of the vertices of the Graph
- edges : a list of weighted edges (e.g. (0.7, 'A', 'B') for an
edge from node A to node B with weight 0.7)
- start : a vertex to start with
Output:
- edges : a minimum spanning tree represented as a list of edges
- total_cost : total cost of the tree
"""
adj_list = edges_to_adj_list(edges) # easier using an adjacency list
... # to complete
return mst_edges, total_cost_____no_output_____
</code>
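One possible way to fill in the body above (a sketch, assuming the `{vertex: {(weight, neighbor), ...}}` adjacency-list format shown earlier):

```
from heapq import heappush, heappop

def prim(vertices, edges, start):
    adj_list = edges_to_adj_list(edges)
    mst_edges, total_cost = [], 0
    visited = {start}
    heap = []
    for weight, neighbor in adj_list[start]:
        heappush(heap, (weight, start, neighbor))
    while heap and len(visited) < len(vertices):
        weight, frm, to = heappop(heap)
        if to in visited:
            continue  # this edge would close a cycle
        visited.add(to)
        mst_edges.append((weight, frm, to))
        total_cost += weight
        for w, neighbor in adj_list[to]:
            if neighbor not in visited:
                heappush(heap, (w, to, neighbor))
    return mst_edges, total_cost
```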
## Kruskal's algorithm
Kruskal's algorithm is a very simple algorithm to find the minimum spanning tree. The main idea is to start with an initial 'forest' of the individual nodes of the graph. In each step of the algorithm we add the edge with the smallest possible weight that connects two disjoint trees in the forest. This process is continued until we have a single tree, which is a minimum spanning tree, or until all edges are considered. In the latter case, the algorithm returns a minimum spanning forest._____no_output_____
<code>
from minimumspanningtrees import kruskal_____no_output_____def kruskal(vertices, edges):
"""
Kruskal's algorithm for finding a minimum spanning tree.
Inputs :
- vertices : a set of the vertices of the Graph
- edges : a list of weighted edges (e.g. (0.7, 'A', 'B') for an
edge from node A to node B with weight 0.7)
Output:
- edges : a minumum spanning tree represented as a list of edges
- total_cost : total cost of the tree
"""
... # to complete
return mst_edges, total_cost_____no_output_____
</code>
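A possible completion (a sketch, assuming the `USF` union-set forest shown earlier, which is taken here to accept any iterable of vertices and to provide `find` and `union`):

```
def kruskal(vertices, edges):
    forest = USF(vertices)
    mst_edges, total_cost = [], 0
    for weight, u, v in sorted(edges):  # consider edges from light to heavy
        if forest.find(u) != forest.find(v):
            forest.union(u, v)
            mst_edges.append((weight, u, v))
            total_cost += weight
    return mst_edges, total_cost
```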
<code>
print(vertices)_____no_output_____print(edges[:5])_____no_output_____# compute the minimum spanning tree of the ticket to ride data set
..._____no_output_____
</code>
## Clustering
Minimum spanning trees on a distance graph can be used to cluster a data set._____no_output_____
<code>
# import features and distance
from clustering import X, D_____no_output_____fig, ax = plt.subplots()
ax.scatter(X[:,0], X[:,1], color=green)_____no_output_____# cluster the data based on the distance_____no_output_____
</code>
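One way to fill in the clustering step above is single-linkage clustering via the minimum spanning tree: build an MST of the pairwise distances, drop the $k-1$ heaviest MST edges, and take the connected components as clusters. A sketch, assuming `D` is a square NumPy distance matrix and reusing the `kruskal` and `USF` helpers from above (with edges as `(weight, node, node)` tuples):

```
n = D.shape[0]
distance_edges = [(D[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
mst_edges, _ = kruskal(set(range(n)), distance_edges)

k = 2  # desired number of clusters
kept_edges = sorted(mst_edges)[:-(k - 1)]  # drop the k-1 heaviest MST edges

forest = USF(list(range(n)))
for weight, i, j in kept_edges:
    forest.union(i, j)
labels = [forest.find(i) for i in range(n)]

label_ids = {rep: c for c, rep in enumerate(set(labels))}
fig, ax = plt.subplots()
ax.scatter(X[:, 0], X[:, 1], c=[label_ids[l] for l in labels])
```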
| {
"repository": "MichielStock/SelectedTopicsOptimization",
"path": "Chapters/06.MinimumSpanningTrees/Chapter6.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": 22,
"size": 11587,
"hexsha": "d06303ee27ab4398b14cd8cdbe43733805679c26",
"max_line_length": 493,
"avg_line_length": 26.7598152425,
"alphanum_fraction": 0.5434538707
} |
# Notebook from emilynomura1/1030MidtermProject
Path: src/data-cleaning-final.ipynb
<code>
# Import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Read in data. If data is zipped, unzip the file and change file path accordingly
yelp = pd.read_csv("../yelp_academic_dataset_business.csv",
dtype={'attributes': str, 'postal_code': str}, low_memory=False)
# Reorder columns
# https://stackoverflow.com/questions/41968732/set-order-of-columns-in-pandas-dataframe
cols_to_order = ['name', 'stars', 'review_count', 'categories', 'city', 'state',
'postal_code', 'latitude', 'longitude', 'address']
new_cols = cols_to_order + (yelp.columns.drop(cols_to_order).tolist())
yelp = yelp[new_cols]
print(yelp.shape)
print(yelp.info())_____no_output_____# Remove entries with null in columns: name, categories, city, postal code
yelp = yelp[(pd.isna(yelp['name'])==False) &
(pd.isna(yelp['city'])==False) &
(pd.isna(yelp['categories'])==False) &
(pd.isna(yelp['postal_code'])==False)]
print(yelp.shape)_____no_output_____# Remove columns with <0.5% non-null values (<894) except BYOB=641 non-null
# and non-relevant columns
yelp = yelp.drop(yelp.columns[[6,9,17,26,31,33,34,37,38]], axis=1)
print(yelp.shape)_____no_output_____# Remove entries with < 1000 businesses in each state
state_counts = yelp['state'].value_counts()
yelp = yelp[~yelp['state'].isin(state_counts[state_counts < 1000].index)]
print(yelp.shape)_____no_output_____# Create new column of grouped star rating
conds = [
((yelp['stars'] == 1) | (yelp['stars'] == 1.5)),
((yelp['stars'] == 2) | (yelp['stars'] == 2.5)),
((yelp['stars'] == 3) | (yelp['stars'] == 3.5)),
((yelp['stars'] == 4) | (yelp['stars'] == 4.5)),
(yelp['stars'] == 5) ]
values = [1, 2, 3, 4, 5]
yelp['star-rating'] = np.select(conds, values)
print(yelp.shape)_____no_output_____# Convert 'hours' columns to total hours open that day for each day column
from datetime import timedelta, time
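# The seven per-day blocks below all repeat the same parsing steps. A hypothetical
# helper like this one (a sketch, not called anywhere below) could compute the same
# 'Day.hrs.open' columns in a loop, e.g.:
# for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:
#     yelp[day + '.hrs.open'] = day_hours_open(yelp, day)
def day_hours_open(df, day):
    """Parse 'hours.<day>' strings like '9:0-17:0' into whole hours open (0 if missing)."""
    def to_delta(value):
        parts = str(value).split(':')
        if len(parts) == 1:  # NaN / missing opening hours
            return timedelta()
        return timedelta(hours=int(parts[0]), minutes=int(parts[1]))
    split = df['hours.' + day].str.split('-', n=1, expand=True)
    start, end = split[0].apply(to_delta), split[1].apply(to_delta)
    return (end - start).apply(lambda diff: diff.seconds // 3600)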
# Monday ---------------------------------------------------------
yelp[['hours.Monday.start', 'hours.Monday.end']] = yelp['hours.Monday'].str.split('-', 1, expand=True)
# Monday start time
hr_min = []
for row in yelp['hours.Monday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el]) #change elements in list to int
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Monday.start'] = time_obj
# Monday end time
hr_min = []
for row in yelp['hours.Monday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Monday.end'] = time_obj
# Create column of time difference
yelp['Monday.hrs.open'] = yelp['hours.Monday.end'] - yelp['hours.Monday.start']
# Convert seconds to minutes
hour_calc = []
for ob in yelp['Monday.hrs.open']:
hour_calc.append(ob.seconds//3600) #convert seconds to hours for explainability
yelp['Monday.hrs.open'] = hour_calc
# Tuesday -------------------------------------------------------------
yelp[['hours.Tuesday.start', 'hours.Tuesday.end']] = yelp['hours.Tuesday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Tuesday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Tuesday.start'] = time_obj
hr_min = []
for row in yelp['hours.Tuesday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Tuesday.end'] = time_obj
yelp['Tuesday.hrs.open'] = yelp['hours.Tuesday.end'] - yelp['hours.Tuesday.start']
hour_calc = []
for ob in yelp['Tuesday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Tuesday.hrs.open'] = hour_calc
# Wednesday ---------------------------------------------------------
yelp[['hours.Wednesday.start', 'hours.Wednesday.end']] = yelp['hours.Wednesday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Wednesday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Wednesday.start'] = time_obj
hr_min = []
for row in yelp['hours.Wednesday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Wednesday.end'] = time_obj
yelp['Wednesday.hrs.open'] = yelp['hours.Wednesday.end'] - yelp['hours.Wednesday.start']
hour_calc = []
for ob in yelp['Wednesday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Wednesday.hrs.open'] = hour_calc
# Thursday --------------------------------------------------------------------
yelp[['hours.Thursday.start', 'hours.Thursday.end']] = yelp['hours.Thursday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Thursday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Thursday.start'] = time_obj
hr_min = []
for row in yelp['hours.Thursday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Thursday.end'] = time_obj
yelp['Thursday.hrs.open'] = yelp['hours.Thursday.end'] - yelp['hours.Thursday.start']
hour_calc = []
for ob in yelp['Thursday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Thursday.hrs.open'] = hour_calc
# Friday -----------------------------------------------------------------------
yelp[['hours.Friday.start', 'hours.Friday.end']] = yelp['hours.Friday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Friday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Friday.start'] = time_obj
hr_min = []
for row in yelp['hours.Friday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Friday.end'] = time_obj
yelp['Friday.hrs.open'] = yelp['hours.Friday.end'] - yelp['hours.Friday.start']
hour_calc = []
for ob in yelp['Friday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Friday.hrs.open'] = hour_calc
# Saturday ------------------------------------------------------------------------
yelp[['hours.Saturday.start', 'hours.Saturday.end']] = yelp['hours.Saturday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Saturday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Saturday.start'] = time_obj
hr_min = []
for row in yelp['hours.Saturday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Saturday.end'] = time_obj
yelp['Saturday.hrs.open'] = yelp['hours.Saturday.end'] - yelp['hours.Saturday.start']
hour_calc = []
for ob in yelp['Saturday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Saturday.hrs.open'] = hour_calc
# Sunday ----------------------------------------------------------------------
yelp[['hours.Sunday.start', 'hours.Sunday.end']] = yelp['hours.Sunday'].str.split('-', 1, expand=True)
hr_min = []
for row in yelp['hours.Sunday.start']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Sunday.start'] = time_obj
hr_min = []
for row in yelp['hours.Sunday.end']:
hr_min.append(str(row).split(':'))
new_el = []
for el in hr_min:
if len(el) == 1:
new_el.append([0,0])
else:
new_el.append([int(i) for i in el])
time_obj = []
for el_split in new_el:
time_obj.append(timedelta(hours=el_split[0], minutes=el_split[1]))
yelp['hours.Sunday.end'] = time_obj
yelp['Sunday.hrs.open'] = yelp['hours.Sunday.end'] - yelp['hours.Sunday.start']
hour_calc = []
for ob in yelp['Sunday.hrs.open']:
hour_calc.append(ob.seconds//3600)
yelp['Sunday.hrs.open'] = hour_calc_____no_output_____# Remove old target variable (stars) and
# unnecessary time columns that were created. Only keep 'day.hrs.open' columns
yelp = yelp.drop(yelp.columns[[1,10,11,12,16,18,41,48,52,53,55,56,
58,59,61,62,64,65,67,68,70,71]], axis=1)
print(yelp.shape)_____no_output_____# Delete columns with unworkable form (dict)
del yelp['attributes.BusinessParking']
del yelp['attributes.Music']
del yelp['attributes.Ambience']
del yelp['attributes.GoodForKids']
del yelp['attributes.RestaurantsDelivery']
del yelp['attributes.BestNights']
del yelp['attributes.HairSpecializesIn']
del yelp['attributes.GoodForMeal']_____no_output_____# Look at final DF before saving
print(yelp.info())_____no_output_____# Save as CSV for faster loading -------------------------------------------------
yelp.to_csv('/Data/yelp-clean.csv')_____no_output_____
</code>
| {
"repository": "emilynomura1/1030MidtermProject",
"path": "src/data-cleaning-final.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 15261,
"hexsha": "d063570e27d884ad1284ea042d5745f573a85718",
"max_line_length": 120,
"avg_line_length": 36.9515738499,
"alphanum_fraction": 0.5197562414
} |
# Notebook from HypoChloremic/python_learning
Path: learning/matplot/animation/basic_animation.ipynb
<code>
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib
from IPython.display import HTML
_____no_output_____def update_line(num, data, line):
print(num)
line.set_data(data[..., :num])
return line,_____no_output_____plt.rcParams['animation.writer'] = 'ffmpeg'
print(matplotlib.animation.writers.list())['pillow', 'ffmpeg', 'ffmpeg_file', 'html']
fig1 = plt.figure()
# Fixing random state for reproducibility
np.random.seed(19680801)
data = np.random.rand(2, 25)
l, = plt.plot([], [])  # x and y are positional arguments; start with an empty line that update_line fills in
plt.xlim(0, 1)
plt.ylim(0, 1)
line_ani = animation.FuncAnimation(fig1, update_line, 25, fargs=(data, l),
interval=50, blit=True)
HTML(line_ani.to_html5_video())_____no_output_____help(plt.plot)Help on function plot in module matplotlib.pyplot:
plot(*args, scalex=True, scaley=True, data=None, **kwargs)
Plot y versus x as lines and/or markers.
Call signatures::
plot([x], y, [fmt], *, data=None, **kwargs)
plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
The coordinates of the points or line nodes are given by *x*, *y*.
The optional parameter *fmt* is a convenient way for defining basic
formatting like color, marker and linestyle. It's a shortcut string
notation described in the *Notes* section below.
>>> plot(x, y) # plot x and y using default line style and color
>>> plot(x, y, 'bo') # plot x and y using blue circle markers
>>> plot(y) # plot y using x as index array 0..N-1
>>> plot(y, 'r+') # ditto, but with red plusses
You can use `.Line2D` properties as keyword arguments for more
control on the appearance. Line properties and *fmt* can be mixed.
The following two calls yield identical results:
>>> plot(x, y, 'go--', linewidth=2, markersize=12)
>>> plot(x, y, color='green', marker='o', linestyle='dashed',
... linewidth=2, markersize=12)
When conflicting with *fmt*, keyword arguments take precedence.
**Plotting labelled data**
There's a convenient way for plotting objects with labelled data (i.e.
data that can be accessed by index ``obj['y']``). Instead of giving
the data in *x* and *y*, you can provide the object in the *data*
parameter and just give the labels for *x* and *y*::
>>> plot('xlabel', 'ylabel', data=obj)
All indexable objects are supported. This could e.g. be a `dict`, a
`pandas.DataFame` or a structured numpy array.
**Plotting multiple sets of data**
There are various ways to plot multiple sets of data.
- The most straight forward way is just to call `plot` multiple times.
Example:
>>> plot(x1, y1, 'bo')
>>> plot(x2, y2, 'go')
- Alternatively, if your data is already a 2d array, you can pass it
directly to *x*, *y*. A separate data set will be drawn for every
column.
Example: an array ``a`` where the first column represents the *x*
values and the other columns are the *y* columns::
>>> plot(a[0], a[1:])
- The third way is to specify multiple sets of *[x]*, *y*, *[fmt]*
groups::
>>> plot(x1, y1, 'g^', x2, y2, 'g-')
In this case, any additional keyword argument applies to all
datasets. Also this syntax cannot be combined with the *data*
parameter.
By default, each line is assigned a different style specified by a
'style cycle'. The *fmt* and line property parameters are only
necessary if you want explicit deviations from these defaults.
Alternatively, you can also change the style cycle using
:rc:`axes.prop_cycle`.
Parameters
----------
x, y : array-like or scalar
The horizontal / vertical coordinates of the data points.
*x* values are optional and default to `range(len(y))`.
Commonly, these parameters are 1D arrays.
They can also be scalars, or two-dimensional (in that case, the
columns represent separate data sets).
These arguments cannot be passed as keywords.
fmt : str, optional
A format string, e.g. 'ro' for red circles. See the *Notes*
section for a full description of the format strings.
Format strings are just an abbreviation for quickly setting
basic line properties. All of these and more can also be
controlled by keyword arguments.
This argument cannot be passed as keyword.
data : indexable object, optional
An object with labelled data. If given, provide the label names to
plot in *x* and *y*.
.. note::
Technically there's a slight ambiguity in calls where the
second label is a valid *fmt*. `plot('n', 'o', data=obj)`
could be `plt(x, y)` or `plt(y, fmt)`. In such cases,
the former interpretation is chosen, but a warning is issued.
You may suppress the warning by adding an empty format string
`plot('n', 'o', '', data=obj)`.
Other Parameters
----------------
scalex, scaley : bool, optional, default: True
These parameters determined if the view limits are adapted to
the data limits. The values are passed on to `autoscale_view`.
**kwargs : `.Line2D` properties, optional
*kwargs* are used to specify properties like a line label (for
auto legends), linewidth, antialiasing, marker face color.
Example::
>>> plot([1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2)
>>> plot([1, 2, 3], [1, 4, 9], 'rs', label='line 2')
If you make multiple lines with one plot command, the kwargs
apply to all those lines.
Here is a list of available `.Line2D` properties:
Properties:
agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha: float or None
animated: bool
antialiased or aa: bool
clip_box: `.Bbox`
clip_on: bool
clip_path: Patch or (Path, Transform) or None
color or c: color
contains: callable
dash_capstyle: {'butt', 'round', 'projecting'}
dash_joinstyle: {'miter', 'round', 'bevel'}
dashes: sequence of floats (on/off ink in points) or (None, None)
data: (2, N) array or two 1D arrays
drawstyle or ds: {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default'
figure: `.Figure`
fillstyle: {'full', 'left', 'right', 'bottom', 'top', 'none'}
gid: str
in_layout: bool
label: object
linestyle or ls: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw: float
marker: marker style
markeredgecolor or mec: color
markeredgewidth or mew: float
markerfacecolor or mfc: color
markerfacecoloralt or mfcalt: color
markersize or ms: float
markevery: None or int or (int, int) or slice or List[int] or float or (float, float)
path_effects: `.AbstractPathEffect`
picker: float or callable[[Artist, Event], Tuple[bool, dict]]
pickradius: float
rasterized: bool or None
sketch_params: (scale: float, length: float, randomness: float)
snap: bool or None
solid_capstyle: {'butt', 'round', 'projecting'}
solid_joinstyle: {'miter', 'round', 'bevel'}
transform: `matplotlib.transforms.Transform`
url: str
visible: bool
xdata: 1D array
ydata: 1D array
zorder: float
Returns
-------
lines
A list of `.Line2D` objects representing the plotted data.
See Also
--------
scatter : XY scatter plot with markers of varying size and/or color (
sometimes also called bubble chart).
Notes
-----
**Format Strings**
A format string consists of a part for color, marker and line::
fmt = '[marker][line][color]'
Each of them is optional. If not provided, the value from the style
cycle is used. Exception: If ``line`` is given, but no ``marker``,
the data will be a line without markers.
Other combinations such as ``[color][marker][line]`` are also
supported, but note that their parsing may be ambiguous.
**Markers**
============= ===============================
character description
============= ===============================
``'.'`` point marker
``','`` pixel marker
``'o'`` circle marker
``'v'`` triangle_down marker
``'^'`` triangle_up marker
``'<'`` triangle_left marker
``'>'`` triangle_right marker
``'1'`` tri_down marker
``'2'`` tri_up marker
``'3'`` tri_left marker
``'4'`` tri_right marker
``'s'`` square marker
``'p'`` pentagon marker
``'*'`` star marker
``'h'`` hexagon1 marker
``'H'`` hexagon2 marker
``'+'`` plus marker
``'x'`` x marker
``'D'`` diamond marker
``'d'`` thin_diamond marker
``'|'`` vline marker
``'_'`` hline marker
============= ===============================
**Line Styles**
============= ===============================
character description
============= ===============================
``'-'`` solid line style
``'--'`` dashed line style
``'-.'`` dash-dot line style
``':'`` dotted line style
============= ===============================
Example format strings::
'b' # blue markers with default shape
'or' # red circles
'-g' # green solid line
'--' # dashed line with default color
'^k:' # black triangle_up markers connected by a dotted line
**Colors**
The supported color abbreviations are the single letter codes
============= ===============================
character color
============= ===============================
``'b'`` blue
``'g'`` green
``'r'`` red
``'c'`` cyan
``'m'`` magenta
``'y'`` yellow
``'k'`` black
``'w'`` white
============= ===============================
and the ``'CN'`` colors that index into the default property cycle.
If the color is the only part of the format string, you can
additionally use any `matplotlib.colors` spec, e.g. full names
(``'green'``) or hex strings (``'#008000'``).
</code>
| {
"repository": "HypoChloremic/python_learning",
"path": "learning/matplot/animation/basic_animation.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 25044,
"hexsha": "d063869573f5b3dfc578fc45bb7d1c7875fd50ea",
"max_line_length": 4768,
"avg_line_length": 61.5331695332,
"alphanum_fraction": 0.5927966778
} |
# Notebook from Switham1/PromoterArchitecture
Path: src/plotting/OpenChromatin_plotsold.ipynb
<code>
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from statsmodels.formula.api import ols
import researchpy as rp
from pingouin import kruskal
from pybedtools import BedTool_____no_output_____RootChomatin_bp_covered = '../../data/promoter_analysis/responsivepromotersRootOpenChrom.bp_covered.txt'
ShootChomatin_bp_covered = '../../data/promoter_analysis/responsivepromotersShootOpenChrom.bp_covered.txt'
RootShootIntersect_bp_covered = '../../data/promoter_analysis/responsivepromotersShootRootIntersectOpenChrom.bp_covered.txt'_____no_output_____def add_chr_linestart(input_location,output_location):
"""this function adds chr to the beginning of the line if it starts with a digit and saves a file"""
output = open(output_location, 'w') #make output file with write capability
#open input file
with open(input_location, 'r') as infile:
#iterate over lines in file
for line in infile:
line = line.strip() # removes hidden characters/spaces
if line[0].isdigit():
line = 'chr' + line #prepend chr to the beginning of line if starts with a digit
output.write(line + '\n') #output to new file
output.close()_____no_output_____def percent_coverage(bp_covered):
"""function to calculate the % coverage from the output file of bedtools coverage"""
coverage_df = pd.read_table(bp_covered, sep='\t', header=None)
col = ['chr','start','stop','gene','dot','strand','source', 'type', 'dot2', 'details', 'no._of_overlaps', 'no._of_bases_covered','promoter_length','fraction_bases_covered']
coverage_df.columns = col
#add % bases covered column
coverage_df['percentage_bases_covered'] = coverage_df.fraction_bases_covered * 100
#remove unnecessary columns
coverage_df_reduced_columns = coverage_df[['chr','start','stop','gene','strand', 'no._of_overlaps', 'no._of_bases_covered','promoter_length','fraction_bases_covered','percentage_bases_covered']]
return coverage_df_reduced_columns_____no_output_____root_coverage = percent_coverage(RootChomatin_bp_covered)_____no_output_____shoot_coverage = percent_coverage(ShootChomatin_bp_covered)_____no_output_____rootshootintersect_coverage = percent_coverage(RootShootIntersect_bp_covered)_____no_output_____sns.set(color_codes=True)
sns.set_style("whitegrid")_____no_output_____#distribution plot_____no_output_____dist_plot = root_coverage['percentage_bases_covered']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()
#save to file
#dist_plot_fig.savefig('../../data/plots/TFBS_coverage/all_genes_bp_covered_dist.pdf', format='pdf')
_____no_output_____dist_plot = shoot_coverage['percentage_bases_covered']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()
#save to file
#dist_plot_fig.savefig('../../data/plots/TFBS_coverage/all_genes_bp_covered_dist.pdf', format='pdf')
_____no_output_____dist_plot = rootshootintersect_coverage['percentage_bases_covered']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()
#save to file
#dist_plot_fig.savefig('../../data/plots/TFBS_coverage/all_genes_bp_covered_dist.pdf', format='pdf')
_____no_output_____
</code>
## constitutive vs variable_____no_output_____
<code>
def add_genetype(coverage):
"""function to add gene type to the df, and remove random genes"""
select_genes_file = '../../data/genomes/ara_housekeeping_list.out'
select_genes = pd.read_table(select_genes_file, sep='\t', header=None)
cols = ['gene','gene_type']
select_genes.columns = cols
merged = pd.merge(coverage, select_genes, on='gene')
merged_renamed = merged.copy()
merged_renamed.gene_type.replace('housekeeping','constitutive', inplace=True)
merged_renamed.gene_type.replace('highVar','variable', inplace=True)
merged_renamed.gene_type.replace('randCont','random', inplace=True)
# no_random = merged_renamed[merged_renamed.gene_type != 'random']
# no_random.reset_index(drop=True, inplace=True)
return merged_renamed_____no_output_____roots_merged = add_genetype(root_coverage)
no_random_roots = roots_merged[roots_merged.gene_type != 'random']_____no_output_____shoots_merged = add_genetype(shoot_coverage)
no_random_shoots = shoots_merged[shoots_merged.gene_type != 'random']_____no_output_____rootsshootsintersect_merged = add_genetype(rootshootintersect_coverage)
no_random_rootsshoots = rootsshootsintersect_merged[rootsshootsintersect_merged.gene_type != 'random']_____no_output_____#how many have open chromatin??
print('root openchromatin present:')
print(len(no_random_roots)-len(no_random_roots[no_random_roots.percentage_bases_covered == 0]))
print('shoot openchromatin present:')
print(len(no_random_shoots)-len(no_random_shoots[no_random_shoots.percentage_bases_covered == 0]))
print('root-shoot intersect openchromatin present:')
print(len(no_random_rootsshoots)-len(no_random_rootsshoots[no_random_rootsshoots.percentage_bases_covered == 0]))root openchromatin present:
164
shoot openchromatin present:
153
root-shoot intersect openchromatin present:
149
#how many have open chromatin??
print('root openchromatin present variable promoters:')
print(len(no_random_roots[no_random_roots.gene_type=='variable'])-len(no_random_roots[no_random_roots.gene_type=='variable'][no_random_roots[no_random_roots.gene_type=='variable'].percentage_bases_covered == 0]))
print('root openchromatin present constitutive promoters:')
print(len(no_random_roots[no_random_roots.gene_type=='constitutive'])-len(no_random_roots[no_random_roots.gene_type=='constitutive'][no_random_roots[no_random_roots.gene_type=='constitutive'].percentage_bases_covered == 0]))
print('shoot openchromatin present variable promoters:')
print(len(no_random_shoots[no_random_shoots.gene_type=='variable'])-len(no_random_shoots[no_random_shoots.gene_type=='variable'][no_random_shoots[no_random_shoots.gene_type=='variable'].percentage_bases_covered == 0]))
print('shoot openchromatin present constitutive promoters:')
print(len(no_random_shoots[no_random_shoots.gene_type=='constitutive'])-len(no_random_shoots[no_random_shoots.gene_type=='constitutive'][no_random_shoots[no_random_shoots.gene_type=='constitutive'].percentage_bases_covered == 0]))
print('root-shoot intersect openchromatin present variable promoters:')
print(len(no_random_rootsshoots[no_random_rootsshoots.gene_type=='variable'])-len(no_random_rootsshoots[no_random_rootsshoots.gene_type=='variable'][no_random_rootsshoots[no_random_rootsshoots.gene_type=='variable'].percentage_bases_covered == 0]))
print('root-shoot intersect openchromatin present constitutive promoters:')
print(len(no_random_rootsshoots[no_random_rootsshoots.gene_type=='constitutive'])-len(no_random_rootsshoots[no_random_rootsshoots.gene_type=='constitutive'][no_random_rootsshoots[no_random_rootsshoots.gene_type=='constitutive'].percentage_bases_covered == 0]))root openchromatin present variable promoters:
75
root openchromatin present constitutive promoters:
89
shoot openchromatin present variable promoters:
66
shoot openchromatin present constitutive promoters:
87
root-shoot intersect openchromatin present variable promoters:
63
root-shoot intersect openchromatin present constitutive promoters:
86
sns.catplot(x="gene_type", y="percentage_bases_covered", data=roots_merged) #.savefig('../../data/plots/TFBS_coverage/responsive_bp_covered.pdf', format='pdf')_____no_output_____sns.catplot(x="gene_type", y="percentage_bases_covered", data=shoots_merged) #.savefig('../../data/plots/TFBS_coverage/responsive_bp_covered.pdf', format='pdf')_____no_output_____#roots
plot = sns.catplot(x="gene_type", y="percentage_bases_covered", kind='box', data=no_random_roots)
#plot points
ax = sns.swarmplot(x="gene_type", y="percentage_bases_covered", data=no_random_roots, color=".25")
plt.ylabel('Percentage bases covered')
plt.xlabel('Gene type');
#ax.get_figure() #.savefig('../../data/plots/TFBS_coverage/responsive_bp_covered_boxplot.pdf', format='pdf')_____no_output_____#shoots
plot = sns.catplot(x="gene_type", y="percentage_bases_covered", kind='box', data=no_random_shoots)
#plot points
ax = sns.swarmplot(x="gene_type", y="percentage_bases_covered", data=no_random_shoots, color=".25")
plt.ylabel('Percentage bases covered')
plt.xlabel('Gene type');
#ax.get_figure() #.savefig('../../data/plots/TFBS_coverage/responsive_bp_covered_boxplot.pdf', format='pdf')_____no_output_____#roots-shoots intersect
plot = sns.catplot(x="gene_type", y="percentage_bases_covered", kind='box', data=no_random_rootsshoots)
#plot points
ax = sns.swarmplot(x="gene_type", y="percentage_bases_covered", data=no_random_rootsshoots, color=".25")
plt.ylabel('Percentage bases covered')
plt.xlabel('Gene type');
#ax.get_figure() #.savefig('../../data/plots/TFBS_coverage/responsive_bp_covered_boxplot.pdf', format='pdf')_____no_output_____#Get names of each promoter
def normality(input_proms):
"""function to test normality of data - returns test statistic, p-value"""
#Get names of each promoter
pd.Categorical(input_proms.gene_type)
names = input_proms.gene_type.unique()
# for name in names:
# print(name)
for name in names:
print('{}: {}'.format(name, stats.shapiro(input_proms.percentage_bases_covered[input_proms.gene_type == name])))
_____no_output_____def variance(input_proms):
"""function to test variance of data"""
#test variance
constitutive = input_proms[input_proms.gene_type == 'constitutive']
#reset indexes so residuals can be calculated later
constitutive.reset_index(inplace=True)
responsive = input_proms[input_proms.gene_type == 'variable']
responsive.reset_index(inplace=True)
control = input_proms[input_proms.gene_type == 'random']
control.reset_index(inplace=True)
print(stats.levene(constitutive.percentage_bases_covered, responsive.percentage_bases_covered))_____no_output_____normality(no_random_roots)variable: (0.8330899477005005, 3.833479311765586e-09)
constitutive: (0.7916173934936523, 1.8358696507458916e-10)
normality(no_random_shoots)variable: (0.8625870943069458, 4.528254393676434e-08)
constitutive: (0.8724747896194458, 1.1140339495341323e-07)
normality(no_random_rootsshoots)variable: (0.8546600937843323, 2.263117515610702e-08)
constitutive: (0.8711197376251221, 9.823354929494599e-08)
</code>
## Not normal_____no_output_____
<code>
variance(no_random_roots)LeveneResult(statistic=3.3550855113629137, pvalue=0.0685312309497174)
variance(no_random_shoots)LeveneResult(statistic=0.20460439034148425, pvalue=0.6515350841099911)
variance(no_random_rootsshoots)LeveneResult(statistic=0.00041366731166758155, pvalue=0.9837939970964911)
</code>
## Variances not significantly different (Levene's test)_____no_output_____
<code>
def kruskal_test(input_data):
"""function to do kruskal-wallis test on data"""
#print('\033[1m' +promoter + '\033[0m')
print(kruskal(data=input_data, dv='percentage_bases_covered', between='gene_type'))
#print('')_____no_output_____no_random_roots_____no_output_____kruskal_test(no_random_roots) Source ddof1 H p-unc
Kruskal gene_type 1 7.281793 0.006966
kruskal_test(no_random_shoots) Source ddof1 H p-unc
Kruskal gene_type 1 20.935596 0.000005
kruskal_test(no_random_rootsshoots) Source ddof1 H p-unc
Kruskal gene_type 1 22.450983 0.000002
</code>
## try gat enrichment_____no_output_____
<code>
#add Chr to linestart of chromatin bed files
add_chr_linestart('../../data/ATAC-seq/potter2018/Shoots_NaOH_peaks_all.bed','../../data/ATAC-seq/potter2018/Shoots_NaOH_peaks_all_renamed.bed')
add_chr_linestart('../../data/ATAC-seq/potter2018/Roots_NaOH_peaks_all.bed','../../data/ATAC-seq/potter2018/Roots_NaOH_peaks_all_renamed.bed')
add_chr_linestart('../../data/ATAC-seq/potter2018/intersectRootsShoots_PeaksInBoth.bed','../../data/ATAC-seq/potter2018/intersectRootsShoots_PeaksInBoth_renamed.bed')_____no_output_____#create a bed file containing all 100 constitutive/responsive promoters with the fourth column annotating whether it's constitutive or responsive
proms_file = '../../data/genes/constitutive-variable-random_100_each.csv'
promoters = pd.read_csv(proms_file)
promoters
cols2 = ['delete','promoter_AGI', 'gene_type']
promoters_df = promoters[['promoter_AGI','gene_type']]
promoters_no_random = promoters_df.copy()
#drop randCont rows
promoters_no_random = promoters_df[~(promoters_df.gene_type == 'randCont')]
promoters_no_random_____no_output_____#merge promoters with genetype selected
promoterbedfile = '../../data/FIMO/responsivepromoters.bed'
promoters_bed = pd.read_table(promoterbedfile, sep='\t', header=None)
cols = ['chr', 'start', 'stop', 'promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
promoters_bed.columns = cols
merged = pd.merge(promoters_bed,promoters_no_random, on='promoter_AGI')_____no_output_____#add gene_type to column3
merged = merged[['chr','start','stop','gene_type','promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']]_____no_output_____#write to bed file
promoter_file = '../../data/promoter_analysis/old1000bpproms_variable_constitutive_workspace.bed'
with open(promoter_file,'w') as f:
merged.to_csv(f,index=False,sep='\t',header=None)_____no_output_____# new_merged = merged.astype({'start': 'int'})
# new_merged = merged.astype({'stop': 'int'})
# new_merged = merged.astype({'chr': 'int'})_____no_output_____#add Chr to linestart of promoter bed file
add_chr_linestart('../../data/promoter_analysis/old1000bpproms_variable_constitutive_workspace.bed','../../data/promoter_analysis/old1000bpproms_variable_constitutive_workspace_renamed.bed')_____no_output_____#create separate variable and constitutive and gat workspace
promoter_file_renamed = '../../data/promoter_analysis/old1000bpproms_variable_constitutive_workspace_renamed.bed'
promoters = pd.read_table(promoter_file_renamed, sep='\t', header=None)
#make a new gat workspace file with all promoters (first 3 columns)
bed = BedTool.from_dataframe(promoters[[0,1,2]]).saveas('../../data/promoter_analysis/chromatin/variable_constitutive_promoters_1000bp_workspace.bed')
#select only variable promoters
variable_promoters = promoters[promoters[3] == 'highVar']
sorted_variable = variable_promoters.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_variable).saveas('../../data/promoter_analysis/chromatin/variable_promoters_1000bp.bed')
#make a constitutive only file
constitutive_promoters = promoters[promoters[3] == 'housekeeping']
sorted_constitutive = constitutive_promoters.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_constitutive).saveas('../../data/promoter_analysis/chromatin/constitutive_promoters_1000bp.bed')_____no_output_____
</code>
## now I will do the plots with non-overlapping promoters including the 5'UTR_____no_output_____
<code>
#merge promoters with genetype selected
promoter_UTR = '../../data/FIMO/non-overlapping_includingbidirectional_all_genes/promoters_5UTR_renamedChr.bed'
promoters_bed = pd.read_table(promoter_UTR, sep='\t', header=None)
cols = ['chr', 'start', 'stop', 'promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
promoters_bed.columns = cols
merged = pd.merge(promoters_bed,promoters_no_random, on='promoter_AGI')_____no_output_____#how many constitutive genes left after removed/shortened overlapping
len(merged[merged.gene_type == 'housekeeping'])_____no_output_____#how many variable genes left after removed/shortened overlapping
len(merged[merged.gene_type == 'highVar'])_____no_output_____merged['length'] = (merged.start - merged.stop).abs()
merged.sort_values('length',ascending=True)_____no_output_____#plot of lengths
dist_plot = merged['length']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()_____no_output_____#remove 2 genes from constitutive group so equal sample size to variable
#random sample of 98, using seed 1
merged[merged.gene_type == 'housekeeping'] = merged[merged.gene_type == 'housekeeping'].sample(98, random_state=1)_____no_output_____#drop rows with at least 2 NaNs
merged = merged.dropna(thresh=2)_____no_output_____merged_____no_output_____#write to bed file so can run OpenChromatin_coverage.py
new_promoter_file = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive.bed'
cols = ['chr', 'start', 'stop', 'promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
#remove trailing decimal .0 from start and stop
merged = merged.astype({'start': 'int'})
merged = merged.astype({'stop': 'int'})
merged = merged.astype({'chr': 'int'})
merged_coverage = merged[cols]
with open(new_promoter_file,'w') as f:
merged_coverage.to_csv(f,index=False,sep='\t',header=None)_____no_output_____#write to bed file so can run gat
new_promoter_file_gat = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_gat.bed'
cols_gat = ['chr', 'start', 'stop', 'gene_type','promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
merged_gat = merged[cols_gat]
with open(new_promoter_file_gat,'w') as f:
merged_gat.to_csv(f,index=False,sep='\t',header=None)
_____no_output_____#Read in new files
RootChomatin_bp_covered = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutiveRootOpenChrom.bp_covered.txt'
ShootChomatin_bp_covered = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutiveShootOpenChrom.bp_covered.txt'
RootShootIntersect_bp_covered = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutiveShootRootIntersectOpenChrom.bp_covered.txt'_____no_output_____root_coverage = percent_coverage(RootChomatin_bp_covered)
shoot_coverage = percent_coverage(ShootChomatin_bp_covered)
rootshootintersect_coverage = percent_coverage(RootShootIntersect_bp_covered)_____no_output_____#add Chr to linestart of promoter bed file
add_chr_linestart('../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_gat.bed','../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_gat_renamed.bed')_____no_output_____#create separate variable and constitutive and gat workspace
promoter_file_renamed = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_gat_renamed.bed'
promoters = pd.read_table(promoter_file_renamed, sep='\t', header=None)
#make a new gat workspace file with all promoters (first 3 columns)
bed = BedTool.from_dataframe(promoters[[0,1,2]]).saveas('../../data/promoter_analysis/chromatin/non-overlapping_includingbidirectional_variable_constitutive_workspace.bed')
#select only variable promoters
variable_promoters = promoters[promoters[3] == 'highVar']
sorted_variable = variable_promoters.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_variable).saveas('../../data/promoter_analysis/chromatin/non-overlapping_includingbidirectional_variable_promoters.bed')
#make a constitutive only file
constitutive_promoters = promoters[promoters[3] == 'housekeeping']
sorted_constitutive = constitutive_promoters.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_constitutive).saveas('../../data/promoter_analysis/chromatin/non-overlapping_includingbidirectional_constitutive_promoters.bed')_____no_output_____#show distribution of the distance from the closest end of the open chromatin peak to the ATG (if overlapping already then distance is 0)
root_peaks_bed = '../../data/ATAC-seq/potter2018/Roots_NaOH_peaks_all_renamed.bed'
shoot_peaks_bed = '../../data/ATAC-seq/potter2018/Shoots_NaOH_peaks_all_renamed.bed'
rootshootintersect_peaks_bed = '../../data/ATAC-seq/potter2018/intersectRootsShoots_PeaksInBoth_renamed.bed'
promoters_bed = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_renamed.bed'
promoter_openchrom_intersect = '../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_chromintersect.bed'_____no_output_____add_chr_linestart('../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive.bed','../../data/promoter_analysis/non-overlapping_includingbidirectional_variable_constitutive_renamed.bed')_____no_output_____def distr_distance_ATG(peaks_bed, promoter_bed, output_file):
"""function to show the distribution of the distance rom the closest end
of the open chromatin peak to the ATG (if overlapping already then distance is 0)"""
# peaks = pd.read_table(peaks_bed, sep='\t', header=None)
# cols = ['chr','start', 'stop']
# peaks.columns = cols
# promoters = pd.read_table(promoter_bed, sep='\t', header=None)
# cols_proms = ['chr', 'start', 'stop', 'gene_type','promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
# promoters.columns = cols_proms
proms = BedTool(promoter_bed) #read in files using BedTools
peaks = BedTool(peaks_bed)
#report chromosome position of overlapping feature, along with the promoter which overlaps it (only reports the overlapping nucleotides, not the whole promoter length. Can use u=True to get whole promoter length)
#f, the minimum overlap as fraction of A. F, nucleotide fraction of B (genes) that need to be overlapping with A (promoters)
#wa, Write the original entry in A for each overlap.
#wo, Write the original A and B entries plus the number of base pairs of overlap between the two features. Only A features with overlap are reported.
#u, write original A entry only once even if more than one overlap
intersect = proms.intersect(peaks, wo=True) #could add u=True which indicates we want to see the promoters that overlap features in the genome
#Write to output_file
with open(output_file, 'w') as output:
#Each line in the file contains bed entry a and bed entry b that it overlaps plus the number of bp in the overlap so 19 columns
output.write(str(intersect))
#read in intersect bed file
overlapping_proms = pd.read_table(output_file, sep='\t', header=None)
cols = ['chrA', 'startA', 'stopA', 'promoter_AGI','dot1','strand','source','type','dot2','attributes','chrB', 'startB','stopB','bp_overlap']
overlapping_proms.columns = cols
#add empty openchrom_distance_from_ATG column
overlapping_proms['openchrom_distance_from_ATG'] = int()
for i, v in overlapping_proms.iterrows():
#if positive strand feature A
if overlapping_proms.loc[i,'strand'] == '+':
#if end of open chromatin is downstream or equal to ATG, distance is 0
if overlapping_proms.loc[i,'stopA'] <= overlapping_proms.loc[i, 'stopB']:
overlapping_proms.loc[i,'openchrom_distance_from_ATG'] = 0
#else if upstream and chromatin stop is after promoter start, add distance from chromatin stop to ATG
elif overlapping_proms.loc[i,'startA'] <= overlapping_proms.loc[i, 'stopB']:
overlapping_proms.loc[i,'openchrom_distance_from_ATG'] = overlapping_proms.loc[i,'stopA'] - overlapping_proms.loc[i, 'stopB']
elif overlapping_proms.loc[i,'strand'] == '-':
#if end of open chromatin is downstream or equal to ATG, distance is 0
if overlapping_proms.loc[i,'startA'] >= overlapping_proms.loc[i, 'startB']:
overlapping_proms.loc[i,'openchrom_distance_from_ATG'] = 0
#else if upstream and chromatin stop is after promoter start, add distance from chromatin stop to ATG
elif overlapping_proms.loc[i,'stopA'] >= overlapping_proms.loc[i, 'startB']:
                overlapping_proms.loc[i,'openchrom_distance_from_ATG'] = overlapping_proms.loc[i, 'startB'] - overlapping_proms.loc[i,'startA']
return overlapping_proms_____no_output_____#show length of open chromatin peaks
rootshootintersect = distr_distance_ATG(rootshootintersect_peaks_bed, promoters_bed, promoter_openchrom_intersect)
rootshootintersect['length'] = (rootshootintersect.stopB - rootshootintersect.startB).abs()
rootshootintersect.sort_values('length',ascending=True)
_____no_output_____rootshootintersect = distr_distance_ATG(rootshootintersect_peaks_bed,promoters_bed,promoter_openchrom_intersect)_____no_output_____rootshootintersect
rootshootintersect.sort_values('openchrom_distance_from_ATG',ascending=True)_____no_output_____#plot of distances of chomatin to ATG
dist_plot = rootshootintersect['openchrom_distance_from_ATG']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()_____no_output_____#now split constitutive and variable
merged_distances = pd.merge(merged, rootshootintersect, on='promoter_AGI')_____no_output_____merged_distances.gene_type_____no_output_____#VARIABLE
#plot of distances of chromatin to ATG
dist_plot = merged_distances[merged_distances.gene_type=='highVar']['openchrom_distance_from_ATG']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()_____no_output_____merged_distances[merged_distances.gene_type=='housekeeping']['openchrom_distance_from_ATG']_____no_output_____#CONSTITUTIVE
#plot of distances of chromatin to ATG
dist_plot = merged_distances[merged_distances.gene_type=='housekeeping']['openchrom_distance_from_ATG']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()/home/witham/opt/anaconda3/envs/PromoterArchitecturePipeline/lib/python3.7/site-packages/seaborn/distributions.py:369: UserWarning: Default bandwidth for data is 0; skipping density estimation.
warnings.warn(msg, UserWarning)
</code>
| {
"repository": "Switham1/PromoterArchitecture",
"path": "src/plotting/OpenChromatin_plotsold.ipynb",
"matched_keywords": [
"ATAC-seq"
],
"stars": null,
"size": 404490,
"hexsha": "d063a4b442a49c71d22336e6b555d4a0dd1f82bf",
"max_line_length": 43436,
"avg_line_length": 154.4444444444,
"alphanum_fraction": 0.8553684887
} |
# Notebook from CCADynamicsGroup/SummerSchoolWorkshops
Path: 4-Science-case-studies/1-Computing-orbits-for-Gaia-stars.ipynb
<code>
%run ../setup/nb_setup
%matplotlib inline_____no_output_____
</code>
# Compute a Galactic orbit for a star using Gaia data
Author(s): Adrian Price-Whelan
## Learning goals
In this tutorial, we will retrieve the sky coordinates, astrometry, and radial velocity for a star — [Kepler-444](https://en.wikipedia.org/wiki/Kepler-444) — and compute its orbit in the default Milky Way mass model implemented in Gala. We will compare the orbit of Kepler-444 to the orbit of the Sun and a random sample of nearby stars.
### Notebook Setup and Package Imports_____no_output_____
<code>
import astropy.coordinates as coord
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pyia import GaiaData
# Gala
import gala.dynamics as gd
import gala.potential as gp_____no_output_____
</code>
## Define a Galactocentric Coordinate Frame
We will start by defining a Galactocentric coordinate system using `astropy.coordinates`. We will adopt the latest parameter set assumptions for the solar Galactocentric position and velocity as implemented in Astropy, but note that these parameters are customizable by passing parameters into the `Galactocentric` class below (e.g., you could change the sun-galactic center distance by setting `galcen_distance=...`)._____no_output_____
<code>
with coord.galactocentric_frame_defaults.set("v4.0"):
galcen_frame = coord.Galactocentric()
galcen_frame_____no_output_____
</code>
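As mentioned above, the frame parameters can be overridden when constructing the class. A minimal sketch of this (the 8.3 kpc value below is purely illustrative, not a recommended default):

```python
import astropy.coordinates as coord
import astropy.units as u

# Hypothetical customization: override the Sun-Galactic-center distance.
custom_frame = coord.Galactocentric(galcen_distance=8.3 * u.kpc)
custom_frame
```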
## Define the Solar Position and Velocity_____no_output_____In this coordinate system, the sun is along the $x$-axis (at a negative $x$ value), and the Galactic rotation at this position is in the $+y$ direction. The 3D position of the sun is therefore given by:_____no_output_____
<code>
sun_xyz = u.Quantity(
[-galcen_frame.galcen_distance, 0 * u.kpc, galcen_frame.z_sun] # x,y,z
)_____no_output_____
</code>
We can combine this with the solar velocity vector (defined in the `astropy.coordinates.Galactocentric` frame) to define the sun's phase-space position, which we will use as initial conditions shortly to compute the orbit of the Sun:_____no_output_____
<code>
sun_vxyz = galcen_frame.galcen_v_sun
sun_vxyz_____no_output_____sun_w0 = gd.PhaseSpacePosition(pos=sun_xyz, vel=sun_vxyz)_____no_output_____
</code>
To compute the sun's orbit, we need to specify a mass model for the Galaxy. Here, we will use the default Milky Way mass model implemented in Gala, which is defined in detail in the Gala documentation: [Defining a Milky Way model](define-milky-way-model.html). Here, we will initialize the potential model with default parameters:_____no_output_____
<code>
mw_potential = gp.MilkyWayPotential()
mw_potential_____no_output_____
</code>
This potential is composed of four mass components meant to represent simple models of the different structural components of the Milky Way:_____no_output_____
<code>
for k, pot in mw_potential.items():
print(f"{k}: {pot!r}")_____no_output_____
</code>
With a potential model for the Galaxy and initial conditions for the sun, we can now compute the Sun's orbit using the default integrator (Leapfrog integration): We will compute the orbit for 4 Gyr, which is about 16 orbital periods._____no_output_____
<code>
sun_orbit = mw_potential.integrate_orbit(sun_w0, dt=0.5 * u.Myr, t1=0, t2=4 * u.Gyr)_____no_output_____
</code>
Let's plot the Sun's orbit in 3D to get a feel for the geometry of the orbit:_____no_output_____
<code>
fig, ax = sun_orbit.plot_3d()
lim = (-12, 12)
ax.set(xlim=lim, ylim=lim, zlim=lim)_____no_output_____
</code>
## Retrieve Gaia Data for Kepler-444_____no_output_____As a comparison, we will compute the orbit of the exoplanet-hosting star "Kepler-444." To get Gaia data for this star, we first have to retrieve its sky coordinates so that we can do a positional cross-match query on the Gaia catalog. We can retrieve the sky position of Kepler-444 from Simbad using the `SkyCoord.from_name()` classmethod, which queries Simbad under the hood to resolve the name:_____no_output_____
<code>
star_sky_c = coord.SkyCoord.from_name("Kepler-444")
star_sky_c_____no_output_____
</code>
We happen to know a priori that Kepler-444 has a large proper motion, so the sky position reported by Simbad could be off from the Gaia sky position (epoch=2016) by many arcseconds. To run and retrieve the Gaia data, we will use the [pyia](http://pyia.readthedocs.io/) package: We can pass in an ADQL query, which `pyia` uses to query the Gaia science archive using `astroquery`, and returns the data as a `pyia.GaiaData` object. To run the query, we will do a sky position cross-match with a large positional tolerance by setting the cross-match radius to 15 arcseconds, but we will take the brightest cross-matched source within this region as our match:_____no_output_____
<code>
star_gaia = GaiaData.from_query(
f"""
SELECT TOP 1 * FROM gaiaedr3.gaia_source
WHERE 1=CONTAINS(
POINT('ICRS', {star_sky_c.ra.degree}, {star_sky_c.dec.degree}),
CIRCLE('ICRS', ra, dec, {(15*u.arcsec).to_value(u.degree)})
)
ORDER BY phot_g_mean_mag
"""
)
star_gaia_____no_output_____
</code>
We will assume (and hope!) that this source is Kepler-444, but we know that it is fairly bright compared to a typical Gaia source, so we should be safe.
We can now use the returned `pyia.GaiaData` object to retrieve an astropy `SkyCoord` object with all of the position and velocity measurements taken from the Gaia archive record for this source:_____no_output_____
<code>
star_gaia_c = star_gaia.get_skycoord()
star_gaia_c_____no_output_____
</code>
To compute this star's Galactic orbit, we need to convert its observed, Heliocentric (actually solar system barycentric) data into the Galactocentric coordinate frame we defined above. To do this, we will use the `astropy.coordinates` transformation framework using the `.transform_to()` method, and we will pass in the `Galactocentric` coordinate frame we defined above:_____no_output_____
<code>
star_galcen = star_gaia_c.transform_to(galcen_frame)
star_galcen_____no_output_____
</code>
Let's print out the Cartesian position and velocity for Kepler-444:_____no_output_____
<code>
print(star_galcen.cartesian)
print(star_galcen.velocity)_____no_output_____
</code>
Now with Galactocentric position and velocity components for Kepler-444, we can create Gala initial conditions and compute its orbit on the time grid used to compute the Sun's orbit above:_____no_output_____
<code>
star_w0 = gd.PhaseSpacePosition(star_galcen.data)
star_orbit = mw_potential.integrate_orbit(star_w0, t=sun_orbit.t)_____no_output_____
</code>
We can now compare the orbit of Kepler-444 to the solar orbit we computed above. We will plot the two orbits in two projections: First in the $x$-$y$ plane (Cartesian positions), then in the *meridional plane*, showing the cylindrical $R$ and $z$ position dependence of the orbits:_____no_output_____
<code>
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)
sun_orbit.plot(["x", "y"], axes=axes[0])
star_orbit.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-10, 10)
axes[0].set_ylim(-10, 10)
sun_orbit.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
label="Sun",
)
star_orbit.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
label="Kepler-444",
)
axes[1].set_xlim(0, 10)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")
axes[1].legend(loc="best", fontsize=15)_____no_output_____
</code>
### Exercise: How does Kepler-444's orbit differ from the Sun's?
- What are the guiding center radii of the two orbits?
- What is the maximum $z$ height reached by each orbit?
- What are their eccentricities?
- Can you guess which star is older based on their kinematics?
- Which star do you think has a higher metallicity?_____no_output_____### Exercise: Compute orbits for Monte Carlo sampled initial conditions using the Gaia error distribution
*Hint: Use the `pyia.GaiaData.get_error_samples()` method to generate samples from the Gaia error distribution*
- Generate 128 samples from the error distribution
- Construct a `SkyCoord` object with all of these Monte Carlo samples
- Transform the error sample coordinates to the Galactocentric frame and define Gala initial conditions (a `PhaseSpacePosition` object)
- Compute orbits for all error samples using the same time grid we used above
- Compute the eccentricity and $L_z$ for all samples: what is the standard deviation of the eccentricity and $L_z$ values?
- With what fractional precision can we measure this star's eccentricity and $L_z$? (i.e. what is $\textrm{std}(e) / \textrm{mean}(e)$ and the same for $L_z$)_____no_output_____### Exercise: Comparing these orbits to the orbits of other Gaia stars
Retrieve Gaia data for a set of 100 random Gaia stars within 200 pc of the sun with measured radial velocities and well-measured parallaxes using the query:
SELECT TOP 100 * FROM gaiaedr3.gaia_source
WHERE dr2_radial_velocity IS NOT NULL AND
parallax_over_error > 10 AND
ruwe < 1.2 AND
parallax > 5
ORDER BY random_index_____no_output_____
<code>
# random_stars_g = .._____no_output_____
</code>
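One possible way to fill in the placeholder above, reusing the `GaiaData.from_query()` pattern shown earlier for Kepler-444 (this is a sketch, not the only valid solution):

```python
from pyia import GaiaData

# Sketch: run the ADQL query given above and keep the result.
random_stars_g = GaiaData.from_query(
    """
    SELECT TOP 100 * FROM gaiaedr3.gaia_source
    WHERE dr2_radial_velocity IS NOT NULL AND
        parallax_over_error > 10 AND
        ruwe < 1.2 AND
        parallax > 5
    ORDER BY random_index
    """
)
```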
Compute orbits for these stars for the same time grid used above to compute the sun's orbit:_____no_output_____
<code>
# random_stars_c = ..._____no_output_____# random_stars_galcen = ...
# random_stars_w0 = ..._____no_output_____# random_stars_orbits = ..._____no_output_____
</code>
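A sketch of how the remaining placeholders might be completed, following the same pattern used for Kepler-444 above (the variable names are taken from the placeholders, and `galcen_frame`, `mw_potential`, `sun_orbit`, and `random_stars_g` are assumed to be defined in earlier cells):

```python
# Sketch only: mirrors the Kepler-444 workflow from earlier cells.
random_stars_c = random_stars_g.get_skycoord()
random_stars_galcen = random_stars_c.transform_to(galcen_frame)
random_stars_w0 = gd.PhaseSpacePosition(random_stars_galcen.data)
random_stars_orbits = mw_potential.integrate_orbit(random_stars_w0, t=sun_orbit.t)
```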
Plot the initial (present-day) positions of all of these stars in Galactocentric Cartesian coordinates:_____no_output_____Now plot the orbits of these stars in the x-y and R-z planes:_____no_output_____
<code>
fig, axes = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)
random_stars_orbits.plot(["x", "y"], axes=axes[0])
axes[0].set_xlim(-15, 15)
axes[0].set_ylim(-15, 15)
random_stars_orbits.cylindrical.plot(
["rho", "z"],
axes=axes[1],
auto_aspect=False,
labels=["$R$ [kpc]", "$z$ [kpc]"],
)
axes[1].set_xlim(0, 15)
axes[1].set_ylim(-5, 5)
axes[1].set_aspect("auto")_____no_output_____
</code>
Compute maximum $z$ heights ($z_\textrm{max}$) and eccentricities for all of these orbits. Compare the Sun, Kepler-444, and this random sampling of nearby stars. Where do the Sun and Kepler-444 sit relative to the random sample of nearby stars in terms of $z_\textrm{max}$ and eccentricity? (Hint: plot $z_\textrm{max}$ vs. eccentricity and highlight the Sun and Kepler-444!) Are either of them outliers in any way?_____no_output_____
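One minimal way to compute these summary quantities, using the same `zmax()` and `eccentricity()` orbit methods that appear in the plotting cell below (assuming `random_stars_orbits` holds the orbits computed above):

```python
# Sketch: per-orbit maximum z height and eccentricity for the random sample.
rand_zmax = random_stars_orbits.zmax()
rand_ecc = random_stars_orbits.eccentricity()
```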
<code>
# rand_zmax = ..._____no_output_____# rand_ecc = ..._____no_output_____fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(
rand_ecc, rand_zmax, color="k", alpha=0.4, s=14, lw=0, label="random nearby stars"
)
ax.scatter(sun_orbit.eccentricity(), sun_orbit.zmax(), color="tab:orange", label="Sun")
ax.scatter(
star_orbit.eccentricity(), star_orbit.zmax(), color="tab:cyan", label="Kepler-444"
)
ax.legend(loc="best", fontsize=14)
ax.set_xlabel("eccentricity, $e$")
ax.set_ylabel(r"max. $z$ height, $z_{\rm max}$ [kpc]")_____no_output_____
</code>
| {
"repository": "CCADynamicsGroup/SummerSchoolWorkshops",
"path": "4-Science-case-studies/1-Computing-orbits-for-Gaia-stars.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 5,
"size": 18939,
"hexsha": "d06578db06221a89b7dc540bc849161d7d99a2a2",
"max_line_length": 662,
"avg_line_length": 29.0475460123,
"alphanum_fraction": 0.5846665611
} |
# Notebook from quantumjot/segment-classify-track
Path: stardist_segmentation.ipynb
# Segmentation
This notebook shows how to use Stardist (Object Detection with Star-convex Shapes) as a part of a segmentation-classification-tracking analysis pipeline.
The sections of this notebook are as follows:
1. Load images
2. Load model of choice and segment an initial image to test Stardist parameters
3. Batch segment a sequence of images
The data used in this notebook is timelapse microscopy data with h2b-gfp/rfp markers that show the spatial extent of the nucleus and its mitotic state.
This notebook uses the dask octopuslite image loader from the CellX/Lowe lab project._____no_output_____
<code>
import matplotlib.pyplot as plt
import numpy as np
import os
from octopuslite import DaskOctopusLiteLoader
from stardist.models import StarDist2D
from stardist.plot import render_label
from csbdeep.utils import normalize
from tqdm.auto import tqdm
from skimage.io import imsave, imread
import json
from scipy import ndimage as nd_____no_output_____%matplotlib inline
plt.rcParams['figure.figsize'] = [18,8]_____no_output_____
</code>
## 1. Load images_____no_output_____
<code>
# define experiment ID and select a position
expt = 'ND0011'
pos = 'Pos6'
# point to where the data is
root_dir = '/home/nathan/data'
image_path = f'{root_dir}/{expt}/{pos}/{pos}_images'
# lazily load images
images = DaskOctopusLiteLoader(image_path,
remove_background = True)
images.channelsUsing cropping: (1200, 1600)
</code>
Set segmentation channel and load test image_____no_output_____
<code>
# segmentation channel
segmentation_channel = images.channels[3]
# set test image index
frame = 1000
# load test image
irfp = images[segmentation_channel.name][frame].compute()
# create 1-channel XYC image
img = np.expand_dims(irfp, axis = -1)
img.shape_____no_output_____
</code>
## 2. Load model and test segment single image _____no_output_____
<code>
model = StarDist2D.from_pretrained('2D_versatile_fluo')
modelFound model '2D_versatile_fluo' for 'StarDist2D'.
Loading network weights from 'weights_best.h5'.
Loading thresholds from 'thresholds.json'.
Using default values: prob_thresh=0.479071, nms_thresh=0.3.
</code>
### 2.1 Test run and display initial results_____no_output_____
<code>
# initialise test segmentation
labels, details = model.predict_instances(normalize(img))
# plot input image and prediction
plt.clf()
plt.subplot(1,2,1)
plt.imshow(normalize(img[:,:,0]), cmap="PiYG")
plt.axis("off")
plt.title("input image")
plt.subplot(1,2,2)
plt.imshow(render_label(labels, img = img))
plt.axis("off")
plt.title("prediction + input overlay")
plt.show()Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
</code>
## 3. Batch segment a whole stack of images_____no_output_____When you segment a whole data set you do not want to apply any image transformation. This is so that when you load images and masks later on you can apply the same transformation. You can apply a crop but note that you need to be consistent with your use of the crop from this point on, otherwise you'll get a shift. _____no_output_____
<code>
for expt in tqdm(['ND0009', 'ND0010', 'ND0011']):
for pos in tqdm(['Pos0', 'Pos1', 'Pos2', 'Pos3', 'Pos4']):
print('Starting experiment position:', expt, pos)
# load images
image_path = f'{root_dir}/{expt}/{pos}/{pos}_images'
images = DaskOctopusLiteLoader(image_path,
remove_background = True)
# iterate over images filenames
for fn in tqdm(images.files(segmentation_channel.name)):
# compile 1-channel into XYC array
img = np.expand_dims(imread(fn), axis = -1)
# predict labels
labels, details = model.predict_instances(normalize(img))
# set filename as mask format (channel099)
fn = fn.replace(f'channel00{segmentation_channel.value}', 'channel099')
# save out labelled image
imsave(fn, labels.astype(np.uint16), check_contrast=False)_____no_output_____
</code>
| {
"repository": "quantumjot/segment-classify-track",
"path": "stardist_segmentation.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 3,
"size": 431414,
"hexsha": "d0666ddf78670d466dc35a4201d0bcac9ffda04f",
"max_line_length": 422508,
"avg_line_length": 1438.0466666667,
"alphanum_fraction": 0.9552031228
} |
# Notebook from hongyehu/Sim-Clifford
Path: circuit.ipynb
<code>
import numpy
from context import vaeqst_____no_output_____import numpy
from context import base_____no_output_____base.RandomCliffordGate(0,1)_____no_output_____
</code>
# Random Clifford Circuit_____no_output_____## RandomCliffordGate_____no_output_____`RandomClifordGate(*qubits)` represents a random Clifford gate acting on a set of qubits. There is no further parameter to specify, as it is not any particular gate, but a placeholder for a generic random Clifford gate.
**Parameters**
- `*qubits`: indices of the set of qubits on which the gate acts on.
Example:_____no_output_____
<code>
gate = vaeqst.RandomCliffordGate(0,1)
gate_____no_output_____
</code>
`RandomCliffordGate.random_clifford_map()` evokes a random sampling of the Clifford unitary, return in the form of operator mapping table $M$ and the corresponding sign indicator $h$. Such that under the mapping, any Pauli operator $\sigma_g$ specified by the binary representation $g$ (and localized within the gate support) gets mapped to
$$\sigma_g \to \prod_{i=1}^{2n} (-)^{h_i}\sigma_{M_i}^{g_i}.$$
The binary representation is in the $g=(x_0,z_0,x_1,z_1,\cdots)$ basis._____no_output_____
<code>
gate.random_clifford_map()_____no_output_____
</code>
## RandomCliffordLayer_____no_output_____`RandomCliffordLayer(*gates)` represents a layer of random Clifford gates.
**Parameters:**
* `*gates`: quantum gates contained in the layer.
The gates in the same layer should not overlap with each other (all gates need to commute). To ensure this, we do not manually add gates to the layer, but using the higher level function `.gate()` provided by `RandomCliffordCircuit` (see discussion later).
Example:_____no_output_____
<code>
layer = vaeqst.RandomCliffordLayer(vaeqst.RandomCliffordGate(0,1),vaeqst.RandomCliffordGate(3,5))
layer_____no_output_____
</code>
It hosts a list of gates:_____no_output_____
<code>
layer.gates_____no_output_____
</code>
Given the total number of qubits $N$, the layer can sample the Clifford unitary (as product of each gate) $U=\prod_{a}U_a$, and represent it as a single operator mapping (because gates do not overlap, so they maps operators in different supports independently)._____no_output_____
<code>
layer.random_clifford_map(6)_____no_output_____
</code>
## RandomCliffordCircuit_____no_output_____`RandomCliffordCircuit()` represents a quantum circuit of random Clifford gates._____no_output_____### Methods_____no_output_____#### Construct the Circuit
Example: create a random Clifford circuit._____no_output_____
<code>
circ = vaeqst.RandomCliffordCircuit()_____no_output_____
</code>
Use `.gate(*qubits)` to add random Clifford gates to the circuit._____no_output_____
<code>
circ.gate(0,1)
circ.gate(2,4)
circ.gate(1,4)
circ.gate(0,2)
circ.gate(3,5)
circ.gate(3,4)
circ_____no_output_____
</code>
Gates will automatically be arranged into layers. Each new gate added to the circuit will commute through the layers if it is not blocked by the existing gates._____no_output_____If the number of qubits `.N` is not explicitly defined, it will be dynamically inferred from the circuit width, as the largest qubit index of all gates + 1._____no_output_____
<code>
circ.N_____no_output_____
</code>
#### Navigate in the Circuit_____no_output_____`.layers_forward()` and `.layers_backward()` provides two generators to iterate over layers in forward and backward order resepctively._____no_output_____
<code>
list(circ.layers_forward())_____no_output_____list(circ.layers_backward())_____no_output_____
</code>
`.first_layer` and `.last_layer` points to the first and the last layers._____no_output_____
<code>
circ.first_layer_____no_output_____circ.last_layer_____no_output_____
</code>
Use `.next_layer` and `.prev_layer` to move forward and backward._____no_output_____
<code>
circ.first_layer.next_layer, circ.last_layer.prev_layer_____no_output_____
</code>
Locate a gate in the circuit._____no_output_____
<code>
circ.first_layer.next_layer.next_layer.gates[0]_____no_output_____
</code>
#### Apply Circuit to State_____no_output_____`.forward(state)` and `.backward(state)` applies the circuit to transform the state forward / backward.
* Each call will sample a new random realization of the random Clifford circuit.
* The transformation will create a new state, the original state remains untouched._____no_output_____
<code>
rho = vaeqst.StabilizerState(6, r=0)
rho_____no_output_____circ.forward(rho)_____no_output_____circ.backward(rho)_____no_output_____
</code>
#### POVM_____no_output_____`.povm(nsample)` provides a generator to sample $n_\text{sample}$ from the prior POVM based on the circuit by back evolution._____no_output_____
<code>
list(circ.povm(3))_____no_output_____
</code>
## BrickWallRCC_____no_output_____`BrickWallRCC(N, depth)` is a subclass of `RandomCliffordCircuit`. It represents the circuit with 2-qubit gates arranged following a brick wall pattern._____no_output_____
<code>
circ = vaeqst.BrickWallRCC(16,2)
circ_____no_output_____
</code>
Create an inital state as a computational basis state._____no_output_____
<code>
rho = vaeqst.StabilizerState(16, r=0)
rho_____no_output_____
</code>
Backward evolve the state to obtain the measurement operator._____no_output_____
<code>
circ.backward(rho)_____no_output_____
</code>
## OnSiteRCC_____no_output_____`OnSiteRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents the circuit of a single layer of on-site Clifford gates. It can be used to generate random Pauli states._____no_output_____
<code>
circ = vaeqst.OnSiteRCC(16)
circ_____no_output_____rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho)_____no_output_____
</code>
## GlobalRCC_____no_output_____`GlobalRCC(N)` is a subclass of `RandomCliffordCircuit`. It represents the circuit consists of a global Clifford gate. It can be used to generate Clifford states._____no_output_____
<code>
circ = vaeqst.GlobalRCC(16)
circ_____no_output_____rho = vaeqst.StabilizerState(16, r=0)
circ.backward(rho)_____no_output_____
</code>
| {
"repository": "hongyehu/Sim-Clifford",
"path": "circuit.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 3,
"size": 20559,
"hexsha": "d067373c87708d6fe6fc319e8bfe446d548fbb2d",
"max_line_length": 391,
"avg_line_length": 21.9179104478,
"alphanum_fraction": 0.4946738655
} |
# Notebook from andelpe/curso-intro-python
Path: tema_9.ipynb
<font size=6>
<b>Python Programming Course</b>
</font>
<font size=4>
Internal training course, CIEMAT. <br/>
Madrid, October 2021
Antonio Delgado Peris
</font>
https://github.com/andelpe/curso-intro-python/
<br/>_____no_output_____# Topic 9 - The Python ecosystem: the standard library and other popular packages_____no_output_____## Objectives
- Get to know some modules of the standard library
  - Interacting with the interpreter itself
  - Interacting with the operating system
  - Managing the file system
  - Process management and concurrency
  - Development, debugging and profiling
  - Numbers and mathematics
  - Network access and functionality
  - Utilities for advanced handling of functions and iterators
- Introduce the ecosystem of scientific Python libraries
  - The NumPy/SciPy stack
  - Graphics
  - Mathematics and statistics
  - Machine learning
  - Natural language processing
  - Biology
  - Physics
  _____no_output_____## The standard library
One of Python's slogans is _batteries included_. It refers to the amount of functionality available in the basic Python installation, without having to resort to external packages.
In this section we briefly review some of the available modules. For much more information: https://docs.python.org/3/library/_____no_output_____### Interacting with the Python interpreter: `sys`
It offers both information about, and the ability to manipulate, several aspects of the Python environment itself.
- `sys.argv`: List of the arguments passed to the running program.
- `sys.version`: String with the current Python version.
- `sys.stdin/out/err`: File objects used by the interpreter for input, output and errors.
- `sys.exit`: Function to end the program.
  _____no_output_____### Interacting with the operating system: `os`
A _portable_ interface to functionality that depends on the operating system.
It contains very varied functionality, sometimes at a very low level.
- `os.environ`: dictionary of environment variables (modifiable)
- `os.getuid`, `os.getgid`, `os.getpid`...: obtain the UID, GID, process ID, etc. (Unix)
- `os.uname`: information about the operating system
- `os.getcwd`, `os.chdir`, `os.mkdir`, `os.remove`, `os.stat`...: operations on the file system
- `os.exec`, `os.fork`, `os.kill`... : process management
For some of these operations it is more convenient to use more specific, or higher-level, modules.
### Operations on the file system
- For manipulating _paths_, deleting and creating directories, etc.: `pathlib` (modern) or `os.path` (classic)
- Expansion of file name _wildcards_ (Unix _globs_): `glob`
- For high-level copy (and other) operations: `shutil`
- For temporary (throw-away) files and directories: `tempfile` (a short sketch of these modules follows below)
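A short sketch of these four modules (the file and directory names are invented for the example):

```python
from pathlib import Path
import glob
import shutil
import tempfile

p = Path('data') / 'example.txt'             # build a path portably
p.parent.mkdir(parents=True, exist_ok=True)  # create the directory if needed
p.write_text('hello')                        # create the file
print(p.exists(), p.suffix, p.stem)          # True '.txt' 'example'

print(glob.glob('data/*.txt'))               # wildcard expansion

shutil.copy(p, p.with_name('copy.txt'))      # high-level file copy

with tempfile.TemporaryDirectory() as tmp:   # throw-away directory
    print('temporary dir:', tmp)
```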
### Process management
- `threading`: high-level interface for managing _threads_.
  - It suffers from Python's _Global Interpreter Lock_: a global _lock_ that ensures that only one thread executes Python code at any given time (except during I/O pauses). This prevents improving performance with multiple CPUs.
- `queue`: implements multi-producer, multi-consumer queues for safely exchanging information between multiple _threads_.
- `multiprocessing`: an interface that mimics `threading`, but uses multiple processes instead of threads (avoiding the GIL problem). It supports Unix and Windows, and offers local and remote concurrency (see the sketch after the `subprocess` examples below).
  - The `multiprocessing.shared_memory` module makes it easy to allocate and manage memory shared between several processes.
- `subprocess`: allows launching and managing subprocesses (external commands) from Python.
  - For Python >= 3.5, the `run` function is recommended, except for complex cases.
_____no_output_____
<code>
from subprocess import run
def showRes(res):
print('\n------- ret code:', res.returncode, '; err:', res.stderr)
if res.stdout:
print('\n'.join(res.stdout.splitlines()[:3]))
print()
print('NO SHELL')
res = run(['ls', '-l'], capture_output=True, text=True)
showRes(res)
print('WITH SHELL')
res = run('ls -l', shell=True, capture_output=True, text=True)
showRes(res)
print('NO OUTPUT')
res = run(['ls', '-l'])
showRes(res)
_____no_output_____print('ERROR NO-CHECK')
res = run(['ls', '-l', 'XXXX'], capture_output=True, text=True)
showRes(res)
print('ERROR CHECK')
try:
res = run(['ls', '-l', 'XXXX'], capture_output=True, check=True)
showRes(res)
except Exception as ex:
print(f'--- Error of type {type(ex)}:\n {ex}\n')
print('NO OUTPUT')
res = run(['ls', '-l', 'XXXX'])
showRes(res)_____no_output_____
</code>
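To complement the `subprocess` examples above, here is a minimal sketch of the `multiprocessing` interface mentioned earlier (the worker function is invented for the example):

```python
from multiprocessing import Pool

def square(x):
    # Trivial worker function, used only for illustration
    return x * x

if __name__ == '__main__':
    with Pool(processes=4) as pool:           # pool of 4 worker processes
        print(pool.map(square, range(10)))    # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```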
### Numbers and mathematics
- `math`: mathematical operations defined by the C standard (`cmath` for complex numbers)
- `random`: pseudo-random number generators for several distributions
- `statistics`: basic statistics (a quick example follows below)
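A quick illustration of these three modules:

```python
import math
import random
import statistics

print(math.sqrt(2), math.pi)                   # 1.4142135623730951 3.141592653589793
random.seed(42)                                # reproducible pseudo-random numbers
print(random.random(), random.randint(1, 6))   # uniform float in [0, 1) and a die roll
data = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(data), statistics.stdev(data))
```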
### Advanced handling of functions and iterators
- `itertools`: utilities for creating iterators efficiently (see also the sketch after the cell below)
- `functools`: higher-order functions that manipulate other functions
- `operator`: functions corresponding to Python's intrinsic operators_____no_output_____
<code>
import operator
operator.add(3, 4)_____no_output_____
</code>
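A small sketch combining the three modules listed above:

```python
import itertools
import functools
import operator

# itertools: build iterators lazily
squares = itertools.islice((n * n for n in itertools.count(1)), 5)
print(list(squares))                                 # [1, 4, 9, 16, 25]
print(list(itertools.chain('ab', 'cd')))             # ['a', 'b', 'c', 'd']

# functools: manipulate other functions
add3 = functools.partial(operator.add, 3)            # fix the first argument
print(add3(4))                                       # 7
print(functools.reduce(operator.mul, range(1, 6)))   # 5! = 120
```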
### Networking
- `socket`: low-level network operations
- `asyncio`: support for asynchronous I/O environments
- There are several libraries for HTTP interaction, but the external library `requests` is recommended (a minimal example follows below).
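A minimal sketch of the recommended `requests` library (an external package; the URL below is just an illustrative public endpoint):

```python
import requests  # external package: pip install requests

r = requests.get('https://api.github.com/repos/python/cpython', timeout=10)
print(r.status_code)        # 200 if the request succeeded
info = r.json()             # parse the JSON body into a dict
print(info['full_name'], info['stargazers_count'])
```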
### Development, debugging and profiling
- `pydoc`: generation of (HTML) documentation from docstrings
- Debugging
  - Many IDEs, and JupyterLab, include debugging facilities in their environments.
  - `pdb`: Python's official _debugger_
    - Run scripts as `python3 -m pdb myscript.py`
    - Insert a _break point_ with `import pdb; pdb.set_trace()`
- `cProfile`: _profiler_
- `timeit`: measuring execution times of code/scripts
```python
$ python3 -m timeit '"-".join(str(n) for n in range(100))'
10000 loops, best of 5: 30.2 usec per loop
>>> import timeit
>>> timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
0.3018611848820001
%timeit "-".join(str(n) for n in range(100)) # Jupyter line mode
%%timeit ... # Jupyter cell mode
```
- `unittest`: creation of tests for code validation (_test-driven programming_)
  - The external library `pytest` simplifies some tasks, and is very popular_____no_output_____### Other modules
- `argparse`: processing of command-line arguments and options
  - My recommendation is to build yourself a template _skeleton_ as a basis for future scripts.
- `re`: regular expression processing
- `time`, `datetime`: date and time handling (measuring and representing time, time deltas, etc.)_____no_output_____## The NumPy/SciPy stack
This set of open-source libraries forms the numerical, mathematical, and visualization foundation on which the mathematical/scientific universe of Python is built.
- **NumPy**: A general-purpose, high-performance package for processing _array_ objects (vectors and matrices).
  - It serves as the basis for most of the other mathematical packages.
  - It allows efficient matrix operations (without explicit loops).
  - It uses compiled libraries (C and Fortran), with a Python API, to achieve better performance.
- **SciPy**: Built on NumPy, and the basis of many of the following, it offers many utilities for numerical integration, interpolation, optimization, linear algebra, signal processing and statistics.
  - Do not confuse the _SciPy library_ with the SciPy project or stack, which refers to all the libraries in this section.
- **Matplotlib**: Python's reference visualization library (2D graphics).
  - It also serves as the basis for other libraries, such as _Seaborn_ or _Pandas_.
- **Pandas**: Agile and efficient data manipulation.
  - It uses a _DataFrame_ object, which represents the information in labelled, indexed columns.
  - It offers functionality to search, filter, sort, transform or extract information.
- **SymPy**: Symbolic mathematics library (in the style of _Mathematica_)
## Graphics
- **Seaborn**: Built on Matplotlib, it offers a high-level interface for easily building advanced plots for statistical models.
- **Bokeh**: Library for interactive visualization of plots on the web, or in Jupyter notebooks.
- **Plotly**: Interactive plots for the web. It is part of a larger project, **_Dash_**, an environment for building web applications for data analysis in Python (without writing _javascript_).
- **Scikit-image**: Algorithms for image _processing_ (a different purpose than the previous ones).
- Others: **ggplot2/plotnine** (based on R's _ggplot2_ library), **Altair** (a declarative library, based on _Vega-Lite_), `Geoplotlib` and `Folium` (for building maps).
## Mathematics and statistics
- **Statsmodel**: Estimation of statistical models, statistical tests, and exploration of statistical data.
- **PyStan**: Bayesian inference.
- **NetworkX**: Creation, manipulation and analysis of networks and graphs.
## Machine Learning
- **Scikit-learn**: General-purpose machine learning library, built on NumPy. It offers many ML algorithms, such as _support vector machines_ or _random forests_, as well as many utilities for data pre- and post-processing.
- **TensorFlow** and **PyTorch**: two widely used libraries for programming neural networks, including GPU optimization.
- **Keras**: A simplified (high-level) interface for using TensorFlow._____no_output_____## Others
### Natural Language Processing
The following libraries offer syntactic and semantic analysis of free text:
- **GenSim**
- **SpaCy**
- **NLTK**
### Biology
- **Scikit-bio**: Data structures, algorithms and educational resources for bioinformatics.
- **BioPython**: Tools for biological computation.
- **PyEnsembl**: Python interface to Ensembl, a genomics database.
### Physics
- Astronomy: **Astropy**, and **PyFITS**
- High-energy physics:
  - **PyROOT**: Python interface to ROOT, an environment with generalist ambitions that offers many utilities for data analysis and storage, statistics and visualization.
  - **Scikit-HEP**: a collection of libraries that aim to work with ROOT data using exclusively Python code (integrated with NumPy), without using PyROOT. Some of them are **uproot**, **awkward array** and **coffea**.
### HDF5 data
- **h5py**: Interface to HDF5 data that tries to offer all the functionality of the HDF5 C interface in Python, integrated with NumPy objects and types, so it can be used in Python code in a simple way.
- **pytables**: Another interface to HDF5 data, with a higher-level interface than `h5py`, which offers additional database-style functionality (complex queries, advanced indexing, optimization of computations with HDF5 data, etc.)_____no_output_____
| {
"repository": "andelpe/curso-intro-python",
"path": "tema_9.ipynb",
"matched_keywords": [
"BioPython",
"scikit-bio"
],
"stars": 1,
"size": 15337,
"hexsha": "d0676becb285732f84b2b4fe2ea4dbee420faaca",
"max_line_length": 259,
"avg_line_length": 38.1517412935,
"alphanum_fraction": 0.6116580818
} |
# Notebook from jamesbut/evojax
Path: examples/notebooks/TutorialTaskImplementation.ipynb
<a href="https://colab.research.google.com/github/google/evojax/blob/main/examples/notebooks/TutorialTaskImplementation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____# Tutorial: Creating Tasks_____no_output_____## Pre-requisite
Before we start, we need to install EvoJAX and import some libraries.
**Note** In our [paper](https://arxiv.org/abs/2202.05008), we ran the experiments on NVIDIA V100 GPU(s). Your results can be different from ours._____no_output_____
<code>
from IPython.display import clear_output, Image
!pip install evojax
!pip install torchvision # We use torchvision.datasets.MNIST in this tutorial.
clear_output()_____no_output_____import os
import numpy as np
import jax
import jax.numpy as jnp
from evojax.task.cartpole import CartPoleSwingUp
from evojax.policy.mlp import MLPPolicy
from evojax.algo import PGPE
from evojax import Trainer
from evojax.util import create_logger_____no_output_____# Let's create a directory to save logs and models.
log_dir = './log'
logger = create_logger(name='EvoJAX', log_dir=log_dir)
logger.info('Welcome to the tutorial on Task creation!')
logger.info('Jax backend: {}'.format(jax.local_devices()))
!nvidia-smi --query-gpu=name --format=csv,noheaderEvoJAX: 2022-02-12 05:53:28,121 [INFO] Welcome to the tutorial on Task creation!
absl: 2022-02-12 05:53:28,133 [INFO] Starting the local TPU driver.
absl: 2022-02-12 05:53:28,135 [INFO] Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
absl: 2022-02-12 05:53:28,519 [INFO] Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
EvoJAX: 2022-02-12 05:53:28,520 [INFO] Jax backend: [GpuDevice(id=0, process_index=0)]
</code>
## Introduction_____no_output_____EvoJAX has three major components: the *task*, the *policy network* and the *neuroevolution algorithm*. Once these components are implemented and instantiated, we can use a trainer to start the training process. The following code snippet provides an example of how we use EvoJAX._____no_output_____
<code>
seed = 42 # Wish me luck!
# We use the classic cart-pole swing up as our tasks, see
# https://github.com/google/evojax/tree/main/evojax/task for more example tasks.
# The test flag provides the opportunity for a user to
# 1. Return different signals as rewards. For example, in our MNIST example,
# we use negative cross-entropy loss as the reward in training tasks, and the
# classification accuracy as the reward in test tasks.
# 2. Perform reward shaping. It is common for RL practitioners to modify the
# rewards during training so that the agent learns more efficiently. But this
# modification should not be allowed in tests for fair evaluations.
hard = False
train_task = CartPoleSwingUp(harder=hard, test=False)
test_task = CartPoleSwingUp(harder=hard, test=True)
# We use a feedforward network as our policy.
# By default, MLPPolicy uses "tanh" as its activation function for the output.
policy = MLPPolicy(
input_dim=train_task.obs_shape[0],
hidden_dims=[64, 64],
output_dim=train_task.act_shape[0],
logger=logger,
)
# We use PGPE as our evolution algorithm.
# If you want to know more about the algorithm, please take a look at the paper:
# https://people.idsia.ch/~juergen/nn2010.pdf
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.05,
seed=seed,
)
# Now that we have all the three components instantiated, we can create a
# trainer and start the training process.
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=600,
log_interval=100,
test_interval=200,
n_repeats=5,
n_evaluations=128,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()EvoJAX: 2022-02-12 05:53:31,223 [INFO] MLPPolicy.num_params = 4609
EvoJAX: 2022-02-12 05:53:31,381 [INFO] Start to train for 600 iterations.
EvoJAX: 2022-02-12 05:53:42,936 [INFO] Iter=100, size=64, max=717.4396, avg=632.6160, min=475.3617, std=51.3240
EvoJAX: 2022-02-12 05:53:51,773 [INFO] Iter=200, size=64, max=838.2386, avg=751.0416, min=592.3156, std=46.3648
EvoJAX: 2022-02-12 05:53:53,555 [INFO] [TEST] Iter=200, #tests=128, max=880.2914 avg=834.9127, min=763.1976, std=40.8967
EvoJAX: 2022-02-12 05:54:02,542 [INFO] Iter=300, size=64, max=917.9876, avg=857.5809, min=48.2173, std=133.0970
EvoJAX: 2022-02-12 05:54:11,668 [INFO] Iter=400, size=64, max=917.4292, avg=900.6838, min=544.6534, std=53.2497
EvoJAX: 2022-02-12 05:54:11,770 [INFO] [TEST] Iter=400, #tests=128, max=927.4318 avg=918.8890, min=909.4037, std=3.2266
EvoJAX: 2022-02-12 05:54:20,773 [INFO] Iter=500, size=64, max=922.2775, avg=868.8109, min=227.3976, std=147.4509
EvoJAX: 2022-02-12 05:54:29,884 [INFO] [TEST] Iter=600, #tests=128, max=949.3198, avg=928.1906, min=917.7035, std=5.7788
EvoJAX: 2022-02-12 05:54:29,889 [INFO] Training done, best_score=928.1906
# Let's visualize the learned policy.
def render(task, algo, policy):
"""Render the learned policy."""
task_reset_fn = jax.jit(test_task.reset)
policy_reset_fn = jax.jit(policy.reset)
step_fn = jax.jit(test_task.step)
act_fn = jax.jit(policy.get_actions)
params = algo.best_params[None, :]
task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
policy_s = policy_reset_fn(task_s)
images = [CartPoleSwingUp.render(task_s, 0)]
done = False
step = 0
reward = 0
while not done:
act, policy_s = act_fn(task_s, params, policy_s)
task_s, r, d = step_fn(task_s, act)
step += 1
reward = reward + r
done = bool(d[0])
if step % 3 == 0:
images.append(CartPoleSwingUp.render(task_s, 0))
print('reward={}'.format(reward))
return images
imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'cartpole.gif')
imgs[0].save(
gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file,'rb').read())reward=[934.1182]
</code>
Including the three major components, EvoJAX implements the entire training pipeline in JAX. In the first release, we have created several [demo tasks](https://github.com/google/evojax/tree/main/evojax/task) to showcase EvoJAX's capacity. And we encourage the users to bring their own tasks. To this end, we will walk you through the process of creating EvoJAX tasks in this tutorial._____no_output_____To contribute a task implementation to EvoJAX, all you need to do is to implement the `VectorizedTask` interface.
The interface is defined as the following and you can see the related Python file [here](https://github.com/google/evojax/blob/main/evojax/task/base.py):
```python
class TaskState(ABC):
"""A template of the task state."""
obs: jnp.ndarray
class VectorizedTask(ABC):
"""Interface for all the EvoJAX tasks."""
max_steps: int
obs_shape: Tuple
act_shape: Tuple
test: bool
multi_agent_training: bool = False
@abstractmethod
def reset(self, key: jnp.array) -> TaskState:
"""This resets the vectorized task.
Args:
key - A jax random key.
Returns:
TaskState. Initial task state.
"""
raise NotImplementedError()
@abstractmethod
def step(self,
state: TaskState,
action: jnp.ndarray) -> Tuple[TaskState, jnp.ndarray, jnp.ndarray]:
"""This steps once the simulation.
Args:
state - System internal states of shape (num_tasks, *).
action - Vectorized actions of shape (num_tasks, action_size).
Returns:
TaskState. Task states.
jnp.ndarray. Reward.
jnp.ndarray. Task termination flag: 1 for done, 0 otherwise.
"""
raise NotImplementedError()
```_____no_output_____## MNIST classification_____no_output_____While one would obviously use gradient descent for MNIST in practice, the point is to show that neuroevolution can also solve them to some degree of accuracy within a short amount of time, which will be useful when these models are adapted within a more complicated task where gradient-based approaches may not work.
The following code snippet shows how we wrap the dataset and treat it as a one-step `VectorizedTask`._____no_output_____
<code>
from torchvision import datasets
from flax.struct import dataclass
from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask
# This state contains the information we wish to carry over to the next step.
# The state will be used in `VectorizedTask.step` method.
# In supervised learning tasks, we want to store the data and the labels so that
# we can calculate the loss or the accuracy and use that as the reward signal.
@dataclass
class State(TaskState):
obs: jnp.ndarray
labels: jnp.ndarray
def sample_batch(key, data, labels, batch_size):
ix = jax.random.choice(
key=key, a=data.shape[0], shape=(batch_size,), replace=False)
return (jnp.take(data, indices=ix, axis=0),
jnp.take(labels, indices=ix, axis=0))
def loss(prediction, target):
target = jax.nn.one_hot(target, 10)
return -jnp.mean(jnp.sum(prediction * target, axis=1))
def accuracy(prediction, target):
predicted_class = jnp.argmax(prediction, axis=1)
return jnp.mean(predicted_class == target)
class MNIST(VectorizedTask):
"""MNIST classification task.
We model the classification as an one-step task, i.e.,
`MNIST.reset` returns a batch of data to the agent, the agent outputs
predictions, `MNIST.step` returns the reward (loss or accuracy) and
terminates the rollout.
"""
def __init__(self, batch_size, test):
self.max_steps = 1
# These are similar to OpenAI Gym environment's
# observation_space and action_space.
# They are helpful for initializing the policy networks.
self.obs_shape = tuple([28, 28, 1])
self.act_shape = tuple([10, ])
# We download the dataset and normalize the value.
dataset = datasets.MNIST('./data', train=not test, download=True)
data = np.expand_dims(dataset.data.numpy() / 255., axis=-1)
labels = dataset.targets.numpy()
def reset_fn(key):
if test:
# In the test mode, we want to test on the entire test set.
batch_data, batch_labels = data, labels
else:
# In the training mode, we only sample a batch of training data.
batch_data, batch_labels = sample_batch(
key, data, labels, batch_size)
return State(obs=batch_data, labels=batch_labels)
# We use jax.vmap for auto-vectorization.
self._reset_fn = jax.jit(jax.vmap(reset_fn))
def step_fn(state, action):
if test:
# In the test mode, we report the classification accuracy.
reward = accuracy(action, state.labels)
else:
# In the training mode, we return the negative loss as the
# reward signal. It is legitimate to return accuracy as the
# reward signal in training too, but we find the performance is
# not as good as when we use the negative loss.
reward = -loss(action, state.labels)
            # This is a one-step task, so the last return value (the `done`
            # flag) is always one.
return state, reward, jnp.ones(())
# We use jax.vmap for auto-vectorization.
self._step_fn = jax.jit(jax.vmap(step_fn))
def reset(self, key):
return self._reset_fn(key)
def step(self, state, action):
return self._step_fn(state, action)_____no_output_____# Okay, let's test out the task with a ConvNet policy.
from evojax.policy.convnet import ConvNetPolicy
batch_size = 1024
train_task = MNIST(batch_size=batch_size, test=False)
test_task = MNIST(batch_size=batch_size, test=True)
policy = ConvNetPolicy(logger=logger)
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.006,
stdev_learning_rate=0.09,
init_stdev=0.04,
logger=logger,
seed=seed,
)
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=5000,
log_interval=100,
test_interval=1000,
n_repeats=1,
n_evaluations=1,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()EvoJAX: 2022-02-12 05:54:41,285 [INFO] ConvNetPolicy.num_params = 11274
EvoJAX: 2022-02-12 05:54:41,435 [INFO] Start to train for 5000 iterations.
EvoJAX: 2022-02-12 05:54:52,635 [INFO] Iter=100, size=64, max=-0.8691, avg=-1.0259, min=-1.4128, std=0.1188
EvoJAX: 2022-02-12 05:54:56,730 [INFO] Iter=200, size=64, max=-0.5346, avg=-0.6686, min=-1.2417, std=0.1188
EvoJAX: 2022-02-12 05:55:00,824 [INFO] Iter=300, size=64, max=-0.3925, avg=-0.4791, min=-0.5902, std=0.0456
EvoJAX: 2022-02-12 05:55:04,917 [INFO] Iter=400, size=64, max=-0.3357, avg=-0.3918, min=-0.5241, std=0.0388
EvoJAX: 2022-02-12 05:55:09,010 [INFO] Iter=500, size=64, max=-0.2708, avg=-0.3235, min=-0.4797, std=0.0317
EvoJAX: 2022-02-12 05:55:13,104 [INFO] Iter=600, size=64, max=-0.1965, avg=-0.2417, min=-0.3119, std=0.0238
EvoJAX: 2022-02-12 05:55:17,198 [INFO] Iter=700, size=64, max=-0.1784, avg=-0.2177, min=-0.3148, std=0.0268
EvoJAX: 2022-02-12 05:55:21,292 [INFO] Iter=800, size=64, max=-0.1797, avg=-0.2105, min=-0.2762, std=0.0222
EvoJAX: 2022-02-12 05:55:25,386 [INFO] Iter=900, size=64, max=-0.1803, avg=-0.2379, min=-0.3923, std=0.0330
EvoJAX: 2022-02-12 05:55:29,478 [INFO] Iter=1000, size=64, max=-0.1535, avg=-0.1856, min=-0.2457, std=0.0225
EvoJAX: 2022-02-12 05:55:31,071 [INFO] [TEST] Iter=1000, #tests=1, max=0.9627 avg=0.9627, min=0.9627, std=0.0000
EvoJAX: 2022-02-12 05:55:35,170 [INFO] Iter=1100, size=64, max=-0.1150, avg=-0.1438, min=-0.1971, std=0.0153
EvoJAX: 2022-02-12 05:55:39,263 [INFO] Iter=1200, size=64, max=-0.1278, avg=-0.1571, min=-0.2458, std=0.0193
EvoJAX: 2022-02-12 05:55:43,358 [INFO] Iter=1300, size=64, max=-0.1323, avg=-0.1641, min=-0.2089, std=0.0164
EvoJAX: 2022-02-12 05:55:47,453 [INFO] Iter=1400, size=64, max=-0.1331, avg=-0.1573, min=-0.2085, std=0.0163
EvoJAX: 2022-02-12 05:55:51,547 [INFO] Iter=1500, size=64, max=-0.1709, avg=-0.2142, min=-0.2950, std=0.0197
EvoJAX: 2022-02-12 05:55:55,640 [INFO] Iter=1600, size=64, max=-0.1052, avg=-0.1410, min=-0.2766, std=0.0279
EvoJAX: 2022-02-12 05:55:59,735 [INFO] Iter=1700, size=64, max=-0.0897, avg=-0.1184, min=-0.1591, std=0.0144
EvoJAX: 2022-02-12 05:56:03,828 [INFO] Iter=1800, size=64, max=-0.0777, avg=-0.1029, min=-0.1509, std=0.0165
EvoJAX: 2022-02-12 05:56:07,922 [INFO] Iter=1900, size=64, max=-0.0935, avg=-0.1285, min=-0.1682, std=0.0151
EvoJAX: 2022-02-12 05:56:12,015 [INFO] Iter=2000, size=64, max=-0.1158, avg=-0.1439, min=-0.2054, std=0.0155
EvoJAX: 2022-02-12 05:56:12,026 [INFO] [TEST] Iter=2000, #tests=1, max=0.9740 avg=0.9740, min=0.9740, std=0.0000
EvoJAX: 2022-02-12 05:56:16,121 [INFO] Iter=2100, size=64, max=-0.1054, avg=-0.1248, min=-0.1524, std=0.0101
EvoJAX: 2022-02-12 05:56:20,213 [INFO] Iter=2200, size=64, max=-0.1092, avg=-0.1363, min=-0.1774, std=0.0146
EvoJAX: 2022-02-12 05:56:24,306 [INFO] Iter=2300, size=64, max=-0.1079, avg=-0.1298, min=-0.1929, std=0.0158
EvoJAX: 2022-02-12 05:56:28,398 [INFO] Iter=2400, size=64, max=-0.1129, avg=-0.1352, min=-0.1870, std=0.0145
EvoJAX: 2022-02-12 05:56:32,491 [INFO] Iter=2500, size=64, max=-0.0790, avg=-0.0955, min=-0.1291, std=0.0113
EvoJAX: 2022-02-12 05:56:36,584 [INFO] Iter=2600, size=64, max=-0.1299, avg=-0.1537, min=-0.1947, std=0.0128
EvoJAX: 2022-02-12 05:56:40,675 [INFO] Iter=2700, size=64, max=-0.0801, avg=-0.0983, min=-0.1301, std=0.0094
EvoJAX: 2022-02-12 05:56:44,767 [INFO] Iter=2800, size=64, max=-0.0849, avg=-0.1014, min=-0.1511, std=0.0116
EvoJAX: 2022-02-12 05:56:48,859 [INFO] Iter=2900, size=64, max=-0.0669, avg=-0.0796, min=-0.1111, std=0.0090
EvoJAX: 2022-02-12 05:56:52,950 [INFO] Iter=3000, size=64, max=-0.0782, avg=-0.0975, min=-0.1304, std=0.0123
EvoJAX: 2022-02-12 05:56:52,960 [INFO] [TEST] Iter=3000, #tests=1, max=0.9768 avg=0.9768, min=0.9768, std=0.0000
EvoJAX: 2022-02-12 05:56:57,056 [INFO] Iter=3100, size=64, max=-0.0857, avg=-0.1029, min=-0.1421, std=0.0092
EvoJAX: 2022-02-12 05:57:01,149 [INFO] Iter=3200, size=64, max=-0.0769, avg=-0.0964, min=-0.1279, std=0.0120
EvoJAX: 2022-02-12 05:57:05,242 [INFO] Iter=3300, size=64, max=-0.0805, avg=-0.1021, min=-0.1200, std=0.0088
EvoJAX: 2022-02-12 05:57:09,335 [INFO] Iter=3400, size=64, max=-0.0642, avg=-0.0774, min=-0.0972, std=0.0080
EvoJAX: 2022-02-12 05:57:13,428 [INFO] Iter=3500, size=64, max=-0.0601, avg=-0.0771, min=-0.1074, std=0.0080
EvoJAX: 2022-02-12 05:57:17,522 [INFO] Iter=3600, size=64, max=-0.0558, avg=-0.0709, min=-0.1082, std=0.0094
EvoJAX: 2022-02-12 05:57:21,615 [INFO] Iter=3700, size=64, max=-0.0915, avg=-0.1048, min=-0.1519, std=0.0100
EvoJAX: 2022-02-12 05:57:25,709 [INFO] Iter=3800, size=64, max=-0.0525, avg=-0.0667, min=-0.0823, std=0.0069
EvoJAX: 2022-02-12 05:57:29,801 [INFO] Iter=3900, size=64, max=-0.0983, avg=-0.1150, min=-0.1447, std=0.0105
EvoJAX: 2022-02-12 05:57:33,895 [INFO] Iter=4000, size=64, max=-0.0759, avg=-0.0954, min=-0.1293, std=0.0114
EvoJAX: 2022-02-12 05:57:33,909 [INFO] [TEST] Iter=4000, #tests=1, max=0.9800 avg=0.9800, min=0.9800, std=0.0000
EvoJAX: 2022-02-12 05:57:38,004 [INFO] Iter=4100, size=64, max=-0.0811, avg=-0.0957, min=-0.1184, std=0.0086
EvoJAX: 2022-02-12 05:57:42,095 [INFO] Iter=4200, size=64, max=-0.0806, avg=-0.0960, min=-0.1313, std=0.0096
EvoJAX: 2022-02-12 05:57:46,187 [INFO] Iter=4300, size=64, max=-0.0698, avg=-0.0908, min=-0.1158, std=0.0100
EvoJAX: 2022-02-12 05:57:50,278 [INFO] Iter=4400, size=64, max=-0.0754, avg=-0.0930, min=-0.1202, std=0.0104
EvoJAX: 2022-02-12 05:57:54,368 [INFO] Iter=4500, size=64, max=-0.0708, avg=-0.0877, min=-0.1107, std=0.0088
EvoJAX: 2022-02-12 05:57:58,459 [INFO] Iter=4600, size=64, max=-0.0610, avg=-0.0773, min=-0.1032, std=0.0076
EvoJAX: 2022-02-12 05:58:02,550 [INFO] Iter=4700, size=64, max=-0.0704, avg=-0.0881, min=-0.1299, std=0.0110
EvoJAX: 2022-02-12 05:58:06,640 [INFO] Iter=4800, size=64, max=-0.0651, avg=-0.0812, min=-0.1042, std=0.0080
EvoJAX: 2022-02-12 05:58:10,732 [INFO] Iter=4900, size=64, max=-0.0588, avg=-0.0712, min=-0.1096, std=0.0081
EvoJAX: 2022-02-12 05:58:14,795 [INFO] [TEST] Iter=5000, #tests=1, max=0.9822, avg=0.9822, min=0.9822, std=0.0000
EvoJAX: 2022-02-12 05:58:14,800 [INFO] Training done, best_score=0.9822
</code>
Okay! Our implementation of the classification task is successful and EvoJAX achieved $>98\%$ test accuracy within 5 min on a V100 GPU.
As mentioned before, MNIST is a simple one-step task whose main purpose here is to get you familiar with the interfaces.
Next, we will build the classic cart-pole task from scratch._____no_output_____## Cart-pole swing up_____no_output_____In our cart-pole swing up task, the agent applies an action $a \in [-1, 1]$ to the cart, and we maintain four state variables:
1. cart position $x$
2. cart velocity $\dot{x}$
3. the angle between the cart and the pole $\theta$
4. the pole's angular velocity $\dot{\theta}$
We randomly sample the initial states and use forward Euler integration to update them:
$\mathbf{x}(t + \Delta t) = \mathbf{x}(t) + \Delta t \mathbf{v}(t)$ and
$\mathbf{v}(t + \Delta t) = \mathbf{v}(t) + \Delta t f(a, \mathbf{x}(t), \mathbf{v}(t))$
where $\mathbf{x}(t) = [x, \theta]^{\intercal}$, $\mathbf{v}(t) = [\dot{x}, \dot{\theta}]^{\intercal}$ and $f(\cdot)$ is a function that represents the physical model.
Thanks to `jax.vmap`, we are able to write the task as if it is designed to deal with non-batch inputs though in the training process JAX will automatically vectorize the task for us._____no_output_____
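To make these update equations concrete before the full task below, here is a minimal, self-contained sketch of one forward Euler step for a toy one-dimensional system, vectorized with `jax.vmap`. The dynamics function and all constants here are made up for illustration; they are not the cart-pole physics used in the next cell.
<code>
import jax
import jax.numpy as jnp

DT = 0.01  # integration time step (made-up value for this sketch)

def toy_accel(action, x, v):
    # A made-up physical model f(a, x, v): a damped, driven point mass.
    return 10.0 * action - 0.1 * v - 2.0 * x

def euler_step(state, action):
    # state = [x, v]; apply the two update equations shown above.
    x, v = state
    new_x = x + DT * v
    new_v = v + DT * toy_accel(action, x, v)
    return jnp.array([new_x, new_v])

# Write the update for a single, non-batched state; jax.vmap then
# vectorizes it over a batch of parallel tasks automatically.
batched_step = jax.jit(jax.vmap(euler_step))

states = jnp.zeros((8, 2))                  # 8 parallel tasks, each with (x, v)
actions = jnp.linspace(-1.0, 1.0, 8)        # one scalar action per task
print(batched_step(states, actions).shape)  # -> (8, 2)
</code>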
<code>
from evojax.task.base import TaskState
from evojax.task.base import VectorizedTask
import PIL
# Define some physics metrics.
GRAVITY = 9.82
CART_MASS = 0.5
POLE_MASS = 0.5
POLE_LEN = 0.6
FRICTION = 0.1
FORCE_SCALING = 10.0
DELTA_T = 0.01
CART_X_LIMIT = 2.4
# Define some constants for visualization.
SCREEN_W = 600
SCREEN_H = 600
CART_W = 40
CART_H = 20
VIZ_SCALE = 100
WHEEL_RAD = 5
@dataclass
class State(TaskState):
obs: jnp.ndarray # This is the tuple (x, x_dot, theta, theta_dot)
state: jnp.ndarray # This maintains the system's state.
steps: jnp.int32 # This tracks the rollout length.
key: jnp.ndarray # This serves as a random seed.
class CartPole(VectorizedTask):
"""A quick implementation of the cart-pole task."""
def __init__(self, max_steps=1000, test=False):
self.max_steps = max_steps
self.obs_shape = tuple([4, ])
self.act_shape = tuple([1, ])
def sample_init_state(sample_key):
return (
jax.random.normal(sample_key, shape=(4,)) * 0.2 +
jnp.array([0, 0, jnp.pi, 0])
)
def get_reward(x, x_dot, theta, theta_dot):
# We encourage
# the pole to be held upward (i.e., theta is close to 0) and
# the cart to be at the origin (i.e., x is close to 0).
reward_theta = (jnp.cos(theta) + 1.0) / 2.0
reward_x = jnp.cos((x / CART_X_LIMIT) * (jnp.pi / 2.0))
return reward_theta * reward_x
def update_state(action, x, x_dot, theta, theta_dot):
action = jnp.clip(action, -1.0, 1.0)[0] * FORCE_SCALING
s = jnp.sin(theta)
c = jnp.cos(theta)
total_m = CART_MASS + POLE_MASS
m_p_l = POLE_MASS * POLE_LEN
# This is the physical model: f-function.
x_dot_update = (
(-2 * m_p_l * (theta_dot ** 2) * s +
3 * POLE_MASS * GRAVITY * s * c +
4 * action - 4 * FRICTION * x_dot) /
(4 * total_m - 3 * POLE_MASS * c ** 2)
)
theta_dot_update = (
(-3 * m_p_l * (theta_dot ** 2) * s * c +
6 * total_m * GRAVITY * s +
6 * (action - FRICTION * x_dot) * c) /
(4 * POLE_LEN * total_m - 3 * m_p_l * c ** 2)
)
# This is the forward Euler integration.
x = x + x_dot * DELTA_T
theta = theta + theta_dot * DELTA_T
x_dot = x_dot + x_dot_update * DELTA_T
theta_dot = theta_dot + theta_dot_update * DELTA_T
return jnp.array([x, x_dot, theta, theta_dot])
def out_of_screen(x):
"""We terminate the rollout if the cart is out of the screen."""
beyond_boundary_l = jnp.where(x < -CART_X_LIMIT, 1, 0)
beyond_boundary_r = jnp.where(x > CART_X_LIMIT, 1, 0)
return jnp.bitwise_or(beyond_boundary_l, beyond_boundary_r)
def reset_fn(key):
next_key, key = jax.random.split(key)
state = sample_init_state(key)
return State(
obs=state, # We make the task fully-observable.
state=state,
steps=jnp.zeros((), dtype=int),
key=next_key,
)
self._reset_fn = jax.jit(jax.vmap(reset_fn))
def step_fn(state, action):
current_state = update_state(action, *state.state)
reward = get_reward(*current_state)
steps = state.steps + 1
done = jnp.bitwise_or(
out_of_screen(current_state[0]), steps >= max_steps)
# We reset the step counter to zero if the rollout has ended.
steps = jnp.where(done, jnp.zeros((), jnp.int32), steps)
# We automatically reset the states if the rollout has ended.
next_key, key = jax.random.split(state.key)
# current_state = jnp.where(
# done, sample_init_state(key), current_state)
return State(
state=current_state,
obs=current_state,
steps=steps,
key=next_key), reward, done
self._step_fn = jax.jit(jax.vmap(step_fn))
def reset(self, key):
return self._reset_fn(key)
def step(self, state, action):
return self._step_fn(state, action)
    # Optionally, we can implement a render method to visualize the task.
@staticmethod
def render(state, task_id):
"""Render a specified task."""
img = PIL.Image.new('RGB', (SCREEN_W, SCREEN_H), (255, 255, 255))
draw = PIL.ImageDraw.Draw(img)
x, _, theta, _ = np.array(state.state[task_id])
cart_y = SCREEN_H // 2 + 100
cart_x = x * VIZ_SCALE + SCREEN_W // 2
# Draw the horizon.
draw.line(
(0, cart_y + CART_H // 2 + WHEEL_RAD,
SCREEN_W, cart_y + CART_H // 2 + WHEEL_RAD),
fill=(0, 0, 0), width=1)
# Draw the cart.
draw.rectangle(
(cart_x - CART_W // 2, cart_y - CART_H // 2,
cart_x + CART_W // 2, cart_y + CART_H // 2),
fill=(255, 0, 0), outline=(0, 0, 0))
# Draw the wheels.
draw.ellipse(
(cart_x - CART_W // 2 - WHEEL_RAD,
cart_y + CART_H // 2 - WHEEL_RAD,
cart_x - CART_W // 2 + WHEEL_RAD,
cart_y + CART_H // 2 + WHEEL_RAD),
fill=(220, 220, 220), outline=(0, 0, 0))
draw.ellipse(
(cart_x + CART_W // 2 - WHEEL_RAD,
cart_y + CART_H // 2 - WHEEL_RAD,
cart_x + CART_W // 2 + WHEEL_RAD,
cart_y + CART_H // 2 + WHEEL_RAD),
fill=(220, 220, 220), outline=(0, 0, 0))
# Draw the pole.
draw.line(
(cart_x, cart_y,
cart_x + POLE_LEN * VIZ_SCALE * np.cos(theta - np.pi / 2),
cart_y + POLE_LEN * VIZ_SCALE * np.sin(theta - np.pi / 2)),
fill=(0, 0, 255), width=6)
return img_____no_output_____# Okay, let's test this simple cart-pole implementation.
rollout_key = jax.random.PRNGKey(seed=seed)
reset_key, rollout_key = jax.random.split(rollout_key, 2)
reset_key = reset_key[None, :] # Expand dim, the leading is the batch dim.
# Initialize the task.
cart_pole_task = CartPole()
t_state = cart_pole_task.reset(reset_key)
task_screens = [CartPole.render(t_state, 0)]
# Rollout with random actions.
done = False
step_cnt = 0
total_reward = 0
while not done:
action_key, rollout_key = jax.random.split(rollout_key, 2)
action = jax.random.uniform(
action_key, shape=(1, 1), minval=-1., maxval=1.)
t_state, reward, done = cart_pole_task.step(t_state, action)
total_reward = total_reward + reward
step_cnt += 1
if step_cnt % 4 == 0:
task_screens.append(CartPole.render(t_state, 0))
print('reward={}, steps={}'.format(total_reward, step_cnt))
# Visualize the rollout.
gif_file = os.path.join(log_dir, 'rand_cartpole.gif')
task_screens[0].save(
gif_file, save_all=True, append_images=task_screens[1:], loop=0)
Image(open(gif_file,'rb').read())reward=[4.687451], steps=221
</code>
The random policy does not solve the cart-pole task, but our implementation seems to be correct. Let's now plug this task into EvoJAX._____no_output_____
<code>
train_task = CartPole(test=False)
test_task = CartPole(test=True)
# We use the same policy and solver to solve this "new" task.
policy = MLPPolicy(
input_dim=train_task.obs_shape[0],
hidden_dims=[64, 64],
output_dim=train_task.act_shape[0],
logger=logger,
)
solver = PGPE(
pop_size=64,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=0.05,
seed=seed,
)
trainer = Trainer(
policy=policy,
solver=solver,
train_task=train_task,
test_task=test_task,
max_iter=600,
log_interval=100,
test_interval=200,
n_repeats=5,
n_evaluations=128,
seed=seed,
log_dir=log_dir,
logger=logger,
)
_ = trainer.run()EvoJAX: 2022-02-12 05:58:16,702 [INFO] MLPPolicy.num_params = 4545
EvoJAX: 2022-02-12 05:58:16,868 [INFO] Start to train for 600 iterations.
EvoJAX: 2022-02-12 05:58:26,417 [INFO] Iter=100, size=64, max=704.6008, avg=538.1765, min=115.6323, std=110.1506
EvoJAX: 2022-02-12 05:58:34,678 [INFO] Iter=200, size=64, max=716.4336, avg=595.8668, min=381.3772, std=60.5778
EvoJAX: 2022-02-12 05:58:35,551 [INFO] [TEST] Iter=200, #tests=128, max=695.8007 avg=685.7385, min=668.2902, std=4.3287
EvoJAX: 2022-02-12 05:58:44,053 [INFO] Iter=300, size=64, max=759.5718, avg=658.8391, min=296.1095, std=71.2600
EvoJAX: 2022-02-12 05:58:52,540 [INFO] Iter=400, size=64, max=919.3878, avg=839.7709, min=134.9505, std=136.0545
EvoJAX: 2022-02-12 05:58:52,624 [INFO] [TEST] Iter=400, #tests=128, max=930.0361 avg=915.0107, min=900.9803, std=5.1936
EvoJAX: 2022-02-12 05:59:00,732 [INFO] Iter=500, size=64, max=926.3024, avg=812.4763, min=121.6825, std=229.5144
EvoJAX: 2022-02-12 05:59:09,005 [INFO] [TEST] Iter=600, #tests=128, max=942.0136, avg=922.7744, min=235.6000, std=61.2483
EvoJAX: 2022-02-12 05:59:09,010 [INFO] Training done, best_score=922.7744
# Let's visualize the learned policy.
def render(task, algo, policy):
"""Render the learned policy."""
task_reset_fn = jax.jit(test_task.reset)
policy_reset_fn = jax.jit(policy.reset)
step_fn = jax.jit(test_task.step)
act_fn = jax.jit(policy.get_actions)
params = algo.best_params[None, :]
task_s = task_reset_fn(jax.random.PRNGKey(seed=seed)[None, :])
policy_s = policy_reset_fn(task_s)
images = [CartPole.render(task_s, 0)]
done = False
step = 0
reward = 0
while not done:
act, policy_s = act_fn(task_s, params, policy_s)
task_s, r, d = step_fn(task_s, act)
step += 1
reward = reward + r
done = bool(d[0])
if step % 3 == 0:
images.append(CartPole.render(task_s, 0))
print('reward={}'.format(reward))
return images
imgs = render(test_task, solver, policy)
gif_file = os.path.join(log_dir, 'trained_cartpole.gif')
imgs[0].save(
gif_file, save_all=True, append_images=imgs[1:], duration=40, loop=0)
Image(open(gif_file,'rb').read())reward=[923.1105]
</code>
Nice! EvoJAX is able to solve the new cart-pole task within a minute.
In this tutorial, we walked you through the process of creating tasks from scratch. The two examples we used are simple and are supposed to help you understand the interfaces. If you are interested in learning more, please check out our GitHub [repo](https://github.com/google/evojax/tree/main/evojax/task).
Please let us ([email protected]) know if you have any problems or suggestions, thanks!_____no_output_____
<code>
_____no_output_____
</code>
| {
"repository": "jamesbut/evojax",
"path": "examples/notebooks/TutorialTaskImplementation.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 365,
"size": 591491,
"hexsha": "d0677754cddfb6416d1c99e3af9aa84b4e95cd38",
"max_line_length": 352222,
"avg_line_length": 565.4789674952,
"alphanum_fraction": 0.8887844447
} |
# Notebook from wangyendt/deeplearning_models
Path: sklearn/sklearn learning/demonstration/auto_examples_jupyter/applications/plot_species_distribution_modeling.ipynb
<code>
%matplotlib inline_____no_output_____
</code>
# Species distribution modeling
Modeling species' geographic distributions is an important
problem in conservation biology. In this example we
model the geographic distribution of two South American
mammals given past observations and 14 environmental
variables. Since we have only positive examples (there are
no unsuccessful observations), we cast this problem as a
density estimation problem and use the :class:`sklearn.svm.OneClassSVM`
as our modeling tool. The dataset is provided by Phillips et. al. (2006).
If available, the example uses
`basemap <https://matplotlib.org/basemap/>`_
to plot the coast lines and national boundaries of South America.
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/details/13408/0>`_ ,
  also known as the Forest Small Rice Rat, a rodent that lives in
  Colombia, Ecuador, Peru, and Venezuela.
References
----------
* `"Maximum entropy modeling of species geographic distributions"
<http://rob.schapire.net/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
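Before the full example below, here is a minimal sketch of the core idea, on made-up Gaussian toy data rather than the species data: a `OneClassSVM` is fit on positive samples only, and its `decision_function` assigns higher scores to points that resemble the training distribution.
<code>
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = rng.randn(200, 2)   # "presence-only" toy samples from a 2D Gaussian

clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(X_train)

# Points near the bulk of the training data score higher than far-away points.
X_query = np.array([[0.0, 0.0],   # near the training distribution
                    [5.0, 5.0]])  # far from it
print(clf.decision_function(X_query))
</code>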
_____no_output_____
<code>
# Authors: Peter Prettenhofer <[email protected]>
# Jake Vanderplas <[email protected]>
#
# License: BSD 3 clause
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import Bunch
from sklearn.datasets import fetch_species_distributions
from sklearn import svm, metrics
# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
from mpl_toolkits.basemap import Basemap
basemap = True
except ImportError:
basemap = False
print(__doc__)
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
def create_species_bunch(species_name, train, test, coverages, xgrid, ygrid):
"""Create a bunch with information about a particular organism
This will use the test/train record arrays to extract the
data specific to the given species name.
"""
bunch = Bunch(name=' '.join(species_name.split("_")[:2]))
species_name = species_name.encode('ascii')
points = dict(test=test, train=train)
for label, pts in points.items():
# choose points associated with the desired species
pts = pts[pts['species'] == species_name]
bunch['pts_%s' % label] = pts
# determine coverage values for each of the training & testing points
ix = np.searchsorted(xgrid, pts['dd long'])
iy = np.searchsorted(ygrid, pts['dd lat'])
bunch['cov_%s' % label] = coverages[:, -iy, ix].T
return bunch
def plot_species_distribution(species=("bradypus_variegatus_0",
"microryzomys_minutus_0")):
"""
Plot the species distribution.
"""
if len(species) > 2:
print("Note: when more than two species are provided,"
" only the first two will be used")
t0 = time()
# Load the compressed data
data = fetch_species_distributions()
# Set up the data grid
xgrid, ygrid = construct_grids(data)
# The grid in x,y coordinates
X, Y = np.meshgrid(xgrid, ygrid[::-1])
# create a bunch for each species
BV_bunch = create_species_bunch(species[0],
data.train, data.test,
data.coverages, xgrid, ygrid)
MM_bunch = create_species_bunch(species[1],
data.train, data.test,
data.coverages, xgrid, ygrid)
# background points (grid coordinates) for evaluation
np.random.seed(13)
background_points = np.c_[np.random.randint(low=0, high=data.Ny,
size=10000),
np.random.randint(low=0, high=data.Nx,
size=10000)].T
# We'll make use of the fact that coverages[6] has measurements at all
# land points. This will help us decide between land and water.
land_reference = data.coverages[6]
# Fit, predict, and plot for each species.
for i, species in enumerate([BV_bunch, MM_bunch]):
print("_" * 80)
print("Modeling distribution of species '%s'" % species.name)
# Standardize features
mean = species.cov_train.mean(axis=0)
std = species.cov_train.std(axis=0)
train_cover_std = (species.cov_train - mean) / std
# Fit OneClassSVM
print(" - fit OneClassSVM ... ", end='')
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(train_cover_std)
print("done.")
# Plot map of South America
plt.subplot(1, 2, i + 1)
if basemap:
print(" - plot coastlines using basemap")
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c')
m.drawcoastlines()
m.drawcountries()
else:
print(" - plot coastlines from coverage")
plt.contour(X, Y, land_reference,
levels=[-9998], colors="k",
linestyles="solid")
plt.xticks([])
plt.yticks([])
print(" - predict species distribution")
# Predict species distribution using the training data
Z = np.ones((data.Ny, data.Nx), dtype=np.float64)
# We'll predict only for the land points.
idx = np.where(land_reference > -9999)
coverages_land = data.coverages[:, idx[0], idx[1]].T
pred = clf.decision_function((coverages_land - mean) / std)
Z *= pred.min()
Z[idx[0], idx[1]] = pred
levels = np.linspace(Z.min(), Z.max(), 25)
Z[land_reference == -9999] = -9999
# plot contours of the prediction
plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
plt.colorbar(format='%.2f')
# scatter training/testing points
plt.scatter(species.pts_train['dd long'], species.pts_train['dd lat'],
s=2 ** 2, c='black',
marker='^', label='train')
plt.scatter(species.pts_test['dd long'], species.pts_test['dd lat'],
s=2 ** 2, c='black',
marker='x', label='test')
plt.legend()
plt.title(species.name)
plt.axis('equal')
# Compute AUC with regards to background points
pred_background = Z[background_points[0], background_points[1]]
pred_test = clf.decision_function((species.cov_test - mean) / std)
scores = np.r_[pred_test, pred_background]
y = np.r_[np.ones(pred_test.shape), np.zeros(pred_background.shape)]
fpr, tpr, thresholds = metrics.roc_curve(y, scores)
roc_auc = metrics.auc(fpr, tpr)
plt.text(-35, -70, "AUC: %.3f" % roc_auc, ha="right")
print("\n Area under the ROC curve : %f" % roc_auc)
print("\ntime elapsed: %.2fs" % (time() - t0))
plot_species_distribution()
plt.show()_____no_output_____
</code>
| {
"repository": "wangyendt/deeplearning_models",
"path": "sklearn/sklearn learning/demonstration/auto_examples_jupyter/applications/plot_species_distribution_modeling.ipynb",
"matched_keywords": [
"biology"
],
"stars": 1,
"size": 9180,
"hexsha": "d06899ca1424d963c9296da32df2653028d6a5cc",
"max_line_length": 6943,
"avg_line_length": 170,
"alphanum_fraction": 0.6037037037
} |
# Notebook from bastivkl/nh2020-curriculum
Path: we-geometry-benson/class-notebook.ipynb
# The Structure and Geometry of the Human Brain
[Noah C. Benson](https://nben.net/) <[[email protected]](mailto:[email protected])>
[eScience Institute](https://escience.washington.edu/)
[University of Washington](https://www.washington.edu/)
[Seattle, WA 98195](https://seattle.gov/)_____no_output_____## Introduction_____no_output_____This notebook is designed to accompany the lecture "Introduction to the Strugure and Geometry of the Human Brain" as part of the Neurohackademt 2020 curriculum. It can be run either in Neurohackademy's Jupyterhub environment, or using the `docker-compose.yml` file (see the `README.md` file for instructions).
In this notebook we will examine various structural and geometric data used commonly in neuroscience. These demos will primarily use [FreeSurfer](http://surfer.nmr.mgh.harvard.edu/) subjects. In the lecture and the Neurohackademy Jupyterhub environment, we will look primarily at a subject named `nben`; however, you can alternately use the subject `bert`, which is an example subject that comes with FreeSurfer. Optionally, this notebook can be used with subject from the [Human Connectome Project (HCP)](https://db.humanconnectome.org/)--see the `README.md` file for instructions on getting credentials for use with the HCP.
We will look at these data using both the [`nibabel`](https://nipy.org/nibabel/), which is an excellent core library for importing various kinds of neuroimaging data, as well as [`neuropythy`](https://github.com/noahbenson/neuropythy), which builds on `nibabel` to provide a user-friendly API for interacting with subjects. At its core, `neuropythy` is a library for interacting with neuroscientific data in the context of brain structure.
This notebook itself consists of this introduction as well as four sections that follow the topic areas in the slide-deck from the lecture. These sections are intended to be explored in order._____no_output_____### Libraries_____no_output_____Before running any of the code in this notebook, we need to start by importing a few libraries and making sure we have configured those that need to be configured (mainly, `matplotlib`)._____no_output_____
<code>
# We will need os for paths:
import os
# Numpy, Scipy, and Matplotlib are effectively standard libraries.
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
# Ipyvolume is a 3D plotting library that is used by neuropythy.
import ipyvolume as ipv
# Nibabel is the library that understands various neuroimaging file
# formats; it is also used by neuropythy.
import nibabel as nib
# Neuropythy is the main library we will be using in this notebook.
import neuropythy as ny_____no_output_____%matplotlib inline_____no_output_____
</code>
## MRI and Volumetric Data_____no_output_____The first section of this notebook will deal with MR images and volumetric data. We will start by loading in an MRImage. We will use the same image that was visualized in the lecture (if you are not using the Jupyterhub, you won't have access to this subject, but you can use the subject `'bert'` instead).
---_____no_output_____### Load a subject._____no_output_____---
For starters, we will load the subject._____no_output_____
<code>
subject_id = 'nben'
subject = ny.freesurfer_subject(subject_id)
# If you have configured the HCP credentials and wish to use an HCP
# subject instead of nben:
#
#subject_id = 111312
#subject = ny.hcp_subject(subject_id)_____no_output_____
</code>
The `freesurfer_subject` function returns a `neuropythy` `Subject` object._____no_output_____
<code>
subject_____no_output_____
</code>
---_____no_output_____### Load an MRImage file._____no_output_____---
Let's load in an image file. FreeSurfer directories contain a subdirectory `mri/` that contains all of the volumetric/image data for the subject. This includes images that have been preprocessed as well as copies of the original T1-weighted image. We will load an image called `T1.mgz`._____no_output_____
<code>
# This function will load data from a subject's directory using neuropythy's
# builtin ny.load() function; in most cases, this calls down to nibabel's own
# nib.load() function.
im = subject.load('mri/T1.mgz')
# For an HCP subject, use this file instead:
#im = subject.load("T1w/T1w_acpc_dc.nii.gz")
# The return value should be a nibabel image object.
im_____no_output_____# In fact, we could just as easily have loaded the same object using nibabel:
im_from_nibabel = nib.load(subject.path + '/mri/T1.mgz')
print('From neuropythy: ', im.get_filename())
print('From nibabel: ', im_from_nibabel.get_filename())_____no_output_____# And neuropythy manages this image as part of the subject-data. Neuropythy's
# name for it is 'intensity_normalized', which is due to its position as an
# output in FreeSurfer's processing pipeline.
ny_im = subject.images['intensity_normalized']
(ny_im.dataobj == im.dataobj).all()_____no_output_____
</code>
---_____no_output_____### Visualize some slices of the image._____no_output_____---
Next, we will make 2D plots of some of the image slices. Feel free to change which slices you visualize; I have just chosen some defaults._____no_output_____
<code>
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = im.dataobj[:,slice_num,:]
else:
imslice = im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')_____no_output_____
</code>
---_____no_output_____### Visualize the 3D Image as a whole._____no_output_____---
Next we will use `ipyvolume` to render a 3D View of the volume. The volume plotting function is part of `ipyvolume` and has a variety of options that are beyond the scope of this demo._____no_output_____
<code>
# Note that this will generate a warning, which can be safely ignored.
fig = ipv.figure()
ipv.quickvolshow(subject.images['intensity_normalized'].dataobj)
ipv.show()_____no_output_____
</code>
---_____no_output_____### Load and visualize anatomical segments._____no_output_____---
FreeSurfer creates a segmentation image file called `aseg.mgz`, which we can load and use to identify ROIs. First, we will load this file and plot some slices from it._____no_output_____
<code>
# First load the file; any of these lines will work:
#aseg = subject.load('mri/aseg.mgz')
#aseg = nib.load(subject.path + '/mri/aseg.mgz')
aseg = subject.images['segmentation']_____no_output_____
</code>
We can plot this as-is, but we don't yet know what the numeric values correspond to. Nonetheless, let's go ahead. This code block is the same as the block we used to plot slices above except that it uses the new image `aseg` we just loaded._____no_output_____
<code>
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')_____no_output_____
</code>
Clearly, the values in the plots above are discretized, but it's not clear what they correspond to. The mapping from label numbers to names and colors can be found in the various FreeSurfer color LUT files. These are all located in the FreeSurfer home directory and end with `LUT.txt`. They are essentially spreadsheets and are loaded by `neuropythy` as `pandas.DataFrame` objects. In `neuropythy`, the LUT objects are associated with the `'freesurfer_home'` configuration variable. This has been set up automatically in the course and the `neuropythy` docker-image._____no_output_____
<code>
ny.config['freesurfer_home'].luts['aseg']_____no_output_____
</code>
So suppose we want to look at left cerebral cortex. In the table, this has value 3. We can find this value in the images we are plotting and plot only it to see the ROI in each of the slices we plot._____no_output_____
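Before plotting, we can confirm this entry directly in the LUT. This is a small sketch that assumes (as the iteration over `lut.iterrows()` later in this notebook suggests) that the DataFrame's index is the label ID.
<code>
# Look up the aseg LUT row for label 3 (left cerebral cortex).
# (Assumes the DataFrame index is the label ID.)
lut = ny.config['freesurfer_home'].luts['aseg']
print(lut.loc[3])
# Which label IDs actually occur in this subject's aseg image?
print(np.unique(np.asarray(aseg.dataobj).astype(int))[:20])
</code>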
<code>
# We want to plot left cerebral cortex (label ID = 3, per the LUT)
label = 3
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Plot only the values that are equal to the label ID.
imslice = (imslice == label)
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')_____no_output_____
</code>
By plotting the LH cortex specifically, we can see that LEFT is in the direction of increasing rows (down the image slices, if you used `axis = 2`), thus RIGHT must be in the direction of decreasing rows in the image._____no_output_____Let's also make some images from these slices in which we replace each of the pixels in each slice with the color recommended by the color LUT._____no_output_____
<code>
# We are using this color LUT:
lut = ny.config['freesurfer_home'].luts['aseg']
# The axis:
axis = 2
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Convert the slice into an RGBA image using the color LUT:
rgba_im = np.zeros(imslice.shape + (4,))
for (label_id, row) in lut.iterrows():
rgba_im[imslice == label_id,:] = row['color']
ax.imshow(rgba_im)
# Turn off labels:
ax.axis('off')_____no_output_____
</code>
## Cortical Surface Data_____no_output_____Cortical surface data is handled and represented quite differently from volumetric data. This section demonstrates how to interact with cortical surface data in a Jupyter notebook, primarily using `neuropythy`.
To start off, however, we will just load a surface file using `nibabel` to see what one contains.
---_____no_output_____### Load a Surface-Geometry File Using `nibabel`_____no_output_____---_____no_output_____
<code>
# Each subject has a number of surface files; we will look at the
# left hemisphere, white surface.
hemi = 'lh'
surf = 'white'
# Feel free to change hemi to 'rh' for the RH and surf to 'pial'
# or 'inflated'.
# We load the surface from the subject's 'surf' directory in FreeSurfer.
# Nibabel refers to these files as "geometry" files.
filename = subject.path + f'/surf/{hemi}.{surf}'
# If you are using an HCP subject, you should instead load from this path:
#relpath = f'T1w/{subject.name}/surf/{hemi}.{surf}'
#filename = subject.pseudo_path.local_path(relpath)
# Read the file, using nibabel.
surface_data = nib.freesurfer.read_geometry(filename)
# What does this return?
surface_data_____no_output_____
</code>
So when `nibabel` reads in one of these surface files, what we get back is an `n x 3` matrix of real numbers (coordinates) and an `m x 3` matrix of integers (triangle indices).
The `ipyvolume` module has support for plotting triangle meshes--let's see how it works._____no_output_____
<code>
# Extract the coordinates and triangle-faces.
(coords, faces) = surface_data
# And get the (x,y,z) from coordinates.
(x, y, z) = coords.T
# Now, plot the triangle mesh.
fig = ipv.figure()
ipv.plot_trisurf(x, y, z, triangles=faces)
# Adjust the plot limits (making them equal makes the plot look good).
ipv.pylab.xlim(-100,100)
ipv.pylab.ylim(-100,100)
ipv.pylab.zlim(-100,100)
# Generally, one must call show() with ipyvolume.
ipv.show()_____no_output_____
</code>
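As a quick, self-contained check of the `(coords, faces)` structure we just plotted, we can count the vertices and triangles and estimate the total surface area directly from the arrays returned by `nibabel`; this sketch uses only `numpy`.
<code>
# coords is an (n x 3) array of vertex positions in mm; faces is an
# (m x 3) array of integer indices into coords.
print('vertices: ', coords.shape[0])
print('triangles:', faces.shape[0])
# Each triangle's area from its corner coordinates via the cross product;
# the sum approximates the total area of this surface mesh.
(a, b, c) = (coords[faces[:, 0]], coords[faces[:, 1]], coords[faces[:, 2]])
tri_areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
print('total surface area: %.1f mm^2' % tri_areas.sum())
</code>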
---_____no_output_____### Hemisphere (`neuropythy.mri.Cortex`) objects_____no_output_____---
Although one can load and plot cortical surfaces with `nibabel`, `neuropythy` builds on `nibabel` by providing a framework in which the cortical surface can be represented. It includes a number of utilities related specifically to cortical surface analysis, and allows much of the power of FreeSurfer to be leveraged through simple Python data structures.
To start with, we will look at our subject's hemispheres (`neuropythy.mri.Cortex` objects) and how they represent surfaces._____no_output_____
<code>
# Grab the hemisphere for our subject.
cortex = subject.hemis[hemi]
# Note that `cortex = subject.lh` and `cortex = subject.rh` are equivalent
# to `cortex = subject.hemis['lh']` and `cortex = subject.hemis['rh']`.
# What is cortex?
cortex_____no_output_____
</code>
From this we can see which hemisphere we have selected, the number of triangle faces it has, and the number of vertices it has. Let's look at a few of its properties._____no_output_____#### Surfaces
Each hemisphere has a number of surfaces; we can view them through the `cortex.surfaces` dictionary._____no_output_____
<code>
cortex.surfaces.keys()_____no_output_____cortex.surfaces['white_smooth']_____no_output_____
</code>
The `'white_smooth'` mesh is a heavily smoothed version of the white surface mesh. You might notice that there is also a `'midgray'` surface, even though FreeSurfer does not include a mid-gray mesh file; the `'midgray'` mesh can be made by averaging the white and pial mesh vertices.
Recall that all surfaces of a hemisphere have equivalent vertices and identical triangles. We can test that here._____no_output_____
<code>
np.array_equal(cortex.surfaces['white'].tess.faces,
cortex.surfaces['pial'].tess.faces)_____no_output_____
</code>
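As a minimal check of the claim above that the mid-gray surface is the average of the white and pial surfaces, we can compare the vertex coordinates directly. This assumes that each surface exposes its vertex positions as a `coordinates` array, which neuropythy meshes generally do.
<code>
# The midgray vertex positions should be (approximately) the vertex-wise
# mean of the white and pial positions.
# (Assumes each surface exposes a `coordinates` array of vertex positions.)
white_xyz = cortex.surfaces['white'].coordinates
pial_xyz = cortex.surfaces['pial'].coordinates
mid_xyz = cortex.surfaces['midgray'].coordinates
print(np.allclose(mid_xyz, (white_xyz + pial_xyz) / 2))
</code>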
Surfaces track a large amount of data about their meshes and vertices and inherit most of the properties of hemispheres that are discussed below. In addition, surfaces uniquely carry data about cortical distances and surface areas. For example:_____no_output_____
<code>
# The area of each of the triangle-faces in the white surface mesh, in mm^2.
cortex.surfaces['white'].face_areas_____no_output_____# The length of each edge in the white surface mesh, in mm.
cortex.surfaces['white'].edge_lengths_____no_output_____# And the edges themselves, as indices like the faces.
cortex.surfaces['white'].tess.edges_____no_output_____
</code>
#### Vertex Properties_____no_output_____Properties are values assigned to each surface vertex. They can include anatomical or geometric properties, such as ROI labels (i.e., a vector of values for each vertex: `True` if the vertex is in the ROI and `False` if not), cortical thickness (in mm), the vertex surface-area (in square mm), the curvature, or data from other functional measurements, such as BOLD-time-series data or source-localized MEG data.
The properties of a hemisphere are stored in the `properties` value. `Cortex.properties` is a kind of dictionary object and can generally be treated as a dictionary. One can also access property vectors via `cortex.prop(property_name)` rather than `cortex.properties[property_name]`; the former is largely short-hand for the latter._____no_output_____
<code>
sorted(cortex.properties.keys())_____no_output_____
</code>
A few things worth noting: First, not all FreeSurfer subjects will have all of the properties listed. This is because different versions of FreeSurfer include different files, and sometimes subjects are distributed without their full set of files (e.g., to save storage space). However, rather than trying to load all of these files right away, `neuropythy` makes place-holders for them and loads them only when first requested (this drastically reduces loading time). Accordingly, if you try to use a property whose file doesn't exist, an exception will be raised.
Additionally, notice that the first several properties are for Brodmann Area labels. The ones ending in `_label` are `True` / `False` boolean labels indicating whether the vertex is in the given ROI (according to an estimation based on anatomy). The subject we are using in the Jupyterhub environment does not actually have these files included, but they do have, for example `BA1_weight` files. The weights represent the probability that a vertex is in the associated ROI, so we can make a label from this._____no_output_____
<code>
ba1_label = cortex.prop('BA1_weight') >= 0.5_____no_output_____
</code>
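Before plotting, a quick look at the label vector itself gives a sense of the ROI's size (a small sketch using only the label and weight vectors defined above).
<code>
# How many vertices are in the BA1 label, and what fraction of the
# hemisphere do they cover?
print('ROI vertices:           ', np.sum(ba1_label))
print('fraction of hemisphere: ', np.mean(ba1_label))
# Mean probability/weight among the vertices we kept in the label:
print('mean BA1 weight in ROI: ', np.mean(cortex.prop('BA1_weight')[ba1_label]))
</code>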
We can now plot this property using `neuropythy`'s `cortex_plot()` function._____no_output_____
<code>
ny.cortex_plot(cortex.surfaces['white'], color=ba1_label)_____no_output_____
</code>
**Improving this plot.** While this plot shows us where the ROI is, it's rather hard to interpret. Instead, we would prefer to plot the ROI in red and the rest of the brain using a binarized curvature map. `neuropythy` supports this kind of binarized curvature map as a default underlay, so, in fact, the easiest way to accomplish this is to tell `cortex_plot` to color the surface red, but to add a vertex mask that instructs the function to *only* color the ROI vertices.
Additionally, the ROI is easier to see on the inflated surface, so we will switch to that._____no_output_____
<code>
ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label)_____no_output_____
</code>
We can optionally make this red ROI plot a little bit transparent as well._____no_output_____
<code>
ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label, alpha=0.4)_____no_output_____
</code>
**Plotting the weight instead of the label.** Alternately, we might have wanted to plot the weight / probability of the ROI. Continuous properties like probability can be plotted using color-maps, similar to how they are plotted in `matplotlib`._____no_output_____
<code>
ny.cortex_plot(cortex.surfaces['inflated'], color='BA1_weight',
cmap='hot', vmin=0, vmax=1, alpha=0.6)_____no_output_____
</code>
**Another property.** Other properties can be very informative. For example, the cortical thickness property, which is stored in mm, can show us which parts of the brain are relatively thick or thin._____no_output_____
<code>
ny.cortex_plot(cortex.surfaces['inflated'], color='thickness',
cmap='hot', vmin=1, vmax=6)_____no_output_____
</code>
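A quick numeric companion to this plot (a small sketch using `numpy` only): summary statistics of the thickness property across the hemisphere's vertices.
<code>
th = cortex.prop('thickness')
print('mean thickness: %.2f mm' % np.nanmean(th))
print('5th/50th/95th percentiles:', np.nanpercentile(th, [5, 50, 95]))
</code>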
---_____no_output_____### Interpolation (Surface to Image and Image to Surface)_____no_output_____---
Hemisphere/Cortex objects also manage interpolation, both to/from image volumes as well as to/from the cortical surfaces of other subjects (we will demo interpolation between subjects in the last section). Here we will focus on the former: interpolation to and from images.
**Cortex to Image Interpolation.**
Because our subjects only have structural data and do not have functional data, we do not have anything handy to interpolate out of a volume onto a surface. So instead, we will start by interpolating from the cortex into the volume. A good property for this is the subject's cortical thickness. Thickness is difficult to calculate in the volume, so if one wants thickness data in a volume, it would typically be calculated using the surface meshes and then projected back into the volume. We will do that now.
Note that in order to create a new image, we have to provide the interpolation method with some information about how the image is oriented and shaped. This includes two critical pieces of information: the `'image_shape'` (i.e., the `numpy.shape` of the image's array) and the `'affine'`, which is simply the affine transformation that aligns the image with the subject. Usually, it is easiest to provide this information in the form of a template image. For all kinds of subjects (HCP and FreeSurfer), an image is correctly aligned with a subject, and thus the subject's cortical surfaces, if its affine transformation correctly aligns it with `subject.images['brain']`. _____no_output_____
<code>
# We need a template image; the new image will have the same shape,
# affine, image type, and header as the template image.
template_im = subject.images['brain']
# We can use just the template's header for this.
template = template_im.header
# We can alternately just provide information about the image geometry:
#template = {'image_shape': (256,256,256), 'affine': template_im.affine}
# Alternately, we can provide an actual image into which the data will
# be inserted. In this case, we would want to make a cleared-duplicate
# of the brain image (i.e. all voxels set to 0)
#template = ny.image_clear(template_im)
# All of the above templates should provide the same result.
# We are going to save the property from both hemispheres into an image.
lh_prop = subject.lh.prop('thickness')
rh_prop = subject.rh.prop('thickness')
# This may be either 'linear' or 'nearest'; for thickness 'linear'
# is probably best, but the difference will be small.
method = 'linear'
# Do the interpolation. This may take a few minutes the first time it is run.
new_im = subject.cortex_to_image((lh_prop, rh_prop), template, method=method,
# The template is integer, so we override it.
dtype='float')_____no_output_____
</code>
Now that we have made this new image, let's take a look at it by plotting some slices from it, once again._____no_output_____
<code>
# What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = new_im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = new_im.dataobj[:,slice_num,:]
else:
imslice = new_im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='hot', vmin=0, vmax=6)
# Turn off labels:
ax.axis('off')_____no_output_____
</code>
**Image to Cortex Interpolation.** A good test of our interpolation methods is now to ensure that, when we interpolate data from the image we just created back to the cortex, we get approximately the same values. The values we interpolate back out of the volume will not be identical to the values we started with because the resolution of the image is finite, but they should be close.
The `image_to_cortex()` method of the `Subject` class is capable of interpolating from an image to the cortical surface(s), based on the alignment of the image with the cortex._____no_output_____
<code>
(lh_prop_interp, rh_prop_interp) = subject.image_to_cortex(new_im, method=method)_____no_output_____
</code>
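Before visualizing, a quick numeric comparison of the round-tripped values with the originals (a small sketch; the exact numbers depend on the interpolation method chosen above).
<code>
for (name, orig_prop, interp_prop) in [('lh', lh_prop, lh_prop_interp),
                                       ('rh', rh_prop, rh_prop_interp)]:
    err = np.abs(np.asarray(interp_prop) - np.asarray(orig_prop))
    print('%s: median abs. difference = %.3f mm' % (name, np.nanmedian(err)))
</code>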
We can plot the hemispheres together to visualize the difference between the original thickness and the thickness that was interpolated into an image and then back onto the cortex._____no_output_____
<code>
fig = ny.cortex_plot(subject.lh, surface='midgray',
color=(lh_prop_interp - lh_prop)**2,
cmap='hot', vmin=0, vmax=2)
fig = ny.cortex_plot(subject.rh, surface='midgray',
color=(rh_prop_interp - rh_prop)**2,
cmap='hot', vmin=0, vmax=2,
figure=fig)
ipv.show()_____no_output_____
</code>
## Intersubject Surface Alignment_____no_output_____Comparison between multiple subjects is usually accomplished by first aligning each subject's cortical surface with that of a template surface (*fsaverage* in FreeSurfer, *fs_LR* in the HCP), then interpolating between vertices in the aligned arrangements. The alignments to the template are calculated and saved by FreeSurfer, the HCPpipelines, and various other utilities, but as of when this tutorial was written, `neuropythy` only supports these two formats. Alignments are calculated by warping the vertices of the subject's spherical (fully inflated) hemisphere in a diffeomorphic fashion with the goal of minimizing the difference between the sulcal topology (curvature and depth) of the subject's vertices and that of the nearby *fsaverage* vertices. The process involves a number of steps, and anyone who is interested should follow up with the documentation and papers published by the [FreeSurfer group](https://surfer.nmr.mgh.harvard.edu/).
For practical purposes, it is not necessary to understand the details of this algorithm--FreeSurfer is a large complex collection of software that has been under development for decades. However, to better understand what is produced by FreeSurfer's alignment procedure, let us start by looking at its outputs.
---_____no_output_____### Compare Subject Registrations_____no_output_____---
To better understand the various spherical surfaces produced by FreeSurfer, let's start by plotting three spherical surfaces in 3D. The first will be the subject's "native" inflated spherical surface. The next will be the subject's "fsaverage"-aligned sphere. The last will be the *fsaverage* subject's native sphere.
These spheres are accessed not through the `subject.surfaces` dictionary but through the `subject.registrations` dictionary. This is simply a design decision--registrations and surfaces are not fundamentally different except that registrations can be used for interpolation between subjects (more below).
Note that you may need to zoom out once the plot has been made._____no_output_____
<code>
# Get the fsaverage subject.
fsaverage = ny.freesurfer_subject('fsaverage')
# Get the hemispheres we will be examining.
fsa_hemi = fsaverage.hemis[hemi]
sub_hemi = subject.hemis[hemi]
# Next, get the three registrations we want to plot.
sub_native_reg = sub_hemi.registrations['native']
sub_fsaverage_reg = sub_hemi.registrations['fsaverage']
fsa_native_reg = fsa_hemi.registrations['native']
# We want to plot them all three together in one scene, so to do this
# we need to translate two of them a bit along the x-axis.
sub_native_reg = sub_native_reg.translate([-225,0,0])
fsa_native_reg = fsa_native_reg.translate([ 225,0,0])
# Now plot them all.
fig = ipv.figure(width=900, height=300)
ny.cortex_plot(sub_native_reg, figure=fig)
ny.cortex_plot(fsa_native_reg, figure=fig)
ny.cortex_plot(sub_fsaverage_reg, figure=fig)
ipv.show()_____no_output_____
</code>
---_____no_output_____### Interpolate Between Subjects_____no_output_____---
Interpolation between subjects requires interpolating through a shared registration. For a subject and the *fsaverage*, this is the subject's *fsaverage*-aligned registration and *fsaverage*'s native registration. However, for two non-meta subjects, the *fsaverage*-aligned registrations of both subjects are used.
We will first show how to interpolate from a subject over to the **fsaverage**. This is a very valuable operation to be able to do, as it allows you to compute statistics of cortical surface data (such as BOLD activation data or source-localized MEG data) across subjects._____no_output_____
<code>
# The property we're going to interpolate over to fsaverage:
sub_prop = sub_hemi.prop('thickness')
# The method we use ('nearest' or 'linear'):
method = 'linear'
# Interpolate the subject's thickness onto the fsaverage surface.
fsa_prop = sub_hemi.interpolate(fsa_hemi, sub_prop, method=method)
# Let's make a plot of this:
ny.cortex_plot(fsa_hemi, surface='inflated',
color=fsa_prop, cmap='hot', vmin=0, vmax=6)_____no_output_____
</code>
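To illustrate why this is useful, here is a hedged sketch of how one might average a property across several subjects on the *fsaverage* surface; the subject IDs below are hypothetical placeholders, so this cell is illustrative rather than directly runnable in this environment.
<code>
# A sketch of a group average on fsaverage: interpolate each subject's
# thickness onto the fsaverage hemisphere, stack the maps, and average.
group_ids = ['sub-01', 'sub-02', 'sub-03']  # hypothetical subject IDs
group_props = []
for sid in group_ids:
    sub = ny.freesurfer_subject(sid)
    group_props.append(sub.hemis[hemi].interpolate(fsa_hemi, 'thickness'))
mean_thickness = np.mean(group_props, axis=0)
# The averaged map can then be plotted on fsaverage like any other property:
# ny.cortex_plot(fsa_hemi, surface='inflated', color=mean_thickness,
#                cmap='hot', vmin=0, vmax=6)
</code>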
Okay, for our last exercise, let's interpolate back from the *fsaverage* subject to our subject. It is occasionally nice to be able to plot the *fsaverage*'s average curvature map as an underlay, so let's do that._____no_output_____
<code>
# This time we are going to interpolate curvature from the fsaverage
# back to the subject. When the property we are interpolating is a
# named property of the hemisphere, we can actually just specify it
# by name in the interpolation call.
fsa_curv_on_sub = fsa_hemi.interpolate(sub_hemi, 'curvature')
# We can make a duplicate subject hemisphere with this new property
# so that it's easy to plot this curvature map.
sub_hemi_fsacurv = sub_hemi.with_prop(curvature=fsa_curv_on_sub)
# Great, let's see what this looks like:
ny.cortex_plot(sub_hemi_fsacurv, surface='inflated')_____no_output_____
</code>
| {
"repository": "bastivkl/nh2020-curriculum",
"path": "we-geometry-benson/class-notebook.ipynb",
"matched_keywords": [
"neuroscience"
],
"stars": 94,
"size": 39911,
"hexsha": "d06b7778ab40d51902c40cccf06704dcbf0b39bf",
"max_line_length": 973,
"avg_line_length": 35.8267504488,
"alphanum_fraction": 0.6171732104
} |
# Notebook from TechLabs-Dortmund/nutritional-value-determination
Path: webgrabber_wikilisten.ipynb
# Web grabber for lists from Wikipedia
_____no_output_____
<code>
# Pastry list
import requests
from bs4 import BeautifulSoup
# the list needs a last entry, because otherwise further lists below the intended one would also be read
def grab_list(url, last_item):  # when Wikipedia displays a table
grabbed_list = []
r = requests.get(url)
text = r.text
soup = BeautifulSoup(text, 'lxml')
soup.prettify()
matches = soup.find_all('tr')
for index, row in enumerate(matches):
try:
obj = row.find('td').a.get('title')
if obj.endswith(' (page does not exist)'): obj = obj.replace(' (page does not exist)', '')
grabbed_list.append(obj)
if obj == last_item:
break
except AttributeError:
continue
return grabbed_list
def grab_list2(url, last_item):  # when Wikipedia displays a bullet-point list
grabbed_list = []
r = requests.get(url)
text = r.text
soup = BeautifulSoup(text, 'lxml')
soup.prettify()
matches = soup.find_all('li')
for index, row in enumerate(matches):
try:
obj = row.a.get('title')
if obj.endswith(' (page does not exist)'): obj = obj.replace(' (page does not exist)', '')
grabbed_list.append(obj)
if obj == last_item:
break
except AttributeError:
continue
return grabbed_list_____no_output_____url_gebaeck = r'https://en.wikipedia.org/wiki/List_of_pastries'
gebaeckliste = grab_list(url_gebaeck, 'Zlebia')
print(gebaeckliste)['Alexandertorte', 'Alfajor', 'Aloo pie', 'Apple pie', 'Apple strudel', 'Bahulu', 'Bakewell pudding', 'Baklava', 'Bakpia Pathok', 'Banbury cake', 'Banitsa', 'Banket (food)', 'Bear claw', 'BeaverTails', 'Bedfordshire clanger', 'Belekoy', 'Belokranjska povitica', 'Berliner (doughnut)', 'Bethmännchen', 'Bichon au citron', 'Bierock', 'Birnbrot', 'Bizcocho', 'Börek', 'Bossche bol', 'Bougatsa', 'Boyoz', 'Bridie', 'Briouat', 'Bruttiboni', 'Buko pie', 'Bundevara', 'Canelé', 'Cannoli', 'Carac (pastry)', 'Charlotte (cake)', 'ChaSan', 'Chatti Pathiri', 'Cherry pie', 'Chorley cake', 'Chouquette', 'Choux pastry', 'Cinnamon Roll', 'Coca (pastry)', 'Conejito', 'Cornish pasty', 'Conversation (pastry)', 'Cornulețe', 'Coussin de Lyon', 'Cream horn', 'Crêpes Suzette', 'Crocetta of Caltanissetta', 'Croissant', 'Croline', 'Cronut', 'Croquembouche', 'Cuban pastelito (page does not exist)', 'Curry puff', 'Dabby-Doughs', 'Danish pastry', 'Djevrek', 'Dutch letter', 'Dutch Baby Pancake (page does not exist)', 'Eccles cake', 'Éclair (pastry)', 'Empanada', 'Ensaïmada', 'Fa gao', 'Fazuelos', 'Fig roll', 'Flaky pastry', 'Flaugnarde', 'Flaons', 'Flies graveyard', 'Franzbrötchen', 'Galette', 'Gâteau Basque', 'Shorgoghal', 'Gibanica', 'Gujia', 'Gözleme', 'Gulab jamun', 'Gundain', 'Gustavus Adolphus pastry', 'Gyeongju bread', 'Haddekuche', 'Hamantash', 'Hellimli', 'Heong Peng', 'Hot water crust pastry', 'Huff paste', 'Inipit', 'Jachnun', 'Jalebi', 'Jambon', 'Jesuite', 'Joulutorttu (pastry)', 'Kalács', 'Kanafeh', 'Karakudamono', 'Kifli', 'Klobasnek', 'Knieküchle', 'Knish', 'Kolache', 'Kolompeh', 'Kołacz', 'Komaj sehen', 'Kouign-amann', 'Krempita', 'Kringle', 'Kroštule', 'Kūčiukai', 'Kürtőskalács', "Ladies' navels", 'Lattice (pastry)', 'Leipziger Lerche', 'Linzer torte', 'Lotus seed bun', "Ma'amoul", 'Macaron', 'Makmur', 'Makroudh', 'Malsouka', 'Mandelkubb', 'Mantecadas', 'Marillenknödel', 'Marry girl cake', 'Masan (pastry)', 'Miguelitos', 'Milhoja', 'Milk-cream strudel', 'Mille-feuille', 'Mooncake', 'Moorkop', 'Muskazine', 'Nazook', "Nun's puffs", 'Nunt', 'Öçpoçmaq', 'Ox-tongue pastry', 'Pain au chocolat', 'Pain aux raisins', 'Palmier', 'Pannekoek', 'Pan dulce (sweet bread)', 'Panzarotti', 'Papanași', 'Paper wrapped cake', 'Paris–Brest', 'Paste (pasty)', 'Pastel (food)', 'Pastizz', 'Pastry heart', 'Pâté Chaud', 'Pecan pie', 'Filo', 'Pie', 'Pineapple bun', 'Pineapple cake', 'Pionono', 'Pithivier', 'Pizza', 'Plăcintă', 'Poffertjes', 'Pogača', 'Poppy seed roll', 'Pot pie', 'Prekmurska gibanica', 'Pretzel', 'Profiterole', 'Puff pastry', "Puits d'amour", 'Punsch-roll', 'Punschkrapfen', 'Qottab', 'Quesito', 'Rab cake', 'Remonce', 'Rhubarb tart', 'Roti john', 'Roti tissue', 'Roze koek', 'Rugelach', "Runeberg's torte", 'Rustico (pastry)', 'Sad cake', 'Samosa', 'Schaumrolle', 'Schnecken', 'Schneeballe', 'Schuxen', 'Semla', 'Sfenj', 'Sfințișori', 'Sfogliatelle', 'Shortcrust pastry', 'Sou (pastry)', 'Spanakopita', 'Streusel', 'Strudel', 'Stutenkerl', 'Sufganiyah', 'Suncake (Taiwan)', 'Sweetheart cake', 'Taiyaki', 'Toaster pastry', 'Torpil', 'Tortell', 'Tortita negra', 'Trdelník', 'Tu (cake)', 'Turnover (food)', 'Utap', 'Vatrushka', 'Vetkoek', 'Viennoiserie', 'Vol-au-vent', 'Welsh cake', 'Xuixo', 'Yurla (dish)', 'Zeeuwse bolus', 'Zlebia']
# German desserts
url_deutschedesserts = r'https://en.wikipedia.org/wiki/List_of_German_desserts'
germanpastrylist = grab_list(url_deutschedesserts, 'Zwetschgenkuchen')
print(germanpastrylist)['Aachener Printen', 'Bavarian cream', 'Berliner (doughnut)', 'Bethmännchen', 'Baumkuchen', 'Bratapfel', 'Bienenstich', 'Black Forest cake', 'Bremer Klaben', 'Brenntar', 'Buchteln', 'Buckwheat gateau', 'Carrot cake', 'Cheesecake', 'Dampfnudel', 'Dominostein', 'Donauwelle', 'Fasnacht (doughnut)', 'Frankfurter Kranz', 'Franzbrötchen', 'Gugelhupf', 'Germknödel', 'Garrapinyades', 'Götterspeise', 'Herrencreme', 'Kuchen', 'Lebkuchen', 'de:Linzer Auge', 'Makówki', 'Muskazine', 'Marzipan', 'Magenbrot', 'Nussecke (page does not exist)', 'Pfeffernüsse', 'Prinzregententorte', 'Rote Grütze', 'Rumtopf', 'Schneeball (pastry)', 'Schokokuss', 'Spaghettieis', 'Spekulatius', 'Springerle', 'Spritzgebäck', 'Spritzkuchen', 'Stollen', 'Streusel', 'Streuselkuchen', 'Tollatsch', 'Vanillekipferl', 'Welfenspeise', 'Wibele', 'Windbeutel', 'Zwetschgenkuchen']
# Dairy products
url_dairy = r'https://en.wikipedia.org/wiki/List_of_dairy_products'
dairyproductlist = grab_list(url_dairy, 'Yogurt')
print(dairyproductlist)['Aarts (food)', 'Acidophiline', 'Amasi', 'Ayran', 'Basundi', 'Bhuna khoya', 'Blaand', 'Black Kashk', 'Booza', 'Borhani', 'Buffalo curd', 'Bulgarian yogurt', 'Butter', 'Butterfat', 'Buttermilk', 'Buttermilk koldskål', 'Buttermilk', 'Cacık', 'Camel milk', 'Casein', 'Caudle', 'Chaas', 'Chal', 'Chalap', 'Chass', 'Cheese', 'Chocolate butter', 'Clabber (food)', 'Clotted cream', 'Condensed milk', 'Cottage cheese', 'Cream', 'Cream cheese', 'Crème anglaise', 'Crème fraîche', 'Cuajada', 'Curd', 'Curd snack', 'Custard', 'Dadiah', 'Daigo (dairy product)', 'Dondurma', "Donkey's milk", 'Dulce de Leche', 'Doogh', 'Evaporated milk', 'Eggnog', 'Filled milk', 'Filmjölk', 'Fromage frais', 'Fermented milk products', 'Frozen custard', 'Frozen yogurt', 'Gelato', 'Ghee', "Goat's milk", 'Gombe (dish)', 'Gomme (food)', 'Horse', 'Ice cream', 'Ice milk', 'Infant formula', 'Junket (dessert)', 'Junnu', 'Kalvdans', 'Kashk', 'Kaymak', 'Kefir', 'Khoa', 'Kulfi', 'Kumis', 'Lassi', 'Leben (milk product)', 'Malai', 'Malaiyo', 'Matzoon (yogurt)', 'Milk', 'Milk skin', 'Míša', 'Mitha Dahi', 'Mozzarella', 'Moose milk', 'Mursik', 'Paneer', 'Podmleč', 'Pomazánkové máslo', 'Powdered milk', 'Processed cheese', 'Pytia', 'Qatiq', 'Qimiq', 'Quark (dairy product)', 'Reindeer husbandry', 'Ryazhenka', 'Ricotta', 'Sarasson', 'Semifreddo', 'Sergem', 'Sheep milk', 'Shrikhand', 'Skorup', 'Skyr', 'Smen', 'Smetana (dairy product)', 'Snow cream', 'So (dairy product)', 'Soft serve', 'Sour cream', 'Soured milk', 'Spaghettieis', 'Strained yogurt', 'Súrmjólk', 'Sütlaç', 'Tarhana', 'Tuttis', 'Uunijuusto', 'Vaccenic acid', 'Varenets', 'Viili', 'Vla', 'Whey', 'Whey protein', 'Whipped cream', 'Yak butter', 'Pack yak', 'Yakult', 'Yayık ayranı', 'Ymer (dairy product)', 'Yogurt']
# Cheeses
url_cheese = r'https://en.wikipedia.org/wiki/List_of_cheeses'
cheeselist = grab_list(url_cheese, 'Rice cheese')
print(cheeselist)['Wagasi', 'Ethiopian cuisine', 'Caravane cheese', 'Chechil', 'Chhanabora', 'Byaslag', 'Chura kampo', 'Chura loenpa', 'Nguri', 'Rubing', 'Rushan cheese', 'Bandel cheese', 'Paneer', 'Chhena', 'Dahi Chhena', 'Kalari cheese', 'Kalimpong cheese', 'Dangke', 'Sakura cheese', 'Imsil', 'Byaslag', 'Flower of Rajya', 'Chhurpi', 'Kesong puti', 'Sirene', 'Kashkaval', 'Quark (dairy product)', 'Bergkäse', 'Lüneberg cheese', 'Sura Kees', 'Mondseer', 'Tyrolean grey cheese', 'Brussels cheese', 'Chimay Brewery', 'Herve cheese', 'Le Wavreumont', 'Limburger cheese', 'Maredsous cheese', 'Passendale cheese', 'Remoudou', 'Livno cheese', 'Herzegovina "squeaking" cheese', 'Trappista cheese', 'Vlašić cheese', 'Bosnian Smoked Cheese (Suhi Sir)', 'Cherni Vit (cheese)', 'Kashkaval', 'Sirene', 'Paški sir', 'Škripavac', 'Tounjski sir', 'Prgica', 'Dimsi', 'Akkawi', 'Anari cheese', 'Halloumi', 'Kefalotyri', 'Abertam cheese', 'Blaťácké zlato', 'Olomoucké syrečky', 'Hermelín', 'Danbo', 'Danish Blue', 'Esrom', 'Fynbo', 'Havarti', 'Maribo cheese', 'Molbo cheese', 'Saga (cheese)', 'Samsø cheese', 'Tybo', 'Vesterhavsost', 'Atleet', 'Eesti Juust', 'Kadaka juust', 'Aura cheese', 'Lappi cheese', 'Leipäjuusto', 'Oltermanni', 'Raejuusto', 'Sulguni', 'Anthotyros', 'Chloro (cheese)', 'Feta', 'Graviera', 'Kasseri', 'Kefalograviera', 'Kefalotyri', 'Kopanisti', 'Manouri', 'Metsovone', 'Myzithra', 'Tyrozouli', 'Xynomizithra', 'Xynotyro', 'Protected designation of origin', 'Liptauer', 'Orda (cheese)', 'Pálpusztai', 'Trappista cheese', 'Oázis', 'Balaton cheese', 'Karaván', 'Pannónia', 'Höfðingi', 'Šar cheese', 'Fried Camembert cheese', 'Jāņi cheese', 'Latvian cheese', 'Ġbejna', 'Cașcaval', 'Urdă', 'Brânză', 'Brânză de vaci (cow cheese)', 'Kolašinski sir', 'Pljevaljski sir', 'Podgorički sir', 'Nikšićki kozji sir', 'Njeguški sir', 'Kashkaval', 'Urdă', 'Belo Sirenje', 'Brunost', 'Gamalost', 'Geitost', 'Heidal cheese', 'Jarlsberg cheese', 'Nøkkelost', 'Norvegia', 'Pultost', 'Snøfrisk', 'Castelo Branco cheese', 'Queijo de Nisa', 'Queijo do Pico', 'Queijo de Azeitão', 'São Jorge cheese', 'Serra da Estrela cheese', 'Requeijão', 'Saloio', 'Santarém cheese', 'Brânzǎ de burduf', 'ro:Brânză de Suhaia', 'Brânză de vaci', 'Caș', 'Cașcaval', 'Năsal cheese', 'Telemea', 'Urdă', 'Bryndza', 'Circassian cheese', 'Korall', 'Quark (cheese)', 'Caciocavallo', 'Pule cheese', 'Bryndza', 'Liptauer', 'Ovčia hrudka', 'Kravská hrudka', 'Korbáčiky', 'Oštiepok', 'Parenica', 'Urda cheese', 'Quark (cheese)', 'Brie', 'Camembert', 'Mohant', 'Tolminc cheese', 'Ädelost', 'Blå Gotland', 'Grevé', 'Gräddost', 'Herrgårdsost', 'Hushållsost', 'Moose cheese', 'Prästost', 'Svecia', 'Västerbottensost', 'Bilozhar', 'Bukovinskyi', 'Bryndza', 'Dobrodar', 'Smetankowyi', 'Quark (cheese)', 'Ukraїnskyi', 'Vurda', 'Banbury cheese', 'Cheddar cheese', 'Stilton cheese', 'Stinking Bishop cheese', 'Areesh cheese', 'Baramily', 'Domiati', 'Halumi', 'Istanboly', 'Mish', 'Rumi cheese', 'Lighvan cheese', 'Tzfat cheese', 'Tzfat cheese', 'Labneh', 'Kashkaval', 'Qishta', 'Halloumi', 'Akkawi', 'Areesh cheese', 'Baladi cheese', 'Basket cheese', 'Jameed', 'Jibneh Arabieh', 'Kashkaval', 'Qishta', 'Labneh', 'Syrian cheese', 'Nabulsi cheese', 'Surke', 'Syrian cheese', 'Antep peyniri', 'Armola peyniri', 'Beyaz peynir', 'Chechil', 'Çökelek', 'Çömlek cheese', 'String cheese', 'Ezine peyniri', 'Füme çerkes peyniri', 'Halloumi', 'Kars gravyer cheese', 'Kaşar', 'Kopanisti peyniri', 'Curd', 'Mihaliç Peyniri', 'Strained yogurt', 'Telli peynir', 'Tulum (cheese)', 'Van otlu peyniri', 'Bleu Bénédictin', 
'Cheddar cheese', 'Cheese curds', 'Oka cheese', 'Pikauba (cheese)', 'Turrialba cheese', 'Cuajada', 'Crema (cheese)', 'Crema (cheese)', 'Cuajada', 'Quesillo', 'Queijo seco', 'Adobera cheese', 'Añejo cheese', 'Asadero cheese', 'Chiapas cheese', 'Cotija cheese', 'Criollo cheese', 'Lingallin (cheese)', 'Oaxaca cheese', 'Crema Mexicana', 'Chihuahua cheese', 'Queso de cuajo', 'Queso Fresco', 'Queso Panela', 'Quesillo', 'Bergenost', 'Brick cheese', 'Cheese curds', 'Colby cheese', 'Colby-Jack cheese', 'Colorado Blackie', 'Cream cheese', 'Creole cream cheese', 'Cup Cheese', 'Farmer cheese', 'Hoop cheese', 'Humboldt Fog', 'Liederkranz cheese', 'Monterey Jack', 'Muenster cheese', 'Nacho cheese', 'Pepper jack cheese', 'Pinconning cheese', 'Provel cheese', 'Red Hawk cheese', 'String cheese', 'Teleme cheese', 'Cremoso cheese', 'Criollo cheese (Argentina)', 'Goya cheese', 'Reggianito', 'Sardo cheese', 'Chubut cheese', 'Tandil cheese', 'Mar del Plata cheese', 'Chaqueño', 'Menonita (cheese)', 'Catupiry', 'Minas cheese', 'Queijo coalho', 'Colony cheese', 'Queijo Meia Cura', 'Canastra cheese', 'Queijo Cobocó', 'Queijo-do-Reino', 'Queijo do Serro', 'Queijo Manteiga', 'Queijo prato', 'Requeijão', 'Chanco cheese', 'Panquehue (cheese)', 'Renaico (cheese)', 'Queso Campesino', 'Queso costeño', 'Cuajada', 'Queso Paipa', 'Queso Pera', 'Quesillo', 'Guayanés cheese', 'Queso crineja', 'Queso de mano', 'Queso Llanero', 'Queso Palmita', 'Queso Parma de Barinitas', 'Queso telita', 'Cottage cheese', 'Farmer cheese', 'Port wine cheese', 'Smoked cheese', 'Soy cheese', 'Rice cheese']
url_fruit = r'https://en.wikipedia.org/wiki/List_of_culinary_fruits'
fruits = grab_list(url_fruit, 'Yantok')
print(fruits)['Malus pumila', 'Pseudocydonia sinensis', 'Aronia melanocarpa', 'Planchonia careya', 'Crataegus aestivalis', 'Crataegus rhipidophylla', 'Genipa americana', 'Eriobotrya japonica', 'Flacourtia inermis', 'Mespilus germanica', 'Malus niedzwetzkyana', 'Pyrus communis', 'Cydonia oblonga', 'Flacourtia indica', 'Sorbus aucuparia', 'Manilkara zapota', 'Amelanchier alnifolia', 'Pyracantha coccinea', 'Shipova', 'Sorbus domestica', 'Malus angustifolia', 'Heteromeles arbutifolia', 'Euterpe oleracea', 'Malpighia emarginata', 'Irvingia gabonensis', 'Garcinia livingstonei', 'Elaeis guineensis', 'Cornus × unalaschkensis', 'Pourouma cecropiifolia', 'Spondias dulcis', 'Elaeis oleifera', 'Prunus americana', 'Prunus armeniaca', 'Mangifera pajang', 'Prunus maritima', 'Antidesma bunius', 'Mangifera caesia', 'Prunus serotina', 'Euclea crispa', 'Parajubaea torallyi', 'Syzygium australe', 'Pleiogynium timoriense', 'Dacryodes edulis', 'Calamus erectus', 'Calligonum junceum', 'Cornus canadensis', 'Casimiroa edulis', 'Eugenia reinwardtiana', 'Byrsonima crassifolia', 'Prunus avium', 'Elaeagnus multiflora', 'Eugenia involucrata', 'Ziziphus mauritiana', 'Prunus virginiana', 'Cassytha melantha', 'Chrysobalanus icaco', 'Cocos nucifera', 'Cornus mas', 'Terminalia catappa', 'Prunus rivularis', 'Empetrum nigrum', 'Murraya koenigii', 'Prunus domestica subsp. insititia', 'Phoenix dactylifera', 'Santalum acuminatum', 'Phyllanthus emblica', 'Owenia acidula', 'Litsea garciae', 'Syzygium fibrosum', 'Prunus umbellata', 'Gomortega keule', 'Greengage', 'Buchanania obovata', 'Myrciaria floribunda', 'Terminalia ferdinandiana', 'Celtis occidentalis', 'Nephelium xerospermoides', 'Syzygium cumini', 'Elaeagnus umbellata', 'Butia capitata', 'Spondias purpurea', 'Ziziphus jujuba', 'Prunus salicina spp.', 'King coconut', 'Nephelium hypoleucum', 'Acronychia acidula', 'Buchanania arborescens', 'Dimocarpus longan', 'Litchi chinensis', 'Syzygium malaccense', 'Pouteria sapota', 'Mangifera indica', 'Bouea macrophylla', 'Sclerocarya birrea', 'Synsepalum dulcificum', 'Mauritia flexuosa', 'Kunzea pomifera', 'Viburnum lentago', 'Peach', 'Azadirachta indica', 'Choerospondias axillaris', 'Myristica fragrans', 'Phyllanthus acidus', 'Prunus persica', 'Bunchosia glandulifera', 'Caryocar brasiliense', 'Grewia asiatica', 'Coccoloba diversifolia', 'Canarium ovatum', 'Eugenia uniflora', 'Prunus domestica', 'Nephelium mutabile', 'Nephelium lappaceum', 'Syzygium suborbiculare', 'Syzygium luehmannii', 'Sageretia theezans', 'Sansapote', 'Savannah cherry', 'Serenoa repens', 'Lodoicea maldivica', 'Coccoloba uvifera', 'Ardisia elliptica', 'Shepherdia argentea', 'Prunus spinosa', 'Mimusops elengi', 'Melicoccus bijugatus', 'Dialium indum', 'Dialium cochinchinense', 'Dialium guineense', 'Syzygium aqueum', 'Syzygium samarangense', 'Acronychia oblongifolia', 'Terminalia carpentariae', 'Manilkara kauki', 'Myrica rubra', 'Spondias mombin', 'Ximenia americana', 'Zwetschge', 'Pouteria caimito', 'Sambucus canadensis', 'Diospyros virginiana', 'Sambucus pubens', 'Billardiera scandens', 'Coffea arabica', 'Eugenia stipitata', 'Vasconcellea × heilbornii', 'Musa acuminata', 'Berberis vulgaris', 'Arctostaphylos uva-ursi', 'Carissa carandas', 'Vaccinium myrtillus', 'Myrtillocactus geometrizans', 'Ribes nigrum', 'Diospyros nigra', 'Vaccinium corymbosum', 'Eupomatia laurina', 'Eugenia brasiliensis', 'Psidium guineense', 'Stelechocarpus burahol', 'Chrysophyllum cainito', 'Muntingia calabura', 'Myrciaria dubia', 'Pouteria campechiana', 'Physalis peruviana', 'Pachycereus 
pringlei', 'Psidium cattleyanum', 'Dovyalis hebecarpa', 'Ugni molinae', 'Carissa spinarum', 'Psidium friedrichsthalianum', 'Vaccinium macrocarpon', 'Berberis darwinii', 'Diospyros lotus', 'Davidsonia jerseyana', 'Hylocereus undatus', 'Sambucus nigra', 'Feijoa sellowiana', 'Vitis labrusca', 'Passiflora quadrangularis', 'Glenniea philippinensis', 'Actinidia chinensis', 'Ribes uva-crispa', 'Vitis vinifera', 'Psidium guajava', 'Actinidia arguta', 'Lonicera caerulea', 'Lonicera periclymenum', 'Vaccinium ovatum', 'Plinia cauliflora', 'Dovyalis caffra', 'Actinidia deliciosa', 'Lansium parasiticum', 'Vaccinium vitis-idaea', 'Pouteria lucuma', 'Syzygium jambos', 'Mammea americana', 'Hancornia speciosa', 'Aristotelia chilensis', 'Podophyllum peltatum', 'Passiflora incarnata', 'Austromyrtus dulcis', 'Vaccinium floribundum', 'Vitis rotundifolia', 'Solanum quitoense', 'Acrotriche depressa', 'Davidsonia pruriens', 'Mahonia aquifolium', 'Carica papaya', 'Passiflora alata', 'Passiflora platyloba', 'Passiflora edulis', 'Pentadiplandra brazzeana', 'Solanum muricatum', 'Diospyros kaki', 'Cereus repandus', 'Eugenia luschnathiana', 'Punica granatum', 'Opuntia ficus-indica', 'Billardiera longiflora', 'Psidium rufum', 'Ribes rubrum', 'Vaccinium parvifolium', 'Flacourtia rukam', 'Carnegiea gigantea', 'Gaultheria shallon', 'Hippophae rhamnoides', 'Small-leaved fuchsia', 'Archirhodomyrtus beckleri', 'Davidsonia johnsonii', 'Quararibea cordata', 'Averrhoa carambola', 'Arbutus unedo', 'Billardiera cymosa', 'Passiflora ligularis', 'Solanum betaceum', 'Diospyros texana', 'Diospyros blancoi', 'Clausena lansium', 'Capparis mitchellii', 'Lycium barbarum', 'Passiflora edulis f flavicarpa', 'Aegle marmelos', 'Bailan melon', 'Banana melon', 'Canary melon', 'Cucumis prophetarum', 'Sicana odorifera', 'Crane melon', 'Crenshaw melon', 'Borassus flabellifer', 'Cantaloupe', 'Gaya melon', 'Honeydew melon', 'Cucumis metuliferus', 'Hydnora abyssinica', 'Crescentia cujete', 'Kajari melon', 'Kolkhoznitsa melon', 'Cucumis melo var. makuwa', 'Mirza melon', 'Cucumis melo', 'Strychnos spinosa', 'Cantaloupe', 'Santa Claus melon', 'Sprite melon', 'Tigger melon', 'Citrullus lanatus', 'Limonia acidissima', 'Citropsis articulata', 'Citrus × natsudaidai', 'Citrus medica ssp. bajoura', 'Citrus bergamia', 'Citrus × aurantium', 'Blood lime', 'Blood orange', 'Citrus medica var. 
sarcodactylus', '× Citrofortunella microcarpa', 'Cam sành', 'Kumquat', 'Citrus medica', 'Citrus × clementina', 'Citrus glauca', 'Etrog', 'Citrus australasica', 'Citrus × limonimedica', 'Citrus × paradisi', 'Haruka (citrus)', 'Hyuganatsu', 'Citrus cavaleriei', 'Citrus × iyo', 'Kumquat', 'Citrus × sphaerocarpa', 'Citrus hystrix', 'Kanpei', 'Kawachi Bankan', 'Citrus ×aurantiifolia', 'Kinkoji unshiu', 'Kinnow', 'Kiyomi', 'Kobayashi mikan', 'Koji orange', 'Kuchinotsu No.37', 'Citrus japonica', 'Citrus limon', 'Citrus × latifolia', 'Triphasia trifolia', 'Limequat', 'Citrus reticulata', 'Citrus mangshanensis', 'Melogold', 'Citrus × meyeri', 'Citrus myrtifolia', 'Ōgonkan', 'Orange (fruit)', 'Oroblanco', 'Kumquat', 'Citrus maxima', 'Pompia', 'Ponderosa lemon', 'Citrus × limonia', 'Citrus australis', 'Citrus unshiu', 'Shangjuan', 'Shonan gold', 'Citrus sudachi', 'Citrus limetta', 'Citrus × depressa', 'Citrus × tangelo', 'Citrus tangerina', 'Citrus reticulata x sinensis', 'Ugli fruit', 'Volkamer lemon', 'Citrus junos', 'Annona senegalensis', 'Rubus strigosus', 'Annona conica', 'Atemoya', 'Rubus probus', 'Rollinia deliciosa', 'Morus nigra', 'Rubus allegheniensis', 'Boysenberry', 'Annona scleroderma', 'Annona cherimola', 'Rubus chamaemorus', 'Rubus hayata-koidzumii', 'Maclura tricuspidata', 'Annona reticulata', 'Rubus flagellaris', 'Dillenia indica', 'Grewia retusifolia', 'Annona diversifolia', 'Rubus parvifolius', 'Rubus × loganobaccus', 'Annona crassiflora', 'Potentilla indica', 'Rubus moluccanus', 'Rubus adenotrichos', 'Rubus glaucus', 'Annona montana', 'Annona glabra', 'Morus rubra', 'Rosa rugosa', 'Rubus rosifolius', 'Rubus spectabilis', 'Annona purpurea', 'Annona muricata', 'Fragaria × ananassa', 'Annona squamosa', 'Tayberry', 'Rubus parviflorus', 'Rubus leucodermis', 'Morus alba', 'Fragaria vesca', 'Rubus phoenicolasius', 'Youngberry', 'Artocarpus altilis', 'Artocarpus camansi', 'Artocarpus integer', 'Ficus racemosa', 'Ficus platypoda', 'Duguetia confinis', 'Duguetia spixiana', 'Ficus carrii', 'Ficus carica', 'Pandanus tectorius', 'Artocarpus heterophyllus', 'Artocarpus parvus', 'Artocarpus lacucha', 'Artocarpus rigidus', 'Monstera deliciosa', 'Morinda citrifolia', 'Ananas comosus', 'Pandanus conoideus', 'Ficus coronata', 'Ficus aurea', 'Ficus sycomorus', 'Artocarpus odoratissimus', 'Ficus virens', 'Artocarpus hirsutus', 'Garcinia humilis', 'Blighia sapida', 'Aglaia teysmanniana', 'Garcinia atroviridis', 'Eleiodoxa conferta', 'Platonia insignis', 'Bemange', 'Pouteria australis', 'Melastoma affine', 'Boquila trifoliolata', 'Baccaurea ramiflora', 'Garcinia prainiana', 'Theobroma cacao', 'Garcinia madruno', 'Gaultheria hispida', 'Hymenaea courbaril', 'Theobroma grandiflorum', 'Durio zibethinus', 'Gaultheria procumbens', 'Momordica cochinchinensis', 'Garcinia morella', 'Garcinia gummi-gutta', 'Garcinia forbesii', 'Garcinia magnifolia', 'Garcinia pseudoguttifera', 'Pangium edule', 'Garcinia indica', 'Cola nitida', 'Garcinia parvifolia', 'Lardizabala biternata', 'Siraitia grosvenorii', 'Garcinia mangostana', 'Baccaurea racemosa', 'Garcinia dulcis', 'Asimina triloba', 'Red salak', 'Salacca zalacca', 'Sandoricum koetjape', 'Diploglottis campbellii', 'Vangueria madagascariensis', 'Trichosanthes beccariana', 'Vanilla planifolia', 'Yantok']
url_vegetables = r'https://en.wikipedia.org/wiki/List_of_vegetables'
vegetables = grab_list(url_vegetables, 'Wakame')
print(vegetables)['Amaranth', 'Xanthosoma sagittifolium', 'Centella asiatica', 'Arugula', 'Rubus pectinellus', 'Beet', 'Christella dentata', 'Chinese cabbage', 'Borage', 'Broccoli', 'Brooklime', 'Brussels sprout', 'Cabbage', 'Caraway', 'Hypochaeris radicata', 'Celery', 'Celtuce', 'Chaya (plant)', 'Chili pepper', 'Stellaria', 'Chicory', 'Chinese mallow', 'Collard greens', 'Common purslane', 'Corn salad', 'Garden cress', 'Cucumis prophetarum', 'Garland Chrysanthemum', 'Aegopodium podagraria', 'Dandelion', 'Dill', 'Endive', 'Chenopodium album', 'Fiddlehead', 'Telfairia occidentalis', 'Gnetum gnemon', 'Golden samphire', 'Good King Henry', 'Grape', 'Plantago major', 'Corchorus olitorius', 'Kai-lan', 'Kale', 'Kalette', 'Pringlea', 'Komatsuna', 'Adansonia', 'Talinum fruticosum', 'Corn salad', "Lamb's quarters", 'Land cress', 'Leaf celery', 'Lettuce', 'Houttuynia cordata', 'Basella alba', 'Malvaceae', 'Moringa oleifera', "Miner's lettuce", 'Mizuna greens', 'Sinapis alba', 'Napa cabbage', 'Tetragonia', 'Atriplex', 'Chinese cabbage', 'Papaya', 'Paracress', 'Pea', 'Cycas riuminiana', 'Phytolacca americana', 'Bauhinia purpurea', 'Radicchio', 'Rapini', 'Amaranthus dubius', 'Rock samphire', 'Osmunda regalis', 'Sculpit', 'Sea beet', 'Sea kale', 'Capsella bursa-pastoris', 'Crassocephalum', 'Celosia argentea', 'Sorrel', 'Sour cabbage', 'Spinach', 'Amaranthus spinosus', 'Portulaca oleracea', 'Abelmoschus manihot', 'Sweet potato', 'Chard', 'Xanthosoma brasiliense', 'Taro', 'Tatsoi', 'Turnip', 'Diplazium esculentum', 'Sesbania grandiflora', 'Viagra palm', 'Watercress', 'Ipomoea aquatica', 'Wheatgrass', 'Achillea millefolium', 'Rapeseed', 'Artocarpus blancoi', 'Armenian cucumber', 'Breadfruit', 'Artocarpus camansi', 'Momordica charantia', 'Cyclanthera pedata', 'Calabash', 'Chayote', 'Cooking banana', 'Durian', 'Gac', 'Anacolosa frutescens', 'Melothria scabra', 'Cucumber', 'Cucumis prophetarum', 'Eggplant', 'Coccinia grandis', 'Jackfruit', 'Cucumis metuliferus', 'Telosma procumbens', 'Luffa', 'Artocarpus mariannensis', 'Olive', 'Papaya', 'Pumpkin', 'Trichosanthes dioica', 'Luffa acutangula', 'Trichosanthes cucumerina', 'Momordica dioica', 'Squash (plant)', 'Tinda', 'Artocarpus treculianus', 'Tomatillo', 'Tomato', 'Vanilla', 'Cucumis anguria', 'Water melon', 'Winter melon', 'Zucchini', 'Bell pepper', 'Big Jim pepper', 'Cayenne pepper', 'Friggitello', 'Habanero', 'Hungarian wax pepper', 'Jalapeño', 'New Mexico chile', 'Peperoncino', 'Pimiento', 'Sandia pepper', 'Siling haba', 'Artichoke', 'Banana flower', 'Clitoria ternatea', 'Broccoli', 'Broccolini', 'Calabaza', 'Caper', 'Cauliflower', 'Telosma procumbens', 'Broussonetia luzonica', 'Pumpkin', 'Bauhinia purpurea', 'Daylily', 'Strongylodon macrobotrys', 'Loroco', 'Sesbania grandiflora', 'Zucchini', 'Zucchini', 'Apios americana', 'Asparagus bean', 'Azuki bean', 'Black-eyed pea', 'Clitoria ternatea', 'Chickpea', 'Common bean', 'Drumstick (vegetable)', 'Hyacinth Bean', 'Vicia faba', 'Chickpea', 'Green bean', 'Guar', 'Horse gram', 'Lablab purpureus', 'Lathyrus sativus', 'Lentil', 'Phaseolus lunatus', 'Moth bean', 'Mung bean', 'Okra', 'Pea', 'Peanut', 'Pigeon pea', 'Ricebean', 'Runner bean', 'Snap pea', 'Snow pea', 'Soybean', 'Lupinus mutabilis', 'Tepary bean', 'Urad (bean)', 'Mucuna pruriens', 'Winged bean', 'Asparagus', 'Banana pith', 'Cardoon', 'Celeriac', 'Celery', 'Chives', 'Elephant garlic', 'Fennel', 'Garlic', 'Allium tuberosum', 'Heart of palm', 'Kohlrabi', 'Kurrat', 'Landang', 'Lemongrass', 'Leek', 'Nelumbo nucifera', 'Nopal', 'Onion', 'Pearl onion', 'Potato 
onion', 'Ornithogalum pyrenaicum', 'Sago', 'Scallion', 'Salicornia', 'Shallot', 'Tree onion', 'Welsh onion', 'Allium tricoccum', 'Zizania latifolia', 'Pachyrhizus', 'Arracacha', 'Arrowleaf elephant ear', 'Bamboo shoot', 'Beet', 'Burdock', 'Broadleaf arrowhead', 'Camassia', 'Canna (plant)', 'Carrot', 'Zingiber cassumunar', 'Cassava', 'Chinese artichoke', 'Chinese ginger', 'Daikon', 'Lathyrus tuberosus', 'Amorphophallus paeoniifolius', 'Ensete', 'Giant swamp taro', 'Giant taro', 'Ginger', 'Parsley', 'Horseradish', 'Jerusalem artichoke', 'Jícama', 'Kaempferia galanga', 'Lengkuas', 'Alpinia officinarum', 'Mashua', 'Palmyra sprout', 'Parsnip', 'Conopodium majus', 'Tacca leontopetaloides', 'Potato', 'Psoralea esculenta', 'Radish', 'Rutabaga', 'Purple Salsify', 'Black salsify', 'Skirret', 'Rutabaga', 'Sweet potato', 'Taro', 'Ti (plant)', 'Cyperus esculentus', 'Turmeric', 'Turnip', 'Dioscorea alata', 'Ulluco', 'Wasabi', 'Water caltrop', 'Eleocharis dulcis', 'Yacón', 'Yam (vegetable)', 'Xanthosoma caracu', 'Aonori', 'Arame', 'Carola (sea vegetable)', 'Alaria esculenta', 'Palmaria palmata', 'Zostera', 'Gusô', 'Hijiki', 'Kombu', 'Laver (seaweed)', 'Mozuku', 'Nori', 'Ogonori', 'Caulerpa lentillifera', 'Sea lettuce', 'Wakame']
url_seafood = r'https://en.wikipedia.org/wiki/List_of_types_of_seafood'
seafood = grab_list2(url_seafood, 'Nautilus')
print(seafood)['Anchovies', 'Barracuda', 'Basa fish', 'Bass (fish)', 'Anoplopoma fimbria', 'Pufferfish', 'Bluefish', 'Bombay duck', 'Bream', 'Brill (fish)', 'Butter fish', 'Catfish', 'Cod', 'Squaliformes', 'Dorade', 'Eel', 'Flounder', 'Grouper', 'Haddock', 'Hake', 'Halibut', 'Herring', 'Ilish', 'John Dory', 'Lamprey', 'Lingcod', 'Mackerel', 'Mahi Mahi', 'Monkfish', 'Mullet (fish)', 'Orange roughy', 'Parrotfish', 'Patagonian toothfish', 'Perch', 'Pike (fish)', 'Pilchard', 'Pollock', 'Pomfret', 'Pompano', 'Sablefish', 'Salmon', 'Sanddab', 'Sardine', 'Bass (fish)', 'Shad', 'Shark', 'Skate (fish)', 'Smelt (fish)', 'Snakehead (fish)', 'Lutjanidae', 'Sole (fish)', 'Sprat', 'Sturgeon', 'Surimi', 'Swordfish', 'Tilapia', 'Tilefish', 'Trout', 'Tuna', 'Turbot', 'Wahoo', 'Coregonus', 'Whiting (fish)', 'Witch (righteye flounder)', 'Purified Water', 'Caviar', 'Ikura', 'Kazunoko', 'Cyclopterus lumpus', 'Masago', 'Shad', 'Tobiko', 'Crab', 'Crayfish', 'Lobster', 'Shrimp', 'Cockle (bivalve)', 'Cuttlefish', 'Clam', 'Concholepas concholepas', 'Mussel', 'Octopus', 'Oyster', 'Common periwinkle', 'Scallop', 'Squid', 'Conch', 'Nautilus']
url_seafood = r'https://en.wikipedia.org/wiki/List_of_seafood_dishes'
seafood = grab_list2(url_seafood, 'Cuttlefish')
print(seafood)['Baik kut kyee kaik', 'Balchão', 'Bánh canh', 'Bisque (food)', 'Bún mắm', 'Bún riêu', 'Chowder', 'Cioppino', 'Crawfish pie', 'Curanto', 'Fideuà', 'Halabos', 'Hoe (dish)', 'Hoedeopbap', 'Kaeng som', 'Kedgeree', 'Maeuntang', 'Moules-frites', 'Namasu', 'New England clam bake', 'Paella', 'Paelya', 'Paila marina', 'Piaparan', 'Plateau de fruits de mer', 'Seafood basket', 'Seafood birdsnest', 'Seafood boil', 'Seafood cocktail', 'Seafood pizza', 'Stroganina', 'Sundubu jjigae', 'Surf and turf', 'Tinumok', 'Clam cake', 'Clam chowder', 'Clams casino', 'Clams oreganata', 'Fabes con almejas', 'Fried clams', 'Jaecheopguk', 'New England clam bake', 'Steamed clams', 'Stuffed clam', 'Crab puff', 'Fish heads', "'Ota 'ika", 'Ginataang sugpo', 'Bisque (food)', 'Lobster Newberg', 'Lobster roll', 'Lobster stew', 'Scampi', 'Miruhulee boava', 'Nakji-bokkeum', 'Nakji-yeonpo-tang', 'Polbo á feira', 'Pulpo a la campechana', 'Akashiyaki', 'San-nakji', 'Takoyaki', 'Takomeshi', 'Angels on horseback', 'Hangtown fry', 'Oyster omelette', 'Oyster sauce', 'Oyster vermicelli', 'Oysters Bienville', 'Oysters en brochette', 'Oysters Kirkpatrick', 'Oysters Rockefeller', 'Steak and oyster pie', 'Balao-balao', 'Biyaring', 'Bobó de camarão', 'Bún mắm', 'Camaron rebosado', 'Chakkoli', 'Chạo tôm', 'Coconut shrimp', 'Drunken shrimp', 'Ebi chili', 'Fried prawn', 'Ginataang hipon', 'Ginataang kalabasa', 'Halabos na hipon', 'Har gow', 'Nilasing na hipon', 'Okoy', 'Pininyahang hipon', 'Potted shrimps', 'Prawn cracker', 'Prawn cocktail', 'Shrimp ball', 'Shrimp DeJonghe', 'White boiled shrimp', 'Adobong pusit', 'Arròs negre', 'Dried shredded squid', 'Squid as food', 'Gising-gising', 'Ikameshi', 'Orange cuttlefish', 'Paella negra', 'Pancit choca', 'Squid cocktail', 'Cuttlefish']
</code>
| {
"repository": "TechLabs-Dortmund/nutritional-value-determination",
"path": "webgrabber_wikilisten.ipynb",
"matched_keywords": [
"Salmon"
],
"stars": null,
"size": 33847,
"hexsha": "d06cd4e3cb484b7a346000fdf987c6911105b0d3",
"max_line_length": 9188,
"avg_line_length": 136.4798387097,
"alphanum_fraction": 0.6750376695
} |
# Notebook from biosustain/p-thermo
Path: notebooks/28. Resolve issue 37-Reaction reversibility.ipynb
# Introduction
Now that I have removed the RNA/DNA node and we have fixed many pathways, I will revisit the points that were raised in issue #37: 'Reaction reversibility'. There were reactions that we couldn't reverse or remove without killing the biomass. I will check whether these problems have been resolved now. If not, I will dig into the underlying cause in a manner similar to what was done in notebook 20. _____no_output_____
<code>
import cameo
import pandas as pd
import cobra.io
import escher
from escher import Builder
from cobra import Reaction_____no_output_____model = cobra.io.read_sbml_model('../model/p-thermo.xml')_____no_output_____model_e_coli = cameo.load_model('iML1515')_____no_output_____model_b_sub = cameo.load_model('iYO844')_____no_output_____
</code>
__ALDD2x__
should be irreversible, but making it so currently kills the biomass growth completely. It needs to be changed because right now we have an erroneous energy-generating cycle running aad_c --> ac_c (+atp) --> acald --> accoa_c --> aad_c (a sketch of how such a cycle can be detected is added after this list).
Apparently, I had already unconsciously fixed this problem in notebook 20, so this is fine now. _____no_output_______GLYO1__ This reaction has already been removed in notebook 20 to fix the glycine pathway.
__DHORDfum__ Has been renamed to DHORD6 in notebook 20 in the first check of fixing dCMP, and its reversibility has been fixed too.
__OMPDC__ This has by chance also been fixed in notebook 20 in the first pass to fix dCMP biosynthesis._____no_output_______NADK__ The reaction is currently reversible, but should be irreversible, producing nadp and adp.
Still, when I try to fix the flux in the direction it should go, it kills the biomass production. I will try to figure out why; it likely has to do with cofactor balance._____no_output_____
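As an aside (this sketch is my own addition and was not part of the original notebook), an energy-generating cycle like the one described under ALDD2x above can be detected by closing every boundary reaction and checking whether the model can still produce ATP from nothing. The reaction id 'ATPM' used below is an assumption; if this model's ATP hydrolysis/maintenance reaction has a different id, that one would have to be used instead.
<code>
# Sketch: with all boundary reactions closed, a positive ATP hydrolysis flux
# can only come from an internal energy-generating cycle.
with model:
    for rxn in model.boundary:
        rxn.lower_bound = 0  # nothing can enter the system
    model.objective = 'ATPM'  # assumed id of the ATP maintenance/hydrolysis reaction
    print(model.optimize().objective_value)
</code>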
<code>
model.reactions.NADK.bounds = (0,1000)_____no_output_____model.reactions.ALAD_L.bounds = (-1000,0)_____no_output_____model.optimize().objective_value_____no_output_____cofactors = ['nad_c', 'nadh_c','', '', '', '']
with model:
# model.add_boundary(model.metabolites.glc__D_c, type = 'sink', reaction_id = 'test')
# model.add_boundary(model.metabolites.r5p_c , type = 'sink', reaction_id = 'test2')
# model.add_boundary(model.metabolites.hco3_c, type = 'sink', reaction_id = 'test3')
for met in model.reactions.biomass.metabolites:
if met.id in cofactors:
coeff = model.reactions.biomass.metabolites[met]
model.reactions.biomass.add_metabolites({met:-coeff})
else:
continue
solution = model.optimize()
#print (model.metabolites.glu__D_c.summary())
#print ('test flux:', solution['test'])
#print ('test2 flux:', solution['test2'])
print (solution.objective_value)1.8496633304871162
</code>
It seems that NAD and NADH are the blocked metabolites for biomass generation. Now let's try to figure out where this problem lies.
I think the problem lies in regenerating NAD. The model uses this reaction together with other strange reactions to regenerate NAD, whereas normally, under oxygen-containing conditions, I would expect respiration to do this. So let me see how the B. subtilis and E. coli models do this and check whether some form of ETC is missing from our model. This would explain why adding the ATP synthase didn't influence our biomass prediction at all.
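One quick way to see where NAD is currently being regenerated (a small sketch added here, not part of the original notebook) is the same summary call the notebook uses elsewhere for pi_c:
<code>
# Sketch: list the reactions that produce and consume NAD in the current FBA solution.
model.metabolites.nad_c.summary()
</code>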
__Flavin reductase__
In E. coli we observed that there is a flavin reductase in the genome, contributing to NADH regeneration. We've looked into the genome annotation for our strain and have found that a flavin reductase is annotated there as well (https://www.genome.jp/dbget-bin/www_bget?ptl:AOT13_02085), but not in B. subtilis (consistent with its model). Therefore, I will add this reaction to our model as FADRx.
__NADH dehydrogenase__
The NADH dehydrogenase, transferring reducing equivalents from NADH to quinone, is the first part of the electron transport chain. The quinones can then transfer the electrons onward, pumping out protons, which allows the ATP synthase to generate additional energy. In iML1515 this reaction is captured by NADH16pp, NADH17pp and NADH18pp. In B. subtilis NADH4 reflects this reaction. In our model, we don't currently have anything that resembles this reaction. However, in Beata's thesis (and the genome) we can find EC 1.6.5.3, which performs a reaction similar to NADH16pp. Therefore, I will add this reaction to our model.
In our model, we also have the reactions QH2OR and NADHQOR, which somewhat resemble the NADHDH reaction, but neither includes proton translocation and they are reversible. To prevent these reactions from forming a cycle and leaving incorrect duplicate reactions in the model, I will remove them.
__CYOR__
The last 'step' in the model electron transport chain is the transfer of electrons from the quinone to oxygen, pumping protons out of the cell. E. coli has a CYTBO3_4pp reaction that represents this, performed by a cytochrome oxidase. The model doesn't have this reaction, but from Beata's thesis and the genome annotation one would expect it to be present. We found the reaction in a way similar to the E. coli model. Therefore I will add the CYTBO3 reaction to our model, as indicated in Beata's thesis.
_____no_output_____
<code>
model.add_reaction(Reaction(id='FADRx'))_____no_output_____model.reactions.FADRx.name = 'Flavin reductase'_____no_output_____model.reactions.FADRx.annotation = model_e_coli.reactions.FADRx.annotation_____no_output_____model.reactions.FADRx.add_metabolites({
model.metabolites.fad_c:-1,
model.metabolites.h_c: -1,
model.metabolites.nadh_c:-1,
model.metabolites.fadh2_c:1,
model.metabolites.nad_c:1
})_____no_output_____#add NADH dehydrogenase reaction
model.add_reaction(Reaction(id='NADHDH'))_____no_output_____model.reactions.NADHDH.name = 'NADH Dehydrogenase (ubiquinone & 3.5 protons)'_____no_output_____model.reactions.NADHDH.annotation['ec-code'] = '1.6.5.3'
model.reactions.NADHDH.annotation['kegg.reaction'] = 'R11945'_____no_output_____model.reactions.NADHDH.add_metabolites({
model.metabolites.nadh_c:-1, model.metabolites.h_c: -4.5, model.metabolites.ubiquin_c:-1,
model.metabolites.nad_c: 1, model.metabolites.h_e: 3.5, model.metabolites.qh2_c: 1
})_____no_output_____model.remove_reactions(model.reactions.NADHQOR)C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\model.py:716: UserWarning:
need to pass in a list
model.remove_reactions(model.reactions.QH2OR)C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\group.py:110: UserWarning:
need to pass in a list
model.add_reaction(Reaction(id='CYTBO3'))_____no_output_____model.reactions.CYTBO3.name = 'Cytochrome oxidase bo3 (ubiquinol: 2.5 protons)'_____no_output_____model.reactions.CYTBO3.add_metabolites({
model.metabolites.o2_c:-0.5, model.metabolites.h_c: -2.5, model.metabolites.qh2_c:-1,
model.metabolites.h2o_c:1, model.metabolites.h_e: 2.5, model.metabolites.ubiquin_c:1
})_____no_output_____
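# Added sketch (not in the original notebook): after adding reactions by hand it is worth
# checking that they are mass- and charge-balanced. This relies on the metabolite formulas
# and charges being set in the model; an empty dict means the reaction is balanced.
for new_rxn in ['FADRx', 'NADHDH', 'CYTBO3']:
    print(new_rxn, model.reactions.get_by_id(new_rxn).check_mass_balance())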
</code>
In looking at the above, I also observed some other reactions that should be looked at and modified._____no_output_____
<code>
model.reactions.MALQOR.id = 'MDH2'_____no_output_____model.reactions.MDH2.bounds = (0,1000)_____no_output_____model.metabolites.hexcoa_c.id = 'hxcoa_c'_____no_output_____model.reactions.HEXOT.id = 'ACOAD2f'_____no_output_____model.metabolites.dccoa_c.id = 'dcacoa_c'_____no_output_____model.reactions.DECOT.id = 'ACOAD4f'_____no_output_____#in the wrong direction and id
model.reactions.GLYCDH_1.id = 'HPYRRx'_____no_output_____model.reactions.HPYRRx.bounds = (-1000,0)_____no_output_____#in the wrong direction
model.reactions.FMNRx.bounds = (-1000,0)_____no_output_____model.metabolites.get_by_id('3hbycoa_c').id = '3hbcoa_c'_____no_output_____
</code>
Even with the changes above we still do not restore growth... Supplying nmn_c restores growth, but supplying aspartate (the beginning of the pathway) doesn't solve the problem. So maybe the problem now lies more with the NAD biosynthesis pathway than with regeneration?_____no_output_____
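A test of the kind described above can be done with a temporary boundary reaction, as in this small sketch (my addition, following the commented-out pattern used earlier in this notebook; the reaction id 'nmn_test' is just an illustrative name):
<code>
# Sketch: temporarily allow free supply of nmn_c and check whether growth is restored.
with model:
    model.add_boundary(model.metabolites.nmn_c, type='sink', reaction_id='nmn_test')
    print('growth with free nmn_c:', model.optimize().objective_value)
</code>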
<code>
model.metabolites.nicrnt_c.name = 'Nicotinate ribonucleotide'_____no_output_____model.metabolites.ncam_c.name = 'Niacinamide'_____no_output_____#wrong direction
model.reactions.QULNS.bounds = (-1000,0)
#this rescued biomass accumulation! _____no_output_____#connected to aspartate_____no_output_____model.optimize().objective_value_____no_output_____#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')_____no_output_____
</code>
Flux is carried through the ATP synthase.
Still, it is strange that flux is not carried through the ETC, as one would expect it to be in the presence of oxygen. Therefore I will investigate where the extracellular protons come from.
It seems all the extracellular protons come from the export of phosphate (pi_c), which is proton-symport coupled. We are producing a lot of phosphate from the biomass reaction, though in theory phosphate should not be produced in such amounts, as it is also used for the generation of ATP from ADP. Right now I don't really see how to solve this problem. I've made an issue of it and will look into this at another time. _____no_output_____
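To see which reactions actually move protons in and out in the flux solution, here is a small sketch (my addition, not in the original notebook):
<code>
# Sketch: list the reactions involving extracellular protons (h_e) that carry flux.
solution = model.optimize()
for rxn in model.metabolites.h_e.reactions:
    if abs(solution.fluxes[rxn.id]) > 1e-6:
        print(rxn.id, round(solution.fluxes[rxn.id], 3), rxn.reaction)
</code>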
<code>
model.optimize()['ATPS4r']_____no_output_____model.metabolites.pi_c.summary()_____no_output_____
</code>
I also noticed that most ATP now comes from dGTP. The production of dGDP should only play a role in supplying nucleotides for biomass, so the flux it carries should be low. I will check where the majority of the dGTP comes from.
What is happening is the following: dgtp is converted to dgdp and atp (reaction ATDGDm). The dgdp then reacts with pep to form dGTP again. Pep formation is roughly energy neutral, but it is weird that the metabolism decides to do this instead of sending the pep into pyruvate via normal glycolysis and on into the TCA cycle._____no_output_____
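One way to trace this (a sketch added here, not in the original notebook; 'dgtp_c' is the identifier I assume this model uses for dGTP) is to list the reactions around dGTP and their fluxes:
<code>
# Sketch: show every reaction involving dGTP and its flux in the current solution.
solution = model.optimize()
for rxn in model.metabolites.get_by_id('dgtp_c').reactions:
    print(rxn.id, round(solution.fluxes[rxn.id], 3), rxn.reaction)
</code>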
<code>
#reaction to be removed
model.remove_reactions(model.reactions.PYRPT)C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\model.py:716: UserWarning:
need to pass in a list
C:\Users\vivmol\AppData\Local\Continuum\anaconda3\envs\g-thermo\lib\site-packages\cobra\core\group.py:110: UserWarning:
need to pass in a list
</code>
Removing this reaction triggers normal ATP production via the ETC and ATP synthase again. So this may be solved now. _____no_output_____
<code>
model.metabolites.pi_c.summary()_____no_output_____#save & commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')_____no_output_____
</code>
| {
"repository": "biosustain/p-thermo",
"path": "notebooks/28. Resolve issue 37-Reaction reversibility.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 42639,
"hexsha": "d06d0f96c4f2b247b6cdf91df305892a99930e8a",
"max_line_length": 5293,
"avg_line_length": 37.8340727595,
"alphanum_fraction": 0.5254579141
} |
# Notebook from goldford/Ecosystem-Model-Data-Framework
Path: notebooks/Analysis - Visualize Monte Carlo Results (R 3.6) v2.ipynb
<code>
# G Oldford Feb 19 2022
# visualize monte carlo results from ecosim Monte Carlo
# uses ggplot2
#
# https://erdavenport.github.io/R-ecology-lesson/05-visualization-ggplot2.html_____no_output_____library(tidyverse)
library(matrixStats)-- [1mAttaching packages[22m --------------------------------------- tidyverse 1.3.0 --
[32mv[39m [34mggplot2[39m 3.2.1 [32mv[39m [34mpurrr [39m 0.3.4
[32mv[39m [34mtibble [39m 2.1.3 [32mv[39m [34mdplyr [39m 1.0.2
[32mv[39m [34mtidyr [39m 1.1.2 [32mv[39m [34mstringr[39m 1.4.0
[32mv[39m [34mreadr [39m 1.3.1 [32mv[39m [34mforcats[39m 0.4.0
-- [1mConflicts[22m ------------------------------------------ tidyverse_conflicts() --
[31mx[39m [34mdplyr[39m::[32mfilter()[39m masks [34mstats[39m::filter()
[31mx[39m [34mdplyr[39m::[32mlag()[39m masks [34mstats[39m::lag()
Attaching package: 'matrixStats'
The following object is masked from 'package:dplyr':
count
# No biomass found in the auto written MC run out files, so saving from the plot direct from MC plugin
# The B's are relative to initialization year!
path_MC_sc1 = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//"
file_MC_sc1 = "MRM_SealsTKWJuveSalm_Feb172022_NOTKWforce_graph_MCs_500trials.csv"
n_MC_runs_1 = 500 # sets cols that correspond to seal B
path_MC_sc2 = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//SealWCT_Feb172022_midB-WCT//mc_Scenario 2b- WCT Forcing - Mid B//"
file_MC_sc2 = "BiomassDirectSaveMC_test_2022-02-19_500runs.csv"
n_MC_runs_2 = 500
relB_base = 0.134 # base yr seal B hard coded - careful
path_TS = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//"
file_TS = "SealWCT_B_timeseries_Scen2b_rev20220217v3_MidB.csv"
# ==== read MC results file ====
header_lines = 1
header_lines = 1
# rename col and get seals B only
results_trim_TKWForcemid_sc1 = results_df_sc1 %>% rename(year = Data) %>%
select(c("year",starts_with("X2..Seals"))) %>%
mutate(year_int = round(year,0)) %>%
filter(year_int < 2022) #deals with single row w/ erroneous large year at end of TS data
# head(results_trim_TKWForcemid_sc1)
results_df_sc2 <- read.csv(paste(path_MC_sc2, file_MC_sc2,sep=""), skip = header_lines)
# rename col and get seals B only
results_trim_TKWForcemid_sc2 = results_df_sc2 %>% rename(year = Data) %>%
select(c("year",starts_with("X2..Seals"))) %>%
mutate(year_int = round(year,0)) %>%
filter(year_int < 2022) #deals with single row w/ erroneous large year at end of TS data
# head(results_trim_TKWForcemid_sc2)
# ==== read TS reference file ====
header_lines = 3
sealobs_df <- read.csv(paste(path_TS, file_TS, sep=""), skip = header_lines)
#relB_base = sealobs_df$BiomassAbs[1]
# convert to relative B
sealobs_df$SealsObsRelB = sealobs_df$BiomassAbs / relB_base
seals_obs_relB = sealobs_df %>% rename(year = Type) %>%
select(c("year","SealsObsRelB")) %>%
mutate(source = "surveys")
# pivot tables to long, for scatter plotting
sc1_df = results_trim_TKWForcemid_sc1 %>% select(-year) %>%
pivot_longer(!year_int, names_to = "Mc_run_sc", values_to = "RelB") %>%
mutate(scenario = "No TKW")
sc2_df = results_trim_TKWForcemid_sc2 %>% select(-year) %>%
pivot_longer(!year_int, names_to = "Mc_run_sc", values_to = "RelB") %>%
mutate(scenario = "TKW")
# combine
sc_df = bind_rows(sc1_df,sc2_df)
# # rename 'data' col to Year
# results_trim = results_df %>% rename(year = Data) %>%
# select(c(0:n_MC_runs))
# head(results_trim)_____no_output_____# to do - eliminate scenarios where seals go extinct.
# likely this is due to issues with total catch (forcing) time series.
# EwE doesn't allow for F forcing._____no_output_____ggplot(data = sc_df, aes(x = year_int, y = RelB)) +
geom_point(alpha = 0.01, aes(color=scenario))Warning message:
"Removed 1680 rows containing missing values (geom_point)."# visualize, scenario 1 vs scenario 2, after year 2000 when seals plateau
sc_2000fwd_df = sc_df %>% filter(year_int < 2000) %>%
filter(RelB > 0.05)
ggplot(data = sc_2000fwd_df, aes(x = scenario, y = RelB)) +
geom_boxplot()_____no_output_____# OLD CODE BELOW_____no_output_____
# for geom_ribbon plots get upper and lower bound
columns <- grep("X2..Seals", colnames(results_trim_TKWForcemid))
results_trim_TKWmid = results_trim_TKWForcemid %>%
mutate(Mean= rowMeans(.[columns],,na.rm = TRUE),
logMean = rowMeans(log(.[columns]),na.rm = TRUE),
stdev=rowSds(as.matrix(.[columns]),na.rm = TRUE),
stdev_log=rowSds(as.matrix(log(.[columns])),na.rm = TRUE)) %>%
# 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)),
lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
mutate(year_int = round(year,0)) %>%
filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
mutate(source = "EwE") %>%
rename(year = year_int) %>%
# 12 vals per year - average the stats within years
group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
mean_std = mean(stdev, na.rm=TRUE),
mean_lwrB = mean(lower_B, na.rm=TRUE),
mean_uppB = mean(upper_B, na.rm=TRUE))
results_trim_TKWmid
`summarise()` ungrouping output (override with `.groups` argument)
model_obs_binding = bind_rows(results_trim2,seals_obs_relB)
(model_obs_binding)_____no_output_____ggplot(data = roughrundata_df, aes(x = year, y = seals_sc1)) +
geom_line() +
geom_ribbon(aes(ymin=seals_sc1_lo, ymax=seals_sc1_up),alpha = 0.1, fill = "blue") +
geom_line(aes(y=seals_sc2b)) +
geom_ribbon(aes(y=seals_sc2b, ymin=seals_sc2b_lo, ymax=seals_sc2b_up),alpha = 0.1, fill = "green") +
geom_point(data = seals_obs_relB_norecent, aes(y=SealsObsRelB_mt, x=year),alpha = 0.8, color = "black") +
ylab("Seal Biomass Density (mt km-2)") _____no_output_____# # for geom_ribbon plots get upper and lower bound
# columns <- c(2:n_MC_runs)
# Old stuff
# results_trim2 = results_trim_TKWForcemid %>%
# mutate(Mean= rowMeans(.[columns]),
# logMean = rowMeans(log(.[columns])),
# stdev=rowSds(as.matrix(.[columns])),
# stdev_log=rowSds(as.matrix(log(.[columns])))) %>%
# mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)), # 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
# lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
# mutate(year_int = round(year,0)) %>%
# filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
# select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
# mutate(source = "EwE") %>%
# rename(year = year_int) %>%
# # at this point there are 12 vals per year but these appear to jump every year
# # below will average the stats across each year
# group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
# mean_std = mean(stdev, na.rm=TRUE),
# mean_lwrB = mean(lower_B, na.rm=TRUE),
# mean_uppB = mean(upper_B, na.rm=TRUE))
#mutate(upper_B = exp(upper_logB),
# lower_B = exp(lower_logB))
# pivot wide to long
#results_piv = results_trim2 %>% pivot_longer(
# cols = starts_with("X2"),
# names_to = "Seals",
# names_prefix = "",
# values_to = "B",
# values_drop_na = TRUE
# )
# head(results_trim2)
# I can't find Biomass in the auto written MC run out files, so I'm saving from the plot in the MC plugin
#path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//SealWCT_Feb172022_midB-WCT//mc_Scenario 3c- MonteCarlo TKWForce Mid//"
#file = "BiomassPlotSave_Scen3b_TKWForce_min.csv"
#file = "BiomassPlotSave_Scen3c_TKWForce_mid_500runs.csv"
#file = "BiomassDirectSaveMC_test_2022-02-19.csv"
#starts_with(results_trim_TKWForcemid,"X2..Seals")
#grep("X2..Seals", colnames(results_trim_TKWForcemid))
#results_trim_TKWForcemid
# # for geom_ribbon plots get upper and lower bound
# columns <- grep("X2..Seals", colnames(results_trim_TKWForcemid))
# results_trim_TKWmid = results_trim_TKWForcemid %>%
# mutate(Mean= rowMeans(.[columns],,na.rm = TRUE),
# logMean = rowMeans(log(.[columns]),na.rm = TRUE),
# stdev=rowSds(as.matrix(.[columns]),na.rm = TRUE),
# stdev_log=rowSds(as.matrix(log(.[columns])),na.rm = TRUE)) %>%
# mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)), # 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
# lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
# mutate(year_int = round(year,0)) %>%
# filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
# select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
# mutate(source = "EwE") %>%
# rename(year = year_int) %>%
# # at this point there are 12 vals per year but these appear to jump every year
# # below will average the stats across each year
# group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
# mean_std = mean(stdev, na.rm=TRUE),
# mean_lwrB = mean(lower_B, na.rm=TRUE),
# mean_uppB = mean(upper_B, na.rm=TRUE))
#mutate(upper_B = exp(upper_logB),
# lower_B = exp(lower_logB))
# pivot wide to long
#results_piv = results_trim2 %>% pivot_longer(
# cols = starts_with("X2"),
# names_to = "Seals",
# names_prefix = "",
# values_to = "B",
# values_drop_na = TRUE
# )
# tail(results_trim_TKWmid)
# read seal time series data
# convert from abs to rel to match MC out
# path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//"
#file = "SealTKW_timeseries_Scen1_NoTKWForcing.csv"
# file = "SealWCT_B_timeseries_Scen2b_rev20220217v3_MidB.csv"
# header_lines = 3
# sealobs_df <- read.csv(paste(path, file,sep=""), skip = header_lines)
# #relB_base = sealobs_df$BiomassAbs[1]
# relB_base = 0.134
# sealobs_df$SealsObsRelB = sealobs_df$BiomassAbs / relB_base
# seals_obs_relB = sealobs_df %>% rename(year = Type) %>%
# select(c("year","SealsObsRelB")) %>%
# mutate(source = "surveys")
# #sealobs_df
# (seals_obs_relB)
# merge two tables
# model_obs_binding = bind_rows(results_trim2,seals_obs_relB)
# (model_obs_binding)_____no_output_____SealsObsRelB_1970on = seals_obs_relB %>% filter(year > 1969)_____no_output_____ggplot(data = results_trim2, aes(x = year, y = mean_yr)) +
geom_ribbon(aes(ymin=mean_lwrB, ymax=mean_uppB),alpha = 0.1, color = "blue") +
geom_ribbon(data = results_trim_TKWmid, aes(y=mean_yr, ymin=mean_lwrB, ymax=mean_uppB),alpha = 0.1, color = "blue") +
geom_point(data = SealsObsRelB_1970on, aes(y=SealsObsRelB, x=year),alpha = 0.8, color = "black") +
ylab("Relative Seal Biomass (SoG)")Warning message:
"Ignoring unknown aesthetics: y"# temporary
# read seal time series data
# convert from abs to rel to match MC out
path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//results//"
file = "biomass_annual_justresultsnoMC_scen1scen2b.csv"
header_lines = 0
roughrundata_df <- read.csv(paste(path, file,sep=""), skip = header_lines)
head(roughrundata_df)
_____no_output_____ggplot(data = roughrundata_df, aes(x = year, y = seals_sc1)) +
geom_line() +
geom_ribbon(aes(ymin=seals_sc1_lo, ymax=seals_sc1_up),alpha = 0.1, fill = "blue") +
geom_line(aes(y=seals_sc2b)) +
geom_ribbon(aes(y=seals_sc2b, ymin=seals_sc2b_lo, ymax=seals_sc2b_up),alpha = 0.1, fill = "green") +
geom_point(data = seals_obs_relB_norecent, aes(y=SealsObsRelB_mt, x=year),alpha = 0.8, color = "black") +
ylab("Seal Biomass Density (mt km-2)") Warning message:
"Ignoring unknown aesthetics: y"seals_obs_relB$SealsObsRelB_mt = seals_obs_relB$SealsObsRelB * 0.169_____no_output_____seals_obs_relB_norecent = seals_obs_relB %>% filter(seals_obs_relB < 2015)_____no_output_____
</code>
| {
"repository": "goldford/Ecosystem-Model-Data-Framework",
"path": "notebooks/Analysis - Visualize Monte Carlo Results (R 3.6) v2.ipynb",
"matched_keywords": [
"ecology"
],
"stars": null,
"size": 360421,
"hexsha": "d06d782f4636163354afe37c8b688d31a4788183",
"max_line_length": 264144,
"avg_line_length": 368.9058341863,
"alphanum_fraction": 0.9059433274
} |
# Notebook from jcjveraa/docs-2
Path: site/en/tutorials/structured_data/time_series.ipynb
##### Copyright 2019 The TensorFlow Authors._____no_output_____
<code>
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License._____no_output_____
</code>
# Time series forecasting_____no_output_____<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/time_series"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>_____no_output_____This tutorial is an introduction to time series forecasting using TensorFlow. It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs).
This is covered in two main parts, with subsections:
* Forecast for a single timestep:
* A single feature.
* All features.
* Forecast multiple steps:
* Single-shot: Make the predictions all at once.
* Autoregressive: Make one prediction at a time and feed the output back to the model._____no_output_____## Setup_____no_output_____
<code>
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False_____no_output_____
</code>
## The weather dataset
This tutorial uses a <a href="https://www.bgc-jena.mpg.de/wetter/" class="external">weather time series dataset</a> recorded by the <a href="https://www.bgc-jena.mpg.de" class="external">Max Planck Institute for Biogeochemistry</a>.
This dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. For efficiency, you will use only the data collected between 2009 and 2016. This section of the dataset was prepared by François Chollet for his book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)._____no_output_____
<code>
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
csv_path, _ = os.path.splitext(zip_path)_____no_output_____
</code>
This tutorial will just deal with **hourly predictions**, so start by sub-sampling the data from 10 minute intervals to 1h:_____no_output_____
<code>
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')_____no_output_____
</code>
Let's take a glance at the data. Here are the first few rows:_____no_output_____
<code>
df.head()_____no_output_____
</code>
Here is the evolution of a few features over time. _____no_output_____
<code>
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)
plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)_____no_output_____
</code>
### Inspect and cleanup_____no_output_____Next look at the statistics of the dataset:_____no_output_____
<code>
df.describe().transpose()_____no_output_____
</code>
#### Wind velocity_____no_output_____One thing that should stand out is the `min` value of the wind velocity, `wv (m/s)` and `max. wv (m/s)` columns. This `-9999` is likely erroneous. There's a separate wind direction column, so the velocity should be `>=0`. Replace it with zeros:
_____no_output_____
<code>
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()_____no_output_____
</code>
### Feature engineering
Before diving in to build a model it's important to understand your data, and be sure that you're passing the model appropriately formatted data._____no_output_____#### Wind
The last column of the data, `wd (deg)`, gives the wind direction in units of degrees. Angles do not make good model inputs: 360° and 0° should be close to each other and wrap around smoothly. Direction shouldn't matter if the wind is not blowing.
Right now the distribution of wind data looks like this:_____no_output_____
<code>
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')_____no_output_____
</code>
But this will be easier for the model to interpret if you convert the wind direction and velocity columns to a wind **vector**:_____no_output_____
<code>
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)_____no_output_____
</code>
The distribution of wind vectors is much simpler for the model to correctly interpret._____no_output_____
<code>
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')_____no_output_____
</code>
#### Time_____no_output_____Similarly the `Date Time` column is very useful, but not in this string form. Start by converting it to seconds:_____no_output_____
<code>
timestamp_s = date_time.map(datetime.datetime.timestamp)_____no_output_____
</code>
Similar to the wind direction the time in seconds is not a useful model input. Being weather data it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.
A simple approach to convert it to a usable signal is to use `sin` and `cos` to convert the time to clear "Time of day" and "Time of year" signals:_____no_output_____
<code>
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))_____no_output_____plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')_____no_output_____
</code>
This gives the model access to the most important frequency features. In this case you knew ahead of time which frequencies were important.
If you didn't know, you can determine which frequencies are important using an `fft`. To check our assumptions, here is the `tf.signal.rfft` of the temperature over time. Note the obvious peaks at frequencies near `1/year` and `1/day`: _____no_output_____
<code>
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')_____no_output_____
</code>
### Split the data_____no_output_____We'll use a `(70%, 20%, 10%)` split for the training, validation, and test sets. Note the data is **not** being randomly shuffled before splitting. This is for two reasons.
1. It ensures that chopping the data into windows of consecutive samples is still possible.
2. It ensures that the validation/test results are more realistic, being evaluated on data collected after the model was trained._____no_output_____
<code>
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]_____no_output_____
</code>
### Normalize the data
It is important to scale features before training a neural network. Normalization is a common way of doing this scaling. Subtract the mean and divide by the standard deviation of each feature._____no_output_____The mean and standard deviation should only be computed using the training data so that the models have no access to the values in the validation and test sets.
It's also arguable that the model shouldn't have access to future values in the training set when training, and that this normalization should be done using moving averages. That's not the focus of this tutorial, and the validation and test sets ensure that you get (somewhat) honest metrics. So in the interest of simplicity this tutorial uses a simple average._____no_output_____
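If you did want a causal normalization that only uses information available up to each time step, a rolling mean and standard deviation is one option. Here is a minimal sketch (not part of this tutorial's pipeline; the 30-day window length is an arbitrary assumption):
<code>
# Causal normalization sketch: each row is scaled using only past statistics.
# `df` is the feature DataFrame from above; the window length is an assumption.
rolling_mean = df.rolling(window=24 * 30, min_periods=2).mean()
rolling_std = df.rolling(window=24 * 30, min_periods=2).std()
df_causal = (df - rolling_mean) / rolling_std
# The earliest rows have too little history and will be NaN; drop or backfill them.
</code>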
<code>
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std_____no_output_____
</code>
Now peek at the distribution of the features. Some features do have long tails, but there are no obvious errors like the `-9999` wind velocity value._____no_output_____
<code>
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)_____no_output_____
</code>
## Data windowing
The models in this tutorial will make a set of predictions based on a window of consecutive samples from the data.
The main features of the input windows are:
* The width (number of time steps) of the input and label windows
* The time offset between them.
* Which features are used as inputs, labels, or both.
This tutorial builds a variety of models (including Linear, DNN, CNN and RNN models), and uses them for both:
* *Single-output*, and *multi-output* predictions.
* *Single-time-step* and *multi-time-step* predictions.
This section focuses on implementing the data windowing so that it can be reused for all of those models.
_____no_output_____Depending on the task and type of model you may want to generate a variety of data windows. Here are some examples:
1. To make a single prediction 24h into the future, given 24h of history, you might define a window like this:

2. A model that makes a prediction 1h into the future, given 6h of history, would need a window like this:
_____no_output_____The rest of this section defines a `WindowGenerator` class. This class can:
1. Handle the indexes and offsets as shown in the diagrams above.
2. Split windows of features into `(features, labels)` pairs.
3. Plot the content of the resulting windows.
4. Efficiently generate batches of these windows from the training, evaluation, and test data, using `tf.data.Dataset`s._____no_output_____### 1. Indexes and offsets
Start by creating the `WindowGenerator` class. The `__init__` method includes all the necessary logic for the input and label indices.
It also takes the train, eval, and test dataframes as input. These will be converted to `tf.data.Dataset`s of windows later._____no_output_____
<code>
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])_____no_output_____
</code>
Here is code to create the 2 windows shown in the diagrams at the start of this section:_____no_output_____
<code>
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['T (degC)'])
w1_____no_output_____w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['T (degC)'])
w2_____no_output_____
</code>
### 2. Split
Given a list of consecutive inputs, the `split_window` method will convert them to a window of inputs and a window of labels.
The example `w2`, above, will be split like this:

This diagram doesn't show the `features` axis of the data, but this `split_window` function also handles the `label_columns` so it can be used for both the single output and multi-output examples._____no_output_____
<code>
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window_____no_output_____
</code>
Try it out:_____no_output_____
<code>
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')_____no_output_____
</code>
Typically data in TensorFlow is packed into arrays where the outermost index is across examples (the "batch" dimension). The middle indices are the "time" or "space" (width, height) dimension(s). The innermost indices are the features.
The code above took a batch of 3, 7-timestep windows, with 19 features at each time step. It split them into a batch of 6-timestep, 19 feature inputs, and a 1-timestep 1-feature label. The label only has one feature because the `WindowGenerator` was initialized with `label_columns=['T (degC)']`. Initially this tutorial will build models that predict single output labels._____no_output_____### 3. Plot
Here is a plot method that allows a simple visualization of the split window:_____no_output_____
<code>
w2.example = example_inputs, example_labels_____no_output_____def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot_____no_output_____
</code>
This plot aligns inputs, labels, and (later) predictions based on the time that the item refers to:_____no_output_____
<code>
w2.plot()_____no_output_____
</code>
You can plot the other columns, but the example window `w2` configuration only has labels for the `T (degC)` column._____no_output_____
<code>
w2.plot(plot_col='p (mbar)')_____no_output_____
</code>
### 4. Create `tf.data.Dataset`s_____no_output_____Finally this `make_dataset` method will take a time series `DataFrame` and convert it to a `tf.data.Dataset` of `(input_window, label_window)` pairs using the `preprocessing.timeseries_dataset_from_array` function._____no_output_____
<code>
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset_____no_output_____
</code>
The `WindowGenerator` object holds training, validation and test data. Add properties for accessing them as `tf.data.Datasets` using the above `make_dataset` method. Also add a standard example batch for easy access and plotting:_____no_output_____
<code>
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example_____no_output_____
</code>
Now the `WindowGenerator` object gives you access to the `tf.data.Dataset` objects, so you can easily iterate over the data.
The `Dataset.element_spec` property tells you the structure, `dtypes` and shapes of the dataset elements._____no_output_____
<code>
# Each element is an (inputs, label) pair
w2.train.element_spec_____no_output_____
</code>
Iterating over a `Dataset` yields concrete batches:_____no_output_____
<code>
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')_____no_output_____
</code>
## Single step models
The simplest model you can build on this sort of data is one that predicts a single feature's value, 1 timestep (1h) in the future based only on the current conditions.
So start by building models to predict the `T (degC)` value 1h into the future.

Configure a `WindowGenerator` object to produce these single-step `(input, label)` pairs:_____no_output_____
<code>
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['T (degC)'])
single_step_window_____no_output_____
</code>
The `window` object creates `tf.data.Datasets` from the training, validation, and test sets, allowing you to easily iterate over batches of data.
_____no_output_____
<code>
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')_____no_output_____
</code>
### Baseline
Before building a trainable model it would be good to have a performance baseline as a point for comparison with the later more complicated models.
This first task is to predict temperature 1h in the future given the current value of all features. The current values include the current temperature.
So start with a model that just returns the current temperature as the prediction, predicting "No change". This is a reasonable baseline since temperature changes slowly. Of course, this baseline will work less well if you make a prediction further in the future.
_____no_output_____
<code>
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]_____no_output_____
</code>
Instantiate and evaluate this model:_____no_output_____
<code>
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)_____no_output_____
</code>
That printed some performance metrics, but those don't give you a feeling for how well the model is doing.
The `WindowGenerator` has a plot method, but the plots won't be very interesting with only a single sample. So, create a wider `WindowGenerator` that generates windows of 24h of consecutive inputs and labels at a time.
The `wide_window` doesn't change the way the model operates. The model still makes predictions 1h into the future based on a single input time step. Here the `time` axis acts like the `batch` axis: Each prediction is made independently with no interaction between time steps._____no_output_____
<code>
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['T (degC)'])
wide_window_____no_output_____
</code>
This expanded window can be passed directly to the same `baseline` model without any code changes. This is possible because the inputs and labels have the same number of timesteps, and the baseline just forwards the input to the output:
_____no_output_____
<code>
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)_____no_output_____
</code>
Plotting the baseline model's predictions you can see that it is simply the labels, shifted right by 1h._____no_output_____
<code>
wide_window.plot(baseline)_____no_output_____
</code>
In the above plots of three examples the single step model is run over the course of 24h. This deserves some explanation:
* The blue "Inputs" line shows the input temperature at each time step. The model receives all features; this plot only shows the temperature.
* The green "Labels" dots show the target prediction value. These dots are shown at the prediction time, not the input time. That is why the range of labels is shifted 1 step relative to the inputs.
* The orange "Predictions" crosses are the model's predictions for each output time step. If the model were predicting perfectly, the predictions would land directly on the "Labels"._____no_output_____### Linear model
The simplest **trainable** model you can apply to this task is to insert a linear transformation between the input and output. In this case the output from a time step only depends on that step:

A `layers.Dense` with no `activation` set is a linear model. The layer only transforms the last axis of the data from `(batch, time, inputs)` to `(batch, time, units)`; it is applied independently to every item across the `batch` and `time` axes._____no_output_____
<code>
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])_____no_output_____print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)_____no_output_____
</code>
This tutorial trains many models, so package the training procedure into a function:_____no_output_____
<code>
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history_____no_output_____
</code>
Train the model and evaluate its performance:_____no_output_____
<code>
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)_____no_output_____
</code>
Like the `baseline` model, the linear model can be called on batches of wide windows. Used this way the model makes a set of independent predictions on consecutive time steps. The `time` axis acts like another `batch` axis. There are no interactions between the predictions at each time step.
_____no_output_____
<code>
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)_____no_output_____
</code>
Here is the plot of its example predictions on the `wide_window`. Note how in many cases the prediction is clearly better than just returning the input temperature, but in a few cases it's worse:_____no_output_____
<code>
wide_window.plot(linear)_____no_output_____
</code>
One advantage to linear models is that they're relatively simple to interpret.
You can pull out the layer's weights, and see the weight assigned to each input:_____no_output_____
<code>
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)_____no_output_____
</code>
Sometimes the model doesn't even place the most weight on the input `T (degC)`. This is one of the risks of random initialization. _____no_output_____### Dense
Before applying models that actually operate on multiple time-steps, it's worth checking the performance of deeper, more powerful, single input step models.
Here's a model similar to the `linear` model, except it stacks a few `Dense` layers between the input and the output: _____no_output_____
<code>
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)_____no_output_____
</code>
### Multi-step dense
A single-time-step model has no context for the current values of its inputs. It can't see how the input features are changing over time. To address this issue the model needs access to multiple time steps when making predictions:

_____no_output_____The `baseline`, `linear` and `dense` models handled each time step independently. Here the model will take multiple time steps as input to produce a single output.
Create a `WindowGenerator` that will produce batches of 3h of inputs and 1h of labels:_____no_output_____Note that the `WindowGenerator`'s `shift` parameter is relative to the end of the two windows.
_____no_output_____
<code>
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['T (degC)'])
conv_window_____no_output_____conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")_____no_output_____
</code>
You could train a `dense` model on a multiple-input-step window by adding a `layers.Flatten` as the first layer of the model:_____no_output_____
<code>
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])_____no_output_____print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)_____no_output_____history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)_____no_output_____conv_window.plot(multi_step_dense)_____no_output_____
</code>
The main downside of this approach is that the resulting model can only be executed on input windows of exactly this shape. _____no_output_____
<code>
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')_____no_output_____
</code>
The convolutional models in the next section fix this problem._____no_output_____### Convolution neural network
A convolution layer (`layers.Conv1D`) also takes multiple time steps as input to each prediction._____no_output_____Below is the **same** model as `multi_step_dense`, re-written with a convolution.
Note the changes:
* The `layers.Flatten` and the first `layers.Dense` are replaced by a `layers.Conv1D`.
* The `layers.Reshape` is no longer necessary since the convolution keeps the time axis in its output._____no_output_____
<code>
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])_____no_output_____
</code>
Run it on an example batch to see that the model produces outputs with the expected shape:_____no_output_____
<code>
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)_____no_output_____
</code>
Train and evaluate it on the `conv_window`; it should give performance similar to the `multi_step_dense` model._____no_output_____
<code>
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)_____no_output_____
</code>
The difference between this `conv_model` and the `multi_step_dense` model is that the `conv_model` can be run on inputs of any length. The convolutional layer is applied to a sliding window of inputs:

If you run it on wider input, it produces wider output:_____no_output_____
<code>
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)_____no_output_____
</code>
Note that the output is shorter than the input. To make training or plotting work, you need the labels and predictions to have the same length. So build a `WindowGenerator` to produce wide windows with a few extra input time steps so the label and prediction lengths match: _____no_output_____
<code>
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['T (degC)'])
wide_conv_window_____no_output_____print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)_____no_output_____
</code>
Now you can plot the model's predictions on a wider window. Note the 3 input time steps before the first prediction. Every prediction here is based on the 3 preceding timesteps:_____no_output_____
<code>
wide_conv_window.plot(conv_model)_____no_output_____
</code>
### Recurrent neural network
A Recurrent Neural Network (RNN) is a type of neural network well-suited to time series data. RNNs process a time series step-by-step, maintaining an internal state from time-step to time-step.
For more details, read the [text generation tutorial](https://www.tensorflow.org/tutorials/text/text_generation) or the [RNN guide](https://www.tensorflow.org/guide/keras/rnn).
In this tutorial, you will use an RNN layer called Long Short Term Memory ([LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM))._____no_output_____An important constructor argument for all keras RNN layers is the `return_sequences` argument. This setting can configure the layer in one of two ways.
1. If `False`, the default, the layer only returns the output of the final timestep, giving the model time to warm up its internal state before making a single prediction:

2. If `True` the layer returns an output for each input. This is useful for:
* Stacking RNN layers.
* Training a model on multiple timesteps simultaneously.
_____no_output_____
<code>
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])_____no_output_____
</code>
With `return_sequences=True` the model can be trained on 24h of data at a time.
Note: This will give a pessimistic view of the model's performance. On the first timestep the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier._____no_output_____
<code>
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)_____no_output_____history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)_____no_output_____wide_window.plot(lstm_model)_____no_output_____
</code>
### Performance_____no_output_____With this dataset, each of the models typically does slightly better than the one before it._____no_output_____
<code>
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()_____no_output_____for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')_____no_output_____
</code>
### Multi-output models
The models so far all predicted a single output feature, `T (degC)`, for a single time step.
All of these models can be converted to predict multiple features just by changing the number of units in the output layer and adjusting the training windows to include all features in the `labels`.
_____no_output_____
<code>
single_step_window = WindowGenerator(
# `WindowGenerator` returns all features as labels if you
# don't set the `label_columns` argument.
input_width=1, label_width=1, shift=1)
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
for example_inputs, example_labels in wide_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')_____no_output_____
</code>
Note above that the `features` axis of the labels now has the same depth as the inputs, instead of 1._____no_output_____#### Baseline
The same baseline model can be used here, but this time repeating all features instead of selecting a specific `label_index`._____no_output_____
<code>
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])_____no_output_____val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)_____no_output_____
</code>
#### Dense_____no_output_____
<code>
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=num_features)
])_____no_output_____history = compile_and_fit(dense, single_step_window)
IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)_____no_output_____
</code>
#### RNN
_____no_output_____
<code>
%%time
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate( wide_window.val)
performance['LSTM'] = lstm_model.evaluate( wide_window.test, verbose=0)
print()_____no_output_____
</code>
<a id="residual"></a>
#### Advanced: Residual connections
The `Baseline` model from earlier took advantage of the fact that the sequence doesn't change drastically from time step to time step. Every model trained in this tutorial so far was randomly initialized, and then had to learn that the output is a small change from the previous time step.
While you can get around this issue with careful initialization, it's simpler to build this into the model structure.
It's common in time series analysis to build models that instead of predicting the next value, predict how the value will change in the next timestep.
Similarly, "Residual networks" or "ResNets" in deep learning refer to architectures where each layer adds to the model's accumulating result.
That is how you take advantage of the knowledge that the change should be small.

Essentially this initializes the model to match the `Baseline`. For this task it helps models converge faster, with slightly better performance._____no_output_____This approach can be used in conjunction with any model discussed in this tutorial.
Here it is being applied to the LSTM model; note the use of `tf.initializers.zeros` to ensure that the initial predicted changes are small and don't overpower the residual connection. There are no symmetry-breaking concerns for the gradients here, since the `zeros` are only used on the last layer._____no_output_____
<code>
class ResidualWrapper(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
def call(self, inputs, *args, **kwargs):
delta = self.model(inputs, *args, **kwargs)
# The prediction for each timestep is the input
# from the previous time step plus the delta
# calculated by the model.
return inputs + delta_____no_output_____%%time
residual_lstm = ResidualWrapper(
tf.keras.Sequential([
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(
num_features,
# The predicted deltas should start small
# So initialize the output layer with zeros
kernel_initializer=tf.initializers.zeros)
]))
history = compile_and_fit(residual_lstm, wide_window)
IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)
print()_____no_output_____
</code>
#### Performance_____no_output_____Here is the overall performance for these multi-output models._____no_output_____
<code>
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()_____no_output_____for name, value in performance.items():
print(f'{name:15s}: {value[1]:0.4f}')_____no_output_____
</code>
The above performances are averaged across all model outputs._____no_output_____## Multi-step models
Both the single-output and multiple-output models in the previous sections made **single time step predictions**, 1h into the future.
This section looks at how to expand these models to make **multiple time step predictions**.
In a multi-step prediction, the model needs to learn to predict a range of future values. Thus, unlike a single step model, where only a single future point is predicted, a multi-step model predicts a sequence of the future values.
There are two rough approaches to this:
1. Single shot predictions where the entire time series is predicted at once.
2. Autoregressive predictions where the model only makes single step predictions and its output is fed back as its input.
In this section all the models will predict **all the features across all output time steps**.
_____no_output_____For the multi-step model, the training data again consists of hourly samples. However, here, the models will learn to predict 24h of the future, given 24h of the past.
Here is a `Window` object that generates these slices from the dataset:_____no_output_____
<code>
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window_____no_output_____
</code>
### Baselines_____no_output_____A simple baseline for this task is to repeat the last input time step for the required number of output timesteps:
_____no_output_____
<code>
class MultiStepLastBaseline(tf.keras.Model):
def call(self, inputs):
return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])
last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance = {}
multi_performance = {}
multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)_____no_output_____
</code>
Since this task is to predict 24h given 24h another simple approach is to repeat the previous day, assuming tomorrow will be similar:
_____no_output_____
<code>
class RepeatBaseline(tf.keras.Model):
def call(self, inputs):
return inputs
repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)_____no_output_____
</code>
### Single-shot models
One high-level approach to this problem is to use a "single-shot" model, where the model makes the entire sequence prediction in a single step.
This can be implemented efficiently as a `layers.Dense` with `OUT_STEPS*features` output units. The model just needs to reshape that output to the required `(OUT_STEPS, features)`._____no_output_____#### Linear
A simple linear model based on the last input time step does better than either baseline, but is underpowered. The model needs to predict `OUT_STEPS` time steps from a single input time step with a linear projection. It can only capture a low-dimensional slice of the behavior, likely based mainly on the time of day and time of year.
_____no_output_____
<code>
multi_linear_model = tf.keras.Sequential([
# Take the last time-step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_linear_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)_____no_output_____
</code>
#### Dense
Adding a `layers.Dense` between the input and output gives the linear model more power, but is still only based on a single input timestep._____no_output_____
<code>
multi_dense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(512, activation='relu'),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_dense_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)_____no_output_____
</code>
#### CNN_____no_output_____A convolutional model makes predictions based on a fixed-width history, which may lead to better performance than the dense model since it can see how things are changing over time:
_____no_output_____
<code>
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, CONV_WIDTH, features]
tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
# Shape => [batch, 1, conv_units]
tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_conv_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)_____no_output_____
</code>
#### RNN_____no_output_____A recurrent model can learn to use a long history of inputs, if it's relevant to the predictions the model is making. Here the model will accumulate internal state for 24h, before making a single prediction for the next 24h.
In this single-shot format, the LSTM only needs to produce an output at the last time step, so set `return_sequences=False`.

_____no_output_____
<code>
multi_lstm_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, lstm_units]
# Adding more `lstm_units` just overfits more quickly.
tf.keras.layers.LSTM(32, return_sequences=False),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_lstm_model, multi_window)
IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)_____no_output_____
</code>
### Advanced: Autoregressive model
The above models all predict the entire output sequence in a single step.
In some cases it may be helpful for the model to decompose this prediction into individual time steps. Then each model's output can be fed back into itself at each step and predictions can be made conditioned on the previous one, like in the classic [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/abs/1308.0850).
One clear advantage to this style of model is that it can be set up to produce output with a varying length.
You could take any of the single-step multi-output models trained in the first half of this tutorial and run them in an autoregressive feedback loop, but here you'll focus on building a model that's been explicitly trained to do that.

_____no_output_____#### RNN
This tutorial only builds an autoregressive RNN model, but this pattern could be applied to any model that was designed to output a single timestep.
The model will have the same basic form as the single-step `LSTM` models: An `LSTM` followed by a `layers.Dense` that converts the `LSTM` outputs to model predictions.
A `layers.LSTM` is a `layers.LSTMCell` wrapped in the higher level `layers.RNN` that manages the state and sequence results for you (See [Keras RNNs](https://www.tensorflow.org/guide/keras/rnn) for details).
In this case the model has to manually manage the inputs for each step so it uses `layers.LSTMCell` directly for the lower level, single time step interface._____no_output_____
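To make that relationship concrete, here is a small illustration (a sketch, not used by the model below): wrapping an `LSTMCell` in the generic `RNN` layer gives a sequence-processing layer equivalent to `layers.LSTM`, up to weight initialization.
<code>
# An LSTM layer is an LSTMCell wrapped in the generic RNN layer;
# both map (batch, time, features) => (batch, time, 32).
cell_based = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(32), return_sequences=True)
layer_based = tf.keras.layers.LSTM(32, return_sequences=True)
</code>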
<code>
class FeedBack(tf.keras.Model):
def __init__(self, units, out_steps):
super().__init__()
self.out_steps = out_steps
self.units = units
self.lstm_cell = tf.keras.layers.LSTMCell(units)
# Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
self.dense = tf.keras.layers.Dense(num_features)_____no_output_____feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)_____no_output_____
</code>
The first method this model needs is a `warmup` method to initialize its internal state based on the inputs. Once trained this state will capture the relevant parts of the input history. This is equivalent to the single-step `LSTM` model from earlier:_____no_output_____
<code>
def warmup(self, inputs):
# inputs.shape => (batch, time, features)
# x.shape => (batch, lstm_units)
x, *state = self.lstm_rnn(inputs)
# predictions.shape => (batch, features)
prediction = self.dense(x)
return prediction, state
FeedBack.warmup = warmup_____no_output_____
</code>
This method returns a single time-step prediction, and the internal state of the LSTM:_____no_output_____
<code>
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape_____no_output_____
</code>
With the RNN's state and an initial prediction, you can now continue iterating the model, feeding the prediction at each step back in as the input.
The simplest approach to collecting the output predictions is to use a python list, and `tf.stack` after the loop._____no_output_____Note: Stacking a python list like this only works with eager-execution, using `Model.compile(..., run_eagerly=True)` for training, or with a fixed length output. For a dynamic output length you would need to use a `tf.TensorArray` instead of a python list, and `tf.range` instead of the python `range`._____no_output_____
<code>
def call(self, inputs, training=None):
# Collect the predictions in a Python list (see the note above about `tf.TensorArray`).
predictions = []
# Initialize the lstm state
prediction, state = self.warmup(inputs)
# Insert the first prediction
predictions.append(prediction)
# Run the rest of the prediction steps
for n in range(1, self.out_steps):
# Use the last prediction as input.
x = prediction
# Execute one lstm step.
x, state = self.lstm_cell(x, states=state,
training=training)
# Convert the lstm output to a prediction.
prediction = self.dense(x)
# Add the prediction to the output
predictions.append(prediction)
# predictions.shape => (time, batch, features)
predictions = tf.stack(predictions)
# predictions.shape => (batch, time, features)
predictions = tf.transpose(predictions, [1, 0, 2])
return predictions
FeedBack.call = call_____no_output_____
</code>
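For reference, here is a sketch (an alternative, not part of the tutorial's model; the method name is made up for illustration) of how the same loop could be written with `tf.TensorArray` and `tf.range`, as mentioned in the note above, so it also works with a dynamic number of output steps:
<code>
def call_with_tensorarray(self, inputs, training=None):
  # Same logic as `call` above, but collects the outputs in a tf.TensorArray.
  predictions = tf.TensorArray(tf.float32, size=self.out_steps)
  prediction, state = self.warmup(inputs)
  predictions = predictions.write(0, prediction)
  for n in tf.range(1, self.out_steps):
    x, state = self.lstm_cell(prediction, states=state, training=training)
    prediction = self.dense(x)
    predictions = predictions.write(n, prediction)
  # (time, batch, features) => (batch, time, features)
  return tf.transpose(predictions.stack(), [1, 0, 2])
</code>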
Test run this model on the example inputs:_____no_output_____
<code>
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)_____no_output_____
</code>
Now train the model:_____no_output_____
<code>
history = compile_and_fit(feedback_model, multi_window)
IPython.display.clear_output()
multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)_____no_output_____
</code>
### Performance_____no_output_____There are clearly diminishing returns as a function of model complexity on this problem._____no_output_____
<code>
x = np.arange(len(multi_performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(),
rotation=45)
plt.ylabel(f'MAE (average over all times and outputs)')
_ = plt.legend()_____no_output_____
</code>
The metrics for the multi-output models in the first half of this tutorial show the performance averaged across all output features. These performances are similar, but they are also averaged across output time steps. _____no_output_____
<code>
for name, value in multi_performance.items():
print(f'{name:8s}: {value[1]:0.4f}')_____no_output_____
</code>
The gains achieved going from a dense model to convolutional and recurrent models are only a few percent (if any), and the autoregressive model performed clearly worse. So these more complex approaches may not be worthwhile on **this** problem, but there was no way to know without trying, and these models could be helpful for **your** problem._____no_output_____## Next steps
This tutorial was a quick introduction to time series forecasting using TensorFlow.
* For further understanding, see:
* Chapter 15 of [Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/), 2nd Edition
* Chapter 6 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
* Lesson 8 of [Udacity's intro to TensorFlow for deep learning](https://www.udacity.com/course/intro-to-tensorflow-for-deep-learning--ud187), and the [exercise notebooks](https://github.com/tensorflow/examples/tree/master/courses/udacity_intro_to_tensorflow_for_deep_learning)
* Also remember that you can implement any [classical time series model](https://otexts.com/fpp2/index.html) in TensorFlow; this tutorial just focuses on TensorFlow's built-in functionality._____no_output_____
| {
"repository": "jcjveraa/docs-2",
"path": "site/en/tutorials/structured_data/time_series.ipynb",
"matched_keywords": [
"evolution"
],
"stars": 2,
"size": 98353,
"hexsha": "d06ebcfeaa96064c6c99ecf3633df19903bda3e8",
"max_line_length": 408,
"avg_line_length": 33.8681129477,
"alphanum_fraction": 0.5354590099
} |
# Notebook from haltakov/course-content-dl
Path: tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____
# DL Neuromatch Academy: Week 1, Day 2, Tutorial 1
# Gradients and AutoGrad
__Content creators:__ Saeed Salehi, Vladimir Haltakov, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Atnafu Lambebo, Yu-Fang Yang
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, Spiros Chavlis_____no_output_____---
# Tutorial Objectives
Day 2 Tutorial 1 will continue building your PyTorch skill set and motivate PyTorch's core functionality, Autograd. In this notebook, we will cover the key concepts and ideas of:
* Gradient descent
* PyTorch Autograd
* PyTorch nn module
_____no_output_____
<code>
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('<iframe src="https://docs.google.com/presentation/d/1kfWWYhSIkczYfjebhMaqQILTCu7g94Q-o_ZcWb1QAKs/embed?start=false&loop=false&delayms=3000" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')_____no_output_____
</code>
---
# Setup
_____no_output_____
<code>
# Imports
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch import nn
import time
import random
from tqdm.notebook import tqdm, trange_____no_output_____#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")_____no_output_____#@title Plotting functions
def ex3_plot(epochs, losses, values, gradients):
f, (plot1, plot2, plot3) = plt.subplots(3, 1, sharex=True, figsize=(10, 7))
plot1.set_title("Cross Entropy Loss")
plot1.plot(np.linspace(1, epochs, epochs), losses, color='r')
plot2.set_title("First Parameter value")
plot2.plot(np.linspace(1, epochs, epochs), values, color='c')
plot3.set_title("First Parameter gradient")
plot3.plot(np.linspace(1, epochs, epochs), gradients, color='m')
plot3.set_xlabel("Epoch")
plt.show()_____no_output_____#@title Helper functions
seed = 1943 # McCulloch & Pitts (1943)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
np.random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False_____no_output_____
</code>
---
# Section 1: Gradient Descent Algorithm
Since the goal of most learning algorithms is **minimizing the risk (cost) function**, optimization is the soul of learning! The gradient descent algorithm, along with its variations such as stochastic gradient descent, is one of the most powerful and popular optimization methods used for deep learning.
## 1.1: Gradient Descent
_____no_output_____
<code>
#@title Video 1.1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="PFQeUDxQFls", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____#@title Video 1.2: Gradient Descent
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Z3dyRLR8GbM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____
</code>
Gradient Descent (introduced by Augustin-Louis Cauchy in 1847) is an **iterative method** to **minimize** a **continuous** and (ideally) **differentiable function** of **many variables**.
### Definition
Let $f(\mathbf{w}): \mathbb{R}^d \rightarrow \mathbb{R}$ be a differentiable function. Gradient Descent is an iterative algorithm for minimizing the function $f$, starting with an initial value for variables $\mathbf{w}$, taking steps of size $\eta$ in the direction of the negative gradient at the current point to update the variables $\mathbf{w}$.
$$ \mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)}) $$
where $\eta > 0$ and $\nabla f (\mathbf{w})= \left( \frac{\partial f(\mathbf{w})}{\partial w_1}, ..., \frac{\partial f(\mathbf{w})}{\partial w_d} \right)$. Since negative gradients always point locally in the direction of steepest descent, the algorithm makes small steps at each point **towards** the minimum.
<br/>
### Vanilla Algorithm
---
> *inputs*: initial guess $\mathbf{w}^{(0)}$, step size $\eta > 0$, number of steps $T$
> *For* *t = 1, 2, ..., T* *do* \
$\qquad$ $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})$\
*end*
> *return*: $\mathbf{w}^{(t+1)}$
---
<br/>
To be able to use this algorithm, we need to calculate the gradient of the loss with respect to the learnable parameters.
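As a concrete illustration of the update rule, here is a minimal sketch (separate from the tutorial exercises; the quadratic example function, step size, and number of steps are assumptions chosen for clarity):
<code>
import numpy as np

def gradient_descent(grad_f, w0, eta=0.1, T=100):
  """Vanilla gradient descent: repeatedly step against the gradient."""
  w = np.asarray(w0, dtype=float)
  for _ in range(T):
    w = w - eta * grad_f(w)
  return w

# Example: minimize f(w) = ||w||^2, whose gradient is 2w.
print(gradient_descent(lambda w: 2 * w, w0=[3.0, -2.0]))  # tends towards [0, 0]
</code>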
_____no_output_____## 1.2: Calculating Gradients
To minimize the empirical risk (loss) function using gradient descent, we need to calculate the vector of partial derivatives:
$$\dfrac{\partial Loss}{\partial \mathbf{w}} = \left[ \dfrac{\partial Loss}{\partial w_1}, \dfrac{\partial Loss}{\partial w_2} , ..., \dfrac{\partial Loss}{\partial w_d} \right]^{\top} $$
Although PyTorch and other deep learning frameworks (e.g. JAX and TensorFlow) provide us with incredibly powerful and efficient algorithms for automatic differentiation, calculating a few derivatives by hand would be fun.
### Exercise 1.2
1. Given $L(w_1, w_2) = w_1^2 - 2w_1 w_2$, find $\dfrac{\partial L}{\partial w_1}$ and $\dfrac{\partial L}{\partial w_2}$.
<br/>
2. Given $f(x) = sin(x)$ and $g(x) = \ln(x)$, find the derivative of their composite function $\dfrac{d (f \circ g)(x)}{d x}$ (*hint: chain rule*).
**Chain rule**: For a composite function $F(x) = f(g(x)) \equiv (f \circ g)(x)$:
$$F'(x) = f'(g(x)) \cdot g'(x)$$
or differently denoted:
$$ \frac{dF}{dx} = \frac{df}{dg} ~ \frac{dg}{dx} $$
<br/>
3. Given $f(x, y, z) = \tanh \left( \ln \left[1 + z \frac{2x}{sin(y)} \right] \right)$, how easy is it to derive $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$? (*hint: you don't have to actually calculate them!*)
_____no_output_____## 1.3: Computational Graphs and Backprop
_____no_output_____
<code>
#@title Video 1.3: Computational Graph
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="7c8iCHcVgVs", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____
</code>
The last function in *Exercise 1.2* is an example of how overwhelming the derivation of gradients can get, as the number of variables and nested functions increases. This function is still extremely simple compared to the loss functions of modern neural networks. So how do PyTorch and similar frameworks approach such beasts?
### 1.3.1: Computational Graphs
Let’s look at the function again:
$$f(x, y, z) = \tanh \left(\ln \left[1 + z \frac{2x}{sin(y)} \right] \right)$$
we can build a so-called computational graph (shown below) to break the original function into smaller and more approachable expressions. If this "reverse engineering" approach seems unintuitive and arbitrary, it's because it is! Usually, the graph is built first.
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/comput_graph.png" alt="Computation Graph" width="800"/></center>
Starting from $x$, $y$, and $z$ and following the arrows and expressions, you would see that our graph returns the same function as $f$. It does so by calculating intermediate variables $a,b,c,d,$ and $e$. This is called the **forward pass**.
Now, let’s start from $f$, and work our way against the arrows while calculating the gradient of each expression as we go. This is called **backward pass**, which later inspires **backpropagation of errors**.
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/comput_graph_full.png" alt="Computation Graph full" width="1200"/></center>
Because we've split the computation into simple operations on intermediate variables, I hope you can appreciate how easy it now is to calculate the partial derivatives.
Now we can use chain rule and simply calculate any gradient:
$$ \dfrac{\partial f}{\partial x} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial a}~\dfrac{\partial a}{\partial x} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d}\cdot z \cdot \frac{1}{b} \cdot 2$$
Conveniently, the values for $e$, $b$, and $d$ are available to us from when we did the forward pass through the graph. That is, the partial derivatives have simple expressions in terms of the intermediate variables $a,b,c,d,e$ that we calculated and stored during the forward pass.
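You can sanity-check this chain-rule expression numerically with PyTorch's autograd (a small sketch, separate from the exercises; the sample values of $x$, $y$, and $z$ are arbitrary assumptions):
<code>
# Forward pass of f(x, y, z) = tanh(ln(1 + z * 2x / sin(y))) at a sample point.
x = torch.tensor(1.5, requires_grad=True)
y = torch.tensor(0.7, requires_grad=True)
z = torch.tensor(2.0, requires_grad=True)
f = torch.tanh(torch.log(1 + z * (2 * x) / torch.sin(y)))
f.backward()  # autograd fills in x.grad, y.grad and z.grad

# Manual chain-rule expression for df/dx, written with the intermediate variables.
b = torch.sin(y)
d = 1 + z * (2 * x) / b
e = torch.log(d)
manual_dfdx = (1 - torch.tanh(e) ** 2) * (1 / d) * z * (1 / b) * 2
print(x.grad.item(), manual_dfdx.item())  # the two values should agree
</code>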
_____no_output_____### Exercise 1.3
For the function above, calculate the $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$ using the computational graph and chain rule._____no_output_____For more: [Calculus on Computational Graphs: Backpropagation](https://colah.github.io/posts/2015-08-Backprop/)_____no_output_____---
# Section 2: PyTorch AutoGrad_____no_output_____
<code>
#@title Video 2.1: Automatic Differentiation
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="h8B8Nlcz7yY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____
</code>
Deep learning frameworks such as PyTorch, JAX, and TensorFlow come with a very efficient and sophisticated set of algorithms, commonly known as Automatic differentiation. AutoGrad is PyTorch's automatic differentiation engine. Here we start by covering the essentials of AutoGrad, but you will learn more in the coming days.
_____no_output_____## Section 2.1: Forward Propagation
Everything starts with the forward propagation (pass). PyTorch records the computational graph as we declare variables and apply operations to them, and it traverses this graph when we call the backward pass. PyTorch rebuilds the graph for every forward pass, so we are free to iterate over it or change it (simply put, PyTorch uses a dynamic graph).
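Here is a minimal sketch of this dynamic behaviour (assuming `torch` has been imported, as elsewhere in this notebook): each forward pass records a fresh graph, `.backward()` consumes it, and the computed gradients accumulate in `.grad`.
<code>
w = torch.tensor([2.0], requires_grad=True)

for step in range(3):
    loss = (3 * w)**2    # forward pass: a new graph is recorded here
    loss.backward()      # backward pass: gradients flow into w.grad, then the graph is freed
    print(step, w.grad)  # gradients accumulate: tensor([36.]), tensor([72.]), tensor([108.])
    # in a real training loop we would zero the gradients between iterations_____no_output_____
</code>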
Before we start our first example, let's recall the gradient descent algorithm. In gradient descent, we only need the gradient of our cost function with respect to the variables that are accessible to us for updating (changing). These variables are often called "learnable parameters", or simply parameters, in PyTorch. In the case of neural networks, the weights and biases are typically the learnable parameters._____no_output_____### Exercise 2.1
In PyTorch, we can set the optional argument `requires_grad` of tensors to `True`, so that PyTorch tracks every operation on them and records it in the computational graph. For this exercise, use the provided tensors to build the following graph.
<br/>
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/simple_graph.png" alt="Simple nn graph" width="600"/></center>_____no_output_____
<code>
class SimpleGraph:
def __init__(self, w=None, b=None):
"""Initializing the SimpleGraph
Args:
w (float): initial value for weight
b (float): initial value for bias
"""
if w is None:
self.w = torch.randn(1, requires_grad=True)
else:
self.w = torch.tensor([w], requires_grad=True)
if b is None:
self.b = torch.randn(1, requires_grad=True)
else:
self.b = torch.tensor([b], requires_grad=True)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 1D tensor of features
Returns:
torch.Tensor: model predictions
"""
assert isinstance(x, torch.Tensor)
#################################################
## Implement the forward pass to calculate the prediction
## Note that prediction is not the loss, but the value after `tanh`
# Complete the function and remove or comment the line below
raise NotImplementedError("Forward Pass `forward`")
#################################################
prediction = ...
return prediction
def sq_loss(y_true, y_prediction):
"""L2 loss function
Args:
y_true (torch.Tensor): 1D tensor of target labels
y_prediction (torch.Tensor): 1D tensor of predictions
Returns:
torch.Tensor: L2-loss (squared error)
"""
assert isinstance(y_true, torch.Tensor)
assert isinstance(y_prediction, torch.Tensor)
#################################################
## Implement the L2-loss (squared error) given the true label and the prediction
# Complete the function and remove or comment the line below
raise NotImplementedError("Loss function `sq_loss`")
#################################################
loss = ...
return loss
# # Uncomment to run
# feature = torch.tensor([1]) # input tensor
# target = torch.tensor([7]) # target tensor
# simple_graph = SimpleGraph(-0.5, 0.5)
# print("initial weight = {} \ninitial bias = {}".format(simple_graph.w.item(),
# simple_graph.b.item()))
# prediction = simple_graph.forward(feature)
# square_loss = sq_loss(target, prediction)
# print("for x={} and y={}, prediction={} and L2 Loss = {}".format(feature.item(),
# target.item(),
# prediction.item(),
# square_loss.item()))_____no_output_____
</code>
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_b9fccdbe.py)
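For reference, one possible way to fill in the two functions is sketched below. It is consistent with the graph figure and with the analytical gradients used later in this tutorial, but treat it as a sketch; the linked solution remains the reference.
<code>
# A possible completion (sketch): prediction = tanh(w*x + b), loss = (y_true - y_prediction)**2
def forward_sketch(w, b, x):
    return torch.tanh(w * x + b)

def sq_loss_sketch(y_true, y_prediction):
    return (y_true - y_prediction)**2_____no_output_____
</code>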
_____no_output_____It is important to appreciate the fact that PyTorch can follow our operations as we arbitrarily move through classes and functions._____no_output_____## 2.2 Backward Propagation
Here is where all the magic lies. We can first look at the operations that PyTorch kept track of. The tensor property `grad_fn` keeps a reference to the backward propagation function of the operation that produced the tensor._____no_output_____
<code>
print('Gradient function for prediction =', prediction.grad_fn)
print('Gradient function for loss =', square_loss.grad_fn)_____no_output_____
</code>
Now let's kick off the backward pass to calculate the gradients, by calling `.backward()` on the tensor we wish to initiate the backpropagation from. Often, `.backward()` is called on the loss, which is the last node of the graph. Before doing that, let's calculate the loss gradients by hand:
$$\frac{\partial{loss}}{\partial{w}} = - 2 x (y_t - y_p)(1 - y_p^2)$$
$$\frac{\partial{loss}}{\partial{b}} = - 2 (y_t - y_p)(1 - y_p^2)$$
We can then compare it to PyTorch gradients, which can be obtained by calling `.grad` on tensors.
**Important Notes**
* Always keep in mind that PyTorch is tracking all the operations for tensors that require grad. To stop this tracking, we use `.detach()`.
* PyTorch builds the graph during the forward pass and frees it (together with the saved intermediate results) once `.backward()` is called. If you try calling `.backward()` a second time, you get the following error (see the short sketch after these notes):
*`Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.`*
* Gradient tracking is "contagious": every tensor computed from tensors that require gradients becomes part of the graph, because all the intermediate gradients are needed for the chain rule. Therefore, if we want to reuse such a tensor outside the graph (e.g. the prediction tensor in the cell below), we need to `.detach()` it first.
* `.backward()` accumulates gradients in the leaves. For most training methods, we call `.zero_grad()` on the model or on the optimizer to zero the `.grad` attributes (see [autograd.backward](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) for more)._____no_output_____
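Before comparing the analytical and automatic gradients, here is a short sketch of the notes above (again assuming `torch` is imported): a second `.backward()` call only works with `retain_graph=True`, the gradients of the two passes accumulate, and `.detach()` cuts a tensor out of the graph.
<code>
a = torch.tensor([1.0], requires_grad=True)
out = (2 * a)**2

out.backward(retain_graph=True)  # keep the saved intermediate results alive
out.backward()                   # possible only because retain_graph=True was used above
print(a.grad)                    # the two passes accumulate: tensor([16.])

detached = out.detach()          # same data, but no longer part of the graph
print(detached.requires_grad)    # False_____no_output_____
</code>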
<code>
# analytical gradients (remember detaching)
ana_dloss_dw = - 2 * feature * (target - prediction.detach())*(1 - prediction.detach()**2)
ana_dloss_db = - 2 * (target - prediction.detach())*(1 - prediction.detach()**2)
square_loss.backward() # first we should call the backward to build the graph
autograd_dloss_dw = simple_graph.w.grad
autograd_dloss_db = simple_graph.b.grad
print(ana_dloss_dw == autograd_dloss_dw)
print(ana_dloss_db == autograd_dloss_db)_____no_output_____
</code>
References and more:
* [A GENTLE INTRODUCTION TO TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html)
* [AUTOMATIC DIFFERENTIATION PACKAGE - TORCH.AUTOGRAD](https://pytorch.org/docs/stable/autograd.html)
* [AUTOGRAD MECHANICS](https://pytorch.org/docs/stable/notes/autograd.html)
* [AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html)_____no_output_____---
# Section 3: PyTorch's Neural Net module (`nn.Module`)_____no_output_____
<code>
#@title Video 3.1: Putting it together
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="rUChBWj9ihw", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____
</code>
In this section we will focus on training the simple neural network model from yesterday._____no_output_____
<code>
#@title Generate the sample dataset
import sklearn.datasets
# Create a dataset of 256 points with a little noise
X_orig, y_orig = sklearn.datasets.make_moons(256, noise=0.1)
# Plot the dataset
plt.figure(figsize=(9, 7))
plt.scatter(X_orig[:,0], X_orig[:,1], s=40, c=y_orig)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
# Select the appropriate device (GPU or CPU)
device = "cuda" if torch.cuda.is_available() else "cpu"
# Convert the 2D points to a float tensor
X = torch.from_numpy(X_orig).type(torch.FloatTensor)
X = X.to(device)
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
y = y.to(device)_____no_output_____
</code>
Let's define the same simple neural network model as in Day 1. This time we will not define a `train` method, but instead implement it outside of the class so we can better inspect it._____no_output_____
<code>
# Simple neural network with a single hidden layer
class NaiveNet(nn.Module):
def __init__(self):
"""
Initializing the NaiveNet
"""
super(NaiveNet, self).__init__()
self.layers = nn.Sequential(
nn.Linear(2, 16),
nn.ReLU(),
nn.Linear(16, 2),
)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 2D tensor of features
Returns:
torch.Tensor: model predictions
"""
return self.layers(x)_____no_output_____
</code>
PyTorch provides us with ready to use neural network building blocks, such as linear or recurrent layers, activation functions and loss functions, packed in the [`torch.nn`](https://pytorch.org/docs/stable/nn.html) module. If we build a neural network using the `torch.nn` layers, the weights and biases are already in `requires_grad` mode.
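As a quick check (a minimal sketch, assuming `torch.nn` is imported as `nn`, as in the model definition above), the parameters registered by an `nn.Linear` layer already require gradients:
<code>
layer = nn.Linear(2, 16)
print(layer.weight.requires_grad, layer.bias.requires_grad)  # True True
print(sum(p.numel() for p in layer.parameters()))            # 2*16 weights + 16 biases = 48_____no_output_____
</code>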
Now let's prepare the training! We need 3 things for that:
* **Model parameters** - Model parameters refer to all the learnable parameters of the model, which are accessible by calling `.parameters()` on the model. Please note that not all tensors with `requires_grad=True` are seen as model parameters. To create a custom model parameter, you can use [`nn.Parameter`](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html) (*A kind of Tensor that is to be considered a module parameter*). When we create a new instance of our model, the layer parameters are initialized using a uniform distribution (more on that in the coming tutorials and days).
* **Loss function** - we need to define the loss that we are going to be optimizing. The cross entropy loss is suitable for classification problems.
* **Optimizer** - the optimizer will perform the adaptation of the model parameters according to the chosen loss function. The optimizer takes the parameters of the model (often by calling `.parameters()` on the model) as its input to be adapted.
You will learn more details about choosing the right loss function and optimizer later in the course._____no_output_____
<code>
# Create an instance of our network
naive_model = NaiveNet().to(device)
# Create a cross entropy loss function
cross_entropy_loss = nn.CrossEntropyLoss()
# Stochastic Gradient Descent optimizer with a learning rate of 1e-1
learning_rate = 1e-1
sgd_optimizer = torch.optim.SGD(naive_model.parameters(), lr=learning_rate)_____no_output_____
</code>
The training process in PyTorch is interactive - you can perform training iterations as you wish and inspect the results after each iteration. We encourage keeping the loss function outside the explicit forward function of the model, and instead calculating it on the output (prediction).
Let's perform one training iteration. You can run the cell multiple times and see how the parameters are being updated and the loss is reducing. We pick the parameters of the first neuron in the first layer. Please make sure you go through all the commands and discuss their purpose with the pod._____no_output_____
<code>
# Reset all gradients to 0
sgd_optimizer.zero_grad()
# Forward pass (Compute the output of the model on the data)
y_logits = naive_model(X)
# Compute the loss
loss = cross_entropy_loss(y_logits, y)
print('Loss:', loss.item())
# Perform backpropagation to build the graph and compute the gradients
loss.backward()
# `.parameters()` returns a generator
print('Gradients:', next(naive_model.parameters()).grad[0].detach().cpu().numpy())
# Print model's first learnable parameters
print('Parameters before:', next(naive_model.parameters())[0].detach().cpu().numpy())
# Optimizer takes a step in the steepest direction (negative of gradient)
# and "updates" the weights and biases of the network
sgd_optimizer.step()
# Print model's first learnable parameters
print('Parameters after: ', next(naive_model.parameters())[0].detach().cpu().numpy())
</code>
## Exercise 3
Following everything we learned so far, we ask you to complete the `train` function below._____no_output_____
<code>
def train(features, labels, model, loss_fun, optimizer, n_epochs):
"""Training function
Args:
features (torch.Tensor): features (input) with shape torch.Size([n_samples, 2])
labels (torch.Tensor): labels (targets) with shape torch.Size([n_samples])
model (torch nn.Module): the neural network
loss_fun (function): loss function
optimizer(function): optimizer
n_epochs (int): number of training epochs
Returns:
list: record (evolution) of losses
list: record (evolution) of value of the first parameter
list: record (evolution) of gradient of the first parameter
"""
loss_record = [] # keeping records of loss
par_values = [] # keeping records of first parameter
par_grads = [] # keeping records of gradient of first parameter
# we use `tqdm` methods for progress bar
epoch_range = trange(n_epochs, desc='loss: ', leave=True)
for i in epoch_range:
if loss_record:
epoch_range.set_description("loss: {:.4f}".format(loss_record[-1]))
epoch_range.refresh() # to show immediately the update
time.sleep(0.01)
#################################################
## Implement the missing parts of the training loop
# Complete the function and remove or comment the line below
raise NotImplementedError("Training setup `train`")
#################################################
... # Initialize gradients to 0
predictions = ... # Compute model prediction (output)
loss = ... # Compute the loss
... # Compute gradients (backward pass)
... # update parameters (optimizer takes a step)
loss_record.append(loss.item())
par_values.append(next(model.parameters())[0][0].item())
par_grads.append(next(model.parameters()).grad[0][0].item())
return loss_record, par_values, par_grads
# # Uncomment to run
# epochs = 5000
# losses, values, gradients = train(X, y,
# naive_model,
# cross_entropy_loss,
# sgd_optimizer,
# epochs)
# ex3_plot(epochs, losses, values, gradients)_____no_output_____
</code>
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_364cd4e2.py)
*Example output:*
<img alt='Solution hint' align='left' width=704 height=488 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/W1D2_Tutorial1_Solution_364cd4e2_2.png>
_____no_output_____
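For reference, the missing lines of the loop can be completed along the lines of the sketch below, mirroring the single training iteration shown earlier; the linked solution is the official one.
<code>
def train_step(features, labels, model, loss_fun, optimizer):
    """One training iteration (a reference sketch for the missing lines in `train` above)."""
    optimizer.zero_grad()                 # reset the accumulated gradients
    predictions = model(features)         # forward pass
    loss = loss_fun(predictions, labels)  # compute the loss
    loss.backward()                       # backward pass: compute the gradients
    optimizer.step()                      # update the parameters
    return loss.item()_____no_output_____
</code>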
<code>
#@title Video 3.2: Wrap-up
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="zFmWs6doqhM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video_____no_output_____
</code>
| {
"repository": "haltakov/course-content-dl",
"path": "tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 35680,
"hexsha": "d06fd54463088bdc177a5356657d261971a1b6b7",
"max_line_length": 603,
"avg_line_length": 38.4068891281,
"alphanum_fraction": 0.592853139
} |
# Notebook from Steve-Hawk/nrpytutorial
Path: Tutorial-ScalarWaveCurvilinear.ipynb
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-59152712-8');
</script>
# Generating C code for the right-hand-side of the scalar wave equation, in ***curvilinear*** coordinates, using a reference metric formalism
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
[comment]: <> (Abstract: TODO)
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). In addition, all expressions have been validated against a trusted code (the [original SENR/NRPy+ code](https://bitbucket.org/zach_etienne/nrpy)).
### NRPy+ Source Code for this module: [ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py](../edit/ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py)
[comment]: <> (Introduction: TODO)_____no_output_____<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
0. [Preliminaries](#prelim): Reference Metrics and Picking Best Coordinate System to Solve the PDE
1. [Example](#example): The scalar wave equation in spherical coordinates
1. [Step 1](#contracted_christoffel): Contracted Christoffel symbols $\hat{\Gamma}^i = \hat{g}^{ij}\hat{\Gamma}^k_{ij}$ in spherical coordinates, using NRPy+
1. [Step 2](#rhs_scalarwave_spherical): The right-hand side of the scalar wave equation in spherical coordinates, using NRPy+
1. [Step 3](#code_validation): Code Validation against `ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs` NRPy+ Module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file_____no_output_____<a id='prelim'></a>
# Preliminaries: Reference Metrics and Picking Best Coordinate System to Solve the PDE \[Back to [top](#toc)\]
$$\label{prelim}$$
Recall from [NRPy+ tutorial notebook on the Cartesian scalar wave equation](Tutorial-ScalarWave.ipynb), the scalar wave equation in 3D Cartesian coordinates is given by
$$\partial_t^2 u = c^2 \nabla^2 u \text{,}$$
where $u$ (the amplitude of the wave) is a function of time and Cartesian coordinates in space: $u = u(t,x,y,z)$ (spatial dimension as-yet unspecified), and subject to some initial condition
$$u(0,x,y,z) = f(x,y,z),$$
with suitable (sometimes approximate) spatial boundary conditions.
To simplify this equation, let's first choose units such that $c=1$. Alternative wave speeds can be constructed
by simply rescaling the time coordinate, with the net effect being that the time $t$ is replaced with time in dimensions of space; i.e., $t\to c t$:
$$\partial_t^2 u = \nabla^2 u.$$
As we learned in the [NRPy+ tutorial notebook on reference metrics](Tutorial-Reference_Metric.ipynb), reference metrics are a means to pick the best coordinate system for the PDE we wish to solve. However, to take advantage of reference metrics requires first that we generalize the PDE. In the case of the scalar wave equation, this involves first rewriting in [Einstein notation](https://en.wikipedia.org/wiki/Einstein_notation) (with implied summation over repeated indices) via
$$(-\partial_t^2 + \nabla^2) u = \eta^{\mu\nu} u_{,\ \mu\nu} = 0,$$
where $u_{,\mu\nu} = \partial_\mu \partial_\nu u$, and $\eta^{\mu\nu}$ is the contravariant flat-space metric tensor with components $\text{diag}(-1,1,1,1)$.
Next we apply the "comma-goes-to-semicolon rule" and replace $\eta^{\mu\nu}$ with $\hat{g}^{\mu\nu}$ to generalize the scalar wave equation to an arbitrary reference metric $\hat{g}^{\mu\nu}$:
$$\hat{g}^{\mu\nu} u_{;\ \mu\nu} = \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u = 0,$$
where $\hat{\nabla}_{\mu}$ denotes the [covariant derivative](https://en.wikipedia.org/wiki/Covariant_derivative) with respect to the reference metric basis vectors $\hat{x}^{\mu}$, and $\hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u$ is the covariant
[D'Alembertian](https://en.wikipedia.org/wiki/D%27Alembert_operator) of $u$.
For example, suppose we wish to model a short-wavelength wave that is nearly spherical. In this case, if we were to solve the wave equation PDE in Cartesian coordinates, we would in principle need high resolution in all three cardinal directions. If instead we chose spherical coordinates centered at the center of the wave, we might need high resolution only in the radial direction, with only a few points required in the angular directions. Thus choosing spherical coordinates would be far more computationally efficient than modeling the wave in Cartesian coordinates.
Let's now expand the covariant scalar wave equation in arbitrary coordinates. Since the covariant derivative of a scalar is equivalent to its partial derivative, we have
\begin{align}
0 &= \hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u \\
&= \hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \partial_{\nu} u.
\end{align}
$\partial_{\nu} u$ transforms as a one-form under covariant differentiation, so we have
$$\hat{\nabla}_{\mu} \partial_{\nu} u = \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau_{\mu\nu} \partial_\tau u,$$
where
$$\hat{\Gamma}^\tau_{\mu\nu} = \frac{1}{2} \hat{g}^{\tau\alpha} \left(\partial_\nu \hat{g}_{\alpha\mu} + \partial_\mu \hat{g}_{\alpha\nu} - \partial_\alpha \hat{g}_{\mu\nu} \right)$$
are the [Christoffel symbols](https://en.wikipedia.org/wiki/Christoffel_symbols) associated with the reference metric $\hat{g}_{\mu\nu}$.
Then the scalar wave equation is written:
$$0 = \hat{g}^{\mu \nu} \left( \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau_{\mu\nu} \partial_\tau u\right).$$
Define the contracted Christoffel symbols:
$$\hat{\Gamma}^\tau = \hat{g}^{\mu\nu} \hat{\Gamma}^\tau_{\mu\nu}.$$
Then the scalar wave equation is given by
$$0 = \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u.$$
The reference metrics we adopt satisfy
$$\hat{g}^{t \nu} = -\delta^{t \nu},$$
where $\delta^{t \nu}$ is the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta). Therefore the scalar wave equation in curvilinear coordinates can be written
\begin{align}
0 &= \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u \\
&= -\partial_t^2 u + \hat{g}^{i j} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u \\
\implies \partial_t^2 u &= \hat{g}^{i j} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u,
\end{align}
where repeated Latin indices denote implied summation over *spatial* components only. This module implements the bottom equation for arbitrary reference metrics satisfying $\hat{g}^{t \nu} = -\delta^{t \nu}$. To gain an appreciation for what NRPy+ accomplishes automatically, let's first work out the scalar wave equation in spherical coordinates by hand:_____no_output_____<a id='example'></a>
# Example: The scalar wave equation in spherical coordinates \[Back to [top](#toc)\]
$$\label{example}$$
For example, the spherical reference metric is written
$$\hat{g}_{\mu\nu} = \begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & r^2 & 0 \\
0 & 0 & 0 & r^2 \sin^2 \theta \\
\end{pmatrix}.
$$
Since the inverse of a diagonal matrix is simply the inverse of the diagonal elements, we can write
$$\hat{g}^{\mu\nu} = \begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \frac{1}{r^2} & 0 \\
0 & 0 & 0 & \frac{1}{r^2 \sin^2 \theta} \\
\end{pmatrix}.$$
The scalar wave equation in these coordinates can thus be written
\begin{align}
0 &= \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u \\
&= \hat{g}^{tt} \partial_t^2 u + \hat{g}^{rr} \partial_r^2 u + \hat{g}^{\theta\theta} \partial_\theta^2 u + \hat{g}^{\phi\phi} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u \\
&= -\partial_t^2 u + \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2
u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u\\
\implies \partial_t^2 u &= \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2
u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u
\end{align}
The contracted Christoffel symbols
$\hat{\Gamma}^\tau$ can then be computed directly from the metric $\hat{g}_{\mu\nu}$.
It can be shown (exercise to the reader) that the only nonzero
components of $\hat{\Gamma}^\tau$ in static spherical polar coordinates are
given by
\begin{align}
\hat{\Gamma}^r &= -\frac{2}{r} \\
\hat{\Gamma}^\theta &= -\frac{\cos\theta}{r^2 \sin\theta}.
\end{align}
Thus we have found the Laplacian in spherical coordinates is simply:
\begin{align}
\nabla^2 u &=
\partial_r^2 u + \frac{1}{r^2} \partial_\theta^2 u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u\\
&= \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2 u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u + \frac{2}{r} \partial_r u + \frac{\cos\theta}{r^2 \sin\theta} \partial_\theta u
\end{align}
(cf. http://mathworld.wolfram.com/SphericalCoordinates.html; though note that they defined the angle $\phi$ as $\theta$ and $\theta$ as $\phi$.)_____no_output_____<a id='contracted_christoffel'></a>
# Step 1: Contracted Christoffel symbols $\hat{\Gamma}^i = \hat{g}^{ij}\hat{\Gamma}^k_{ij}$ in spherical coordinates, using NRPy+ \[Back to [top](#toc)\]
$$\label{contracted_christoffel}$$
Let's next use NRPy+ to derive the contracted Christoffel symbols
$$\hat{g}^{ij} \hat{\Gamma}^k_{ij}$$
in spherical coordinates, where $i\in\{1,2,3\}$ and $j\in\{1,2,3\}$ are spatial indices.
As discussed in the [NRPy+ tutorial notebook on reference metrics](Tutorial-Reference_Metric.ipynb), several reference-metric-related quantities in spherical coordinates are computed in NRPy+ (provided the parameter **`reference_metric::CoordSystem`** is set to **`"Spherical"`**), including the inverse spatial spherical reference metric $\hat{g}^{ij}$ and the Christoffel symbols from this reference metric $\hat{\Gamma}^{i}_{jk}$. _____no_output_____
<code>
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import reference_metric as rfm
# reference_metric::CoordSystem can be set to Spherical, SinhSpherical, SinhSphericalv2,
# Cylindrical, SinhCylindrical, SinhCylindricalv2, etc.
# See reference_metric.py and NRPy+ tutorial notebook on
# reference metrics for full list and description of how
# to extend.
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
par.set_parval_from_str("grid::DIM",3)
rfm.reference_metric()
contractedGammahatU = ixp.zerorank1()
for k in range(3):
for i in range(3):
for j in range(3):
contractedGammahatU[k] += rfm.ghatUU[i][j] * rfm.GammahatUDD[k][i][j]
for k in range(3):
print("contracted GammahatU["+str(k)+"]:")
sp.pretty_print(sp.simplify(contractedGammahatU[k]))
if k<2:
print("\n\n")contracted GammahatU[0]:
-2/xx₀
contracted GammahatU[1]:
-1/(xx₀²⋅tan(xx₁))
contracted GammahatU[2]:
0
</code>
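As a quick cross-check of the output above, we can recompute the same contracted Christoffel symbols with plain SymPy, independently of NRPy+ (a sketch; here $r$, $\theta$, $\phi$ play the role of NRPy+'s `xx0`, `xx1`, `xx2`):
<code>
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
xx = [r, th, ph]
ghatDD = sp.diag(1, r**2, r**2*sp.sin(th)**2)  # spatial spherical reference metric
ghatUU = ghatDD.inv()

# Christoffel symbols: Gamma^k_{ij} = (1/2) ghat^{kl} (d_i ghat_{lj} + d_j ghat_{li} - d_l ghat_{ij})
GammahatUDD = [[[sum(sp.Rational(1, 2)*ghatUU[k, l]*(sp.diff(ghatDD[l, j], xx[i])
                                                     + sp.diff(ghatDD[l, i], xx[j])
                                                     - sp.diff(ghatDD[i, j], xx[l]))
                     for l in range(3))
                 for j in range(3)] for i in range(3)] for k in range(3)]

# Contracted Christoffel symbols: Gamma^k = ghat^{ij} Gamma^k_{ij}
contracted = [sp.simplify(sum(ghatUU[i, j]*GammahatUDD[k][i][j]
                              for i in range(3) for j in range(3))) for k in range(3)]
print(contracted)  # expect -2/r, -cos(theta)/(r**2*sin(theta)) (= -1/(r**2*tan(theta))), and 0_____no_output_____
</code>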
<a id='rhs_scalarwave_spherical'></a>
# Step 2: The right-hand side of the scalar wave equation in spherical coordinates, using NRPy+ \[Back to [top](#toc)\]
$$\label{rhs_scalarwave_spherical}$$
Following our [implementation of the scalar wave equation in Cartesian coordinates](Tutorial-ScalarWave.ipynb), we will introduce a new variable $v=\partial_t u$ that will enable us to split the second time derivative into two first-order time derivatives:
\begin{align}
\partial_t u &= v \\
\partial_t v &= \hat{g}^{ij} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u.
\end{align}
Adding back the sound speed $c$, we have a choice of a single factor of $c$ multiplying both right-hand sides, or a factor of $c^2$ multiplying the second equation only. We'll choose the latter:
\begin{align}
\partial_t u &= v \\
\partial_t v &= c^2 \left(\hat{g}^{ij} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u\right).
\end{align}
Now let's generate the C code for the finite-difference representations of the right-hand sides of the above "time evolution" equations for $u$ and $v$. Since the right-hand side of $\partial_t v$ contains implied sums over $i$ and $j$ in the first term, and an implied sum over $k$ in the second term, we'll find it useful to split the right-hand side into two parts
\begin{equation}
\partial_t v = c^2 \left(
{\underbrace {\textstyle \hat{g}^{ij} \partial_{i} \partial_{j} u}_{\text{Part 1}}}
{\underbrace {\textstyle -\hat{\Gamma}^i \partial_i u}_{\text{Part 2}}}\right),
\end{equation}
and perform the implied sums in two pieces:_____no_output_____
<code>
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
from outputC import *_____no_output_____# The name of this module ("scalarwave") is given by __name__:
thismodule = __name__
# Step 0: Read the spatial dimension parameter as DIM.
DIM = par.parval_from_str("grid::DIM")
# Step 1: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",4)
# Step 2a: Reset the gridfunctions list; below we define the
# full complement of gridfunctions needed by this
# tutorial. This line of code enables us to re-run this
# tutorial without resetting the running Python kernel.
gri.glb_gridfcs_list = []
# Step 2b: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 3a: Declare the rank-1 indexed expression \partial_{i} u,
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dD = ixp.declarerank1("uu_dD")
# Step 3b: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD","sym01")
# Step 4: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
wavespeed = par.Cparameters("REAL",thismodule,"wavespeed", 1.0)
# Step 5: Define right-hand sides for the evolution.
uu_rhs = vv
# Step 5b: The right-hand side of the \partial_t v equation
# is given by:
# \hat{g}^{ij} \partial_i \partial_j u - \hat{\Gamma}^i \partial_i u.
# ^^^^^^^^^^^^ PART 1 ^^^^^^^^^^^^^^^^ ^^^^^^^^^^ PART 2 ^^^^^^^^^^^
vv_rhs = 0
for i in range(DIM):
# PART 2:
vv_rhs -= contractedGammahatU[i]*uu_dD[i]
for j in range(DIM):
# PART 1:
vv_rhs += rfm.ghatUU[i][j]*uu_dDD[i][j]
vv_rhs *= wavespeed*wavespeed
# Step 6: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=vv_rhs)]){
/*
* NRPy+ Finite Difference Code Generation, Step 1 of 2: Read from main memory and compute finite difference stencils:
*/
/*
* Original SymPy expressions:
* "[const double uu_dD0 = invdx0*(-2*uu_i0m1_i1_i2/3 + uu_i0m2_i1_i2/12 + 2*uu_i0p1_i1_i2/3 - uu_i0p2_i1_i2/12),
* const double uu_dD1 = invdx1*(-2*uu_i0_i1m1_i2/3 + uu_i0_i1m2_i2/12 + 2*uu_i0_i1p1_i2/3 - uu_i0_i1p2_i2/12),
* const double uu_dDD00 = invdx0**2*(-5*uu/2 + 4*uu_i0m1_i1_i2/3 - uu_i0m2_i1_i2/12 + 4*uu_i0p1_i1_i2/3 - uu_i0p2_i1_i2/12),
* const double uu_dDD11 = invdx1**2*(-5*uu/2 + 4*uu_i0_i1m1_i2/3 - uu_i0_i1m2_i2/12 + 4*uu_i0_i1p1_i2/3 - uu_i0_i1p2_i2/12),
* const double uu_dDD22 = invdx2**2*(-5*uu/2 + 4*uu_i0_i1_i2m1/3 - uu_i0_i1_i2m2/12 + 4*uu_i0_i1_i2p1/3 - uu_i0_i1_i2p2/12)]"
*/
const double uu_i0_i1_i2m2 = in_gfs[IDX4(UUGF, i0,i1,i2-2)];
const double uu_i0_i1_i2m1 = in_gfs[IDX4(UUGF, i0,i1,i2-1)];
const double uu_i0_i1m2_i2 = in_gfs[IDX4(UUGF, i0,i1-2,i2)];
const double uu_i0_i1m1_i2 = in_gfs[IDX4(UUGF, i0,i1-1,i2)];
const double uu_i0m2_i1_i2 = in_gfs[IDX4(UUGF, i0-2,i1,i2)];
const double uu_i0m1_i1_i2 = in_gfs[IDX4(UUGF, i0-1,i1,i2)];
const double uu = in_gfs[IDX4(UUGF, i0,i1,i2)];
const double uu_i0p1_i1_i2 = in_gfs[IDX4(UUGF, i0+1,i1,i2)];
const double uu_i0p2_i1_i2 = in_gfs[IDX4(UUGF, i0+2,i1,i2)];
const double uu_i0_i1p1_i2 = in_gfs[IDX4(UUGF, i0,i1+1,i2)];
const double uu_i0_i1p2_i2 = in_gfs[IDX4(UUGF, i0,i1+2,i2)];
const double uu_i0_i1_i2p1 = in_gfs[IDX4(UUGF, i0,i1,i2+1)];
const double uu_i0_i1_i2p2 = in_gfs[IDX4(UUGF, i0,i1,i2+2)];
const double vv = in_gfs[IDX4(VVGF, i0,i1,i2)];
const double tmpFD0 = (1.0/12.0)*uu_i0m2_i1_i2;
const double tmpFD1 = -1.0/12.0*uu_i0p2_i1_i2;
const double tmpFD2 = (1.0/12.0)*uu_i0_i1m2_i2;
const double tmpFD3 = -1.0/12.0*uu_i0_i1p2_i2;
const double tmpFD4 = -5.0/2.0*uu;
const double uu_dD0 = invdx0*(tmpFD0 + tmpFD1 - 2.0/3.0*uu_i0m1_i1_i2 + (2.0/3.0)*uu_i0p1_i1_i2);
const double uu_dD1 = invdx1*(tmpFD2 + tmpFD3 - 2.0/3.0*uu_i0_i1m1_i2 + (2.0/3.0)*uu_i0_i1p1_i2);
const double uu_dDD00 = ((invdx0)*(invdx0))*(-tmpFD0 + tmpFD1 + tmpFD4 + (4.0/3.0)*uu_i0m1_i1_i2 + (4.0/3.0)*uu_i0p1_i1_i2);
const double uu_dDD11 = ((invdx1)*(invdx1))*(-tmpFD2 + tmpFD3 + tmpFD4 + (4.0/3.0)*uu_i0_i1m1_i2 + (4.0/3.0)*uu_i0_i1p1_i2);
const double uu_dDD22 = ((invdx2)*(invdx2))*(tmpFD4 + (4.0/3.0)*uu_i0_i1_i2m1 - 1.0/12.0*uu_i0_i1_i2m2 + (4.0/3.0)*uu_i0_i1_i2p1 - 1.0/12.0*uu_i0_i1_i2p2);
/*
* NRPy+ Finite Difference Code Generation, Step 2 of 2: Evaluate SymPy expressions and write to main memory:
*/
/*
* Original SymPy expressions:
* "[rhs_gfs[IDX4(UUGF, i0, i1, i2)] = vv,
* rhs_gfs[IDX4(VVGF, i0, i1, i2)] = wavespeed**2*(2*uu_dD0/xx0 + uu_dD1*sin(2*xx1)/(2*xx0**2*sin(xx1)**2) + uu_dDD00 + uu_dDD11/xx0**2 + uu_dDD22/(xx0**2*sin(xx1)**2))]"
*/
const double tmp0 = (1.0/((xx0)*(xx0)));
const double tmp1 = tmp0/((sin(xx1))*(sin(xx1)));
rhs_gfs[IDX4(UUGF, i0, i1, i2)] = vv;
rhs_gfs[IDX4(VVGF, i0, i1, i2)] = ((wavespeed)*(wavespeed))*(tmp0*uu_dDD11 + (1.0/2.0)*tmp1*uu_dD1*sin(2*xx1) + tmp1*uu_dDD22 + 2*uu_dD0/xx0 + uu_dDD00);
}
</code>
<a id='code_validation'></a>
# Step 3: Code Validation against `ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs` NRPy+ Module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the Curvilinear Scalar Wave equation (i.e., uu_rhs and vv_rhs) between
1. this tutorial and
2. the NRPy+ [ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs](../edit/ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py) module.
By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen._____no_output_____
<code>
# Step 7: We already have SymPy expressions for uu_rhs and vv_rhs in
# terms of other SymPy variables. Even if we reset the list
# of NRPy+ gridfunctions, these *SymPy* expressions for
# uu_rhs and vv_rhs *will remain unaffected*.
#
# Here, we will use the above-defined uu_rhs and vv_rhs to
# validate against the same expressions in the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear module,
# to ensure consistency between the tutorial and the
# module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 8: Call the ScalarWaveCurvilinear_RHSs() function from within the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py module,
# which should do exactly the same as in Steps 1-6 above.
import ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs as swcrhs
swcrhs.ScalarWaveCurvilinear_RHSs()
# Step 9: Consistency check between the tutorial notebook above
# and the ScalarWaveCurvilinear_RHSs() function from within the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py module.
print("Consistency check between ScalarWaveCurvilinear tutorial and NRPy+ module:")
print("uu_rhs - swcrhs.uu_rhs: "+str(sp.simplify(uu_rhs - swcrhs.uu_rhs))+"\t\t (should be zero)")
print("vv_rhs - swcrhs.vv_rhs: "+str(sp.simplify(vv_rhs - swcrhs.vv_rhs))+"\t\t (should be zero)")Consistency check between ScalarWaveCurvilinear tutorial and NRPy+ module:
uu_rhs - swcrhs.uu_rhs: 0 (should be zero)
vv_rhs - swcrhs.vv_rhs: 0 (should be zero)
</code>
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ScalarWaveCurvilinear.pdf](Tutorial-ScalarWaveCurvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)_____no_output_____
<code>
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ScalarWaveCurvilinear.ipynb
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!rm -f Tut*.out Tut*.aux Tut*.log[pandoc warning] Duplicate link reference `[comment]' "source" (line 23, column 1)
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
</code>
| {
"repository": "Steve-Hawk/nrpytutorial",
"path": "Tutorial-ScalarWaveCurvilinear.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 28655,
"hexsha": "d07001fd9722bf73a21ef6542e18561a127c7abc",
"max_line_length": 581,
"avg_line_length": 51.6306306306,
"alphanum_fraction": 0.5910312336
} |
# Notebook from junelsolis/ZeroCostDL4Mic
Path: Colab_notebooks/Deep-STORM_2D_ZeroCostDL4Mic.ipynb
# **Deep-STORM (2D)**
---
<font size = 4>Deep-STORM is a neural network capable of image reconstruction from high-density single-molecule localization microscopy (SMLM), first published in 2018 by [Nehme *et al.* in Optica](https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458). The architecture used here is a U-Net based network without skip connections. This network allows image reconstruction of 2D super-resolution images, in a supervised training manner. The network is trained using simulated high-density SMLM data for which the ground-truth is available. These simulations are obtained from random distribution of single molecules in a field-of-view and therefore do not imprint structural priors during training. The network output a super-resolution image with increased pixel density (typically upsampling factor of 8 in each dimension).
Deep-STORM has **two key advantages**:
- SMLM reconstruction at high density of emitters
- fast prediction (reconstruction) once the model is trained appropriately, compared to more common multi-emitter fitting processes.
---
<font size = 4>*Disclaimer*:
<font size = 4>This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.
<font size = 4>This notebook is based on the following paper:
<font size = 4>**Deep-STORM: super-resolution single-molecule microscopy by deep learning**, Optica (2018) by *Elias Nehme, Lucien E. Weiss, Tomer Michaeli, and Yoav Shechtman* (https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458)
<font size = 4>And source code found in: https://github.com/EliasNehme/Deep-STORM
<font size = 4>**Please also cite this original paper when using or developing this notebook.**_____no_output_____# **How to use this notebook?**
---
<font size = 4>Video describing how to use our notebooks are available on youtube:
- [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
- [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
---
###**Structure of a notebook**
<font size = 4>The notebook contains two types of cell:
<font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
<font size = 4>**Code cells** contain code, and the code can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.
---
###**Table of contents, Code snippets** and **Files**
<font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
<font size = 4>*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.
<font size = 4>*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.
<font size = 4>*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here.
<font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.
<font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
---
###**Making changes to the notebook**
<font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.
<font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment._____no_output_____#**0. Before getting started**
---
<font size = 4> Deep-STORM is able to train on simulated datasets of SMLM data (see https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458 for more info). Here, we provide a simulator that will generate a training dataset (section 3.1.b). A few parameters will allow you to match the simulation to your experimental data. Similarly to what is described in the paper, simulations obtained from ThunderSTORM can also be loaded here (section 3.1.a).
---
<font size = 4>**Important note**
<font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
<font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
<font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
---_____no_output_____# **1. Install Deep-STORM and dependencies**
---
_____no_output_____
<code>
Notebook_version = '1.13'
Network = 'Deep-STORM'
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
#@markdown ##Install Deep-STORM and dependencies
# %% Model definition + helper functions
!pip install fpdf
# Import keras modules and libraries
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Activation, UpSampling2D, Convolution2D, MaxPooling2D, BatchNormalization, Layer
from tensorflow.keras.callbacks import Callback
from tensorflow.keras import backend as K
from tensorflow.keras import optimizers, losses
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import ReduceLROnPlateau
from skimage.transform import warp
from skimage.transform import SimilarityTransform
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from scipy.signal import fftconvolve
# Import common libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import h5py
import scipy.io as sio
from os.path import abspath
from sklearn.model_selection import train_test_split
from skimage import io
import time
import os
import shutil
import csv
from PIL import Image
from PIL.TiffTags import TAGS
from scipy.ndimage import gaussian_filter
import math
from astropy.visualization import simple_norm
from sys import getsizeof
from fpdf import FPDF, HTMLMixin
from pip._internal.operations.freeze import freeze
import subprocess
from datetime import datetime
# For sliders and dropdown menu, progress bar
from ipywidgets import interact
import ipywidgets as widgets
from tqdm import tqdm
# For Multi-threading in simulation
from numba import njit, prange
# define a function that projects and rescales an image to the range [0,1]
def project_01(im):
im = np.squeeze(im)
min_val = im.min()
max_val = im.max()
return (im - min_val)/(max_val - min_val)
# normalize image given mean and std
def normalize_im(im, dmean, dstd):
im = np.squeeze(im)
im_norm = np.zeros(im.shape,dtype=np.float32)
im_norm = (im - dmean)/dstd
return im_norm
# Define the loss history recorder
class LossHistory(Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Define a matlab like gaussian 2D filter
def matlab_style_gauss2D(shape=(7,7),sigma=1):
"""
2D gaussian filter - should give the same result as:
MATLAB's fspecial('gaussian',[shape],[sigma])
"""
m,n = [(ss-1.)/2. for ss in shape]
y,x = np.ogrid[-m:m+1,-n:n+1]
h = np.exp( -(x*x + y*y) / (2.*sigma*sigma) )
h.astype(dtype=K.floatx())
h[ h < np.finfo(h.dtype).eps*h.max() ] = 0
sumh = h.sum()
if sumh != 0:
h /= sumh
h = h*2.0
h = h.astype('float32')
return h
# Expand the filter dimensions
psf_heatmap = matlab_style_gauss2D(shape = (7,7),sigma=1)
gfilter = tf.reshape(psf_heatmap, [7, 7, 1, 1])
# Combined MSE + L1 loss
def L1L2loss(input_shape):
def bump_mse(heatmap_true, spikes_pred):
# generate the heatmap corresponding to the predicted spikes
heatmap_pred = K.conv2d(spikes_pred, gfilter, strides=(1, 1), padding='same')
# heatmaps MSE
loss_heatmaps = losses.mean_squared_error(heatmap_true,heatmap_pred)
# l1 on the predicted spikes
loss_spikes = losses.mean_absolute_error(spikes_pred,tf.zeros(input_shape))
return loss_heatmaps + loss_spikes
return bump_mse
# Define the concatenated conv2, batch normalization, and relu block
def conv_bn_relu(nb_filter, rk, ck, name):
def f(input):
conv = Convolution2D(nb_filter, kernel_size=(rk, ck), strides=(1,1),\
padding="same", use_bias=False,\
kernel_initializer="Orthogonal",name='conv-'+name)(input)
conv_norm = BatchNormalization(name='BN-'+name)(conv)
conv_norm_relu = Activation(activation = "relu",name='Relu-'+name)(conv_norm)
return conv_norm_relu
return f
# Define the model architechture
def CNN(input,names):
Features1 = conv_bn_relu(32,3,3,names+'F1')(input)
pool1 = MaxPooling2D(pool_size=(2,2),name=names+'Pool1')(Features1)
Features2 = conv_bn_relu(64,3,3,names+'F2')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool2')(Features2)
Features3 = conv_bn_relu(128,3,3,names+'F3')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool3')(Features3)
Features4 = conv_bn_relu(512,3,3,names+'F4')(pool3)
up5 = UpSampling2D(size=(2, 2),name=names+'Upsample1')(Features4)
Features5 = conv_bn_relu(128,3,3,names+'F5')(up5)
up6 = UpSampling2D(size=(2, 2),name=names+'Upsample2')(Features5)
Features6 = conv_bn_relu(64,3,3,names+'F6')(up6)
up7 = UpSampling2D(size=(2, 2),name=names+'Upsample3')(Features6)
Features7 = conv_bn_relu(32,3,3,names+'F7')(up7)
return Features7
# Define the Model building for an arbitrary input size
def buildModel(input_dim, initial_learning_rate = 0.001):
input_ = Input (shape = (input_dim))
act_ = CNN (input_,'CNN')
density_pred = Convolution2D(1, kernel_size=(1, 1), strides=(1, 1), padding="same",\
activation="linear", use_bias = False,\
kernel_initializer="Orthogonal",name='Prediction')(act_)
model = Model (inputs= input_, outputs=density_pred)
opt = optimizers.Adam(lr = initial_learning_rate)
model.compile(optimizer=opt, loss = L1L2loss(input_dim))
return model
# define a function that trains a model for a given data SNR and density
def train_model(patches, heatmaps, modelPath, epochs, steps_per_epoch, batch_size, upsampling_factor=8, validation_split = 0.3, initial_learning_rate = 0.001, pretrained_model_path = '', L2_weighting_factor = 100):
"""
This function trains a CNN model on the desired training set, given the
upsampled training images and labels generated in MATLAB.
# Inputs
# TO UPDATE ----------
# Outputs
function saves the weights of the trained model to a hdf5, and the
normalization factors to a mat file. These will be loaded later for testing
the model in test_model.
"""
# for reproducibility
np.random.seed(123)
X_train, X_test, y_train, y_test = train_test_split(patches, heatmaps, test_size = validation_split, random_state=42)
print('Number of training examples: %d' % X_train.shape[0])
print('Number of validation examples: %d' % X_test.shape[0])
# Setting type
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')
#===================== Training set normalization ==========================
# normalize training images to be in the range [0,1] and calculate the
# training set mean and std
mean_train = np.zeros(X_train.shape[0],dtype=np.float32)
std_train = np.zeros(X_train.shape[0], dtype=np.float32)
for i in range(X_train.shape[0]):
X_train[i, :, :] = project_01(X_train[i, :, :])
mean_train[i] = X_train[i, :, :].mean()
std_train[i] = X_train[i, :, :].std()
# resulting normalized training images
mean_val_train = mean_train.mean()
std_val_train = std_train.mean()
X_train_norm = np.zeros(X_train.shape, dtype=np.float32)
for i in range(X_train.shape[0]):
X_train_norm[i, :, :] = normalize_im(X_train[i, :, :], mean_val_train, std_val_train)
# patch size
psize = X_train_norm.shape[1]
# Reshaping
X_train_norm = X_train_norm.reshape(X_train.shape[0], psize, psize, 1)
# ===================== Test set normalization ==========================
# normalize test images to be in the range [0,1] and calculate the test set
# mean and std
mean_test = np.zeros(X_test.shape[0],dtype=np.float32)
std_test = np.zeros(X_test.shape[0], dtype=np.float32)
for i in range(X_test.shape[0]):
X_test[i, :, :] = project_01(X_test[i, :, :])
mean_test[i] = X_test[i, :, :].mean()
std_test[i] = X_test[i, :, :].std()
# resulting normalized test images
mean_val_test = mean_test.mean()
std_val_test = std_test.mean()
X_test_norm = np.zeros(X_test.shape, dtype=np.float32)
for i in range(X_test.shape[0]):
X_test_norm[i, :, :] = normalize_im(X_test[i, :, :], mean_val_test, std_val_test)
# Reshaping
X_test_norm = X_test_norm.reshape(X_test.shape[0], psize, psize, 1)
# Reshaping labels
Y_train = y_train.reshape(y_train.shape[0], psize, psize, 1)
Y_test = y_test.reshape(y_test.shape[0], psize, psize, 1)
# Save datasets to a matfile to open later in matlab
mdict = {"mean_test": mean_val_test, "std_test": std_val_test, "upsampling_factor": upsampling_factor, "Normalization factor": L2_weighting_factor}
sio.savemat(os.path.join(modelPath,"model_metadata.mat"), mdict)
# Set the dimensions ordering according to tensorflow consensous
# K.set_image_dim_ordering('tf')
K.set_image_data_format('channels_last')
# Save the model weights after each epoch if the validation loss decreased
checkpointer = ModelCheckpoint(filepath=os.path.join(modelPath,"weights_best.hdf5"), verbose=1,
save_best_only=True)
# Change learning when loss reaches a plataeu
change_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.00005)
# Model building and compilation
model = buildModel((psize, psize, 1), initial_learning_rate = initial_learning_rate)
model.summary()
# Load pretrained model
if not pretrained_model_path:
print('Using random initial model weights.')
else:
print('Loading model weights from '+pretrained_model_path)
model.load_weights(pretrained_model_path)
# Create an image data generator for real time data augmentation
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0., # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0., # randomly shift images horizontally (fraction of total width)
height_shift_range=0., # randomly shift images vertically (fraction of total height)
zoom_range=0.,
shear_range=0.,
horizontal_flip=False, # randomly flip images
vertical_flip=False, # randomly flip images
fill_mode='constant',
data_format=K.image_data_format())
# Fit the image generator on the training data
datagen.fit(X_train_norm)
# loss history recorder
history = LossHistory()
# Inform user training begun
print('-------------------------------')
print('Training model...')
# Fit model on the batches generated by datagen.flow()
train_history = model.fit_generator(datagen.flow(X_train_norm, Y_train, batch_size=batch_size),
steps_per_epoch=steps_per_epoch, epochs=epochs, verbose=1,
validation_data=(X_test_norm, Y_test),
callbacks=[history, checkpointer, change_lr])
# Inform user training ended
print('-------------------------------')
print('Training Complete!')
# Save the last model
model.save(os.path.join(modelPath, 'weights_last.hdf5'))
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(train_history.history)
if os.path.exists(os.path.join(modelPath,"Quality Control")):
shutil.rmtree(os.path.join(modelPath,"Quality Control"))
os.makedirs(os.path.join(modelPath,"Quality Control"))
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = os.path.join(modelPath,"Quality Control/training_evaluation.csv")
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss','learning rate'])
for i in range(len(train_history.history['loss'])):
writer.writerow([train_history.history['loss'][i], train_history.history['val_loss'][i], train_history.history['lr'][i]])
return
# Normalization functions from Martin Weigert used in CARE
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
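# Illustrative, self-contained sketch of the percentile normalization defined above
# (throwaway _demo_* names, added for illustration only): with clip=True the output
# is bounded to [0, 1], with intensities at or below the 3rd percentile mapped towards
# 0 and at or above the 99.8th percentile mapped towards 1.
_demo_img = np.random.poisson(lam=50, size=(32, 32)).astype(np.float32)
_demo_norm = normalize(_demo_img, pmin=3, pmax=99.8, clip=True)
assert 0.0 <= _demo_norm.min() <= _demo_norm.max() <= 1.0
del _demo_img, _demo_norm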
# Multi-threaded Erf-based image construction
@njit(parallel=True)
def FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
erfImage = np.zeros((w, h))
for ij in prange(w*h):
j = int(ij/w)
i = ij - j*w
for (xc, yc, photon, sigma) in zip(xc_array, yc_array, photon_array, sigma_array):
# Don't bother if the emitter has photons <= 0 or if Sigma <= 0
if (sigma > 0) and (photon > 0):
S = sigma*math.sqrt(2)
x = i*pixel_size - xc
y = j*pixel_size - yc
# Don't bother if the emitter is further than 4 sigma from the centre of the pixel
if (x+pixel_size/2)**2 + (y+pixel_size/2)**2 < 16*sigma**2:
ErfX = math.erf((x+pixel_size)/S) - math.erf(x/S)
ErfY = math.erf((y+pixel_size)/S) - math.erf(y/S)
erfImage[j][i] += 0.25*photon*ErfX*ErfY
return erfImage
@njit(parallel=True)
def FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
locImage = np.zeros((image_size[0],image_size[1]) )
n_locs = len(xc_array)
for e in prange(n_locs):
locImage[int(max(min(round(yc_array[e]/pixel_size),w-1),0))][int(max(min(round(xc_array[e]/pixel_size),h-1),0))] += 1
return locImage
def getPixelSizeTIFFmetadata(TIFFpath, display=False):
with Image.open(TIFFpath) as img:
meta_dict = {TAGS[key] : img.tag[key] for key in img.tag.keys()}
# TIFF tags
# https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
# https://www.awaresystems.be/imaging/tiff/tifftags/resolutionunit.html
ResolutionUnit = meta_dict['ResolutionUnit'][0] # unit of resolution
width = meta_dict['ImageWidth'][0]
height = meta_dict['ImageLength'][0]
xResolution = meta_dict['XResolution'][0] # number of pixels / ResolutionUnit
if len(xResolution) == 1:
xResolution = xResolution[0]
elif len(xResolution) == 2:
xResolution = xResolution[0]/xResolution[1]
else:
print('Image resolution not defined.')
xResolution = 1
if ResolutionUnit == 2:
# Units given are in inches
pixel_size = 0.025*1e9/xResolution
elif ResolutionUnit == 3:
# Units given are in cm
pixel_size = 0.01*1e9/xResolution
else:
# ResolutionUnit is therefore 1
print('Resolution unit not defined. Assuming: um')
pixel_size = 1e3/xResolution
if display:
print('Pixel size obtained from metadata: '+str(pixel_size)+' nm')
print('Image size: '+str(width)+'x'+str(height))
return (pixel_size, width, height)
def saveAsTIF(path, filename, array, pixel_size):
"""
Image saving using PIL to save as .tif format
# Input
path - path where it will be saved
filename - name of the file to save (no extension)
array - numpy array containing the data in the required format
pixel_size - physical size of pixels in nanometers (identical for x and y)
"""
# print('Data type: '+str(array.dtype))
if (array.dtype == np.uint16):
mode = 'I;16'
elif (array.dtype == np.uint32):
mode = 'I'
else:
mode = 'F'
# Rounding the pixel size to the nearest number that divides exactly 1cm.
# Resolution needs to be a rational number --> see TIFF format
# pixel_size = 10000/(round(10000/pixel_size))
if len(array.shape) == 2:
im = Image.fromarray(array)
im.save(os.path.join(path, filename+'.tif'),
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
elif len(array.shape) == 3:
imlist = []
for frame in array:
imlist.append(Image.fromarray(frame))
imlist[0].save(os.path.join(path, filename+'.tif'), save_all=True,
append_images=imlist[1:],
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
return
class Maximafinder(Layer):
def __init__(self, thresh, neighborhood_size, use_local_avg, **kwargs):
super(Maximafinder, self).__init__(**kwargs)
self.thresh = tf.constant(thresh, dtype=tf.float32)
self.nhood = neighborhood_size
self.use_local_avg = use_local_avg
def build(self, input_shape):
if self.use_local_avg is True:
self.kernel_x = tf.reshape(tf.constant([[-1,0,1],[-1,0,1],[-1,0,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_y = tf.reshape(tf.constant([[-1,-1,-1],[0,0,0],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_sum = tf.reshape(tf.constant([[1,1,1],[1,1,1],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
def call(self, inputs):
# local maxima positions
max_pool_image = MaxPooling2D(pool_size=(self.nhood,self.nhood), strides=(1,1), padding='same')(inputs)
cond = tf.math.greater(max_pool_image, self.thresh) & tf.math.equal(max_pool_image, inputs)
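# cond marks pixels that exceed the threshold and equal the maximum of their
# neighbourhood, i.e. local maxima of the predicted density map.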
indices = tf.where(cond)
bind, xind, yind = indices[:, 0], indices[:, 2], indices[:, 1]
confidence = tf.gather_nd(inputs, indices)
# local CoG estimator
if self.use_local_avg:
x_image = K.conv2d(inputs, self.kernel_x, padding='same')
y_image = K.conv2d(inputs, self.kernel_y, padding='same')
sum_image = K.conv2d(inputs, self.kernel_sum, padding='same')
confidence = tf.cast(tf.gather_nd(sum_image, indices), dtype=tf.float32)
x_local = tf.math.divide(tf.gather_nd(x_image, indices),tf.gather_nd(sum_image, indices))
y_local = tf.math.divide(tf.gather_nd(y_image, indices),tf.gather_nd(sum_image, indices))
xind = tf.cast(xind, dtype=tf.float32) + tf.cast(x_local, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32) + tf.cast(y_local, dtype=tf.float32)
else:
xind = tf.cast(xind, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32)
return bind, xind, yind, confidence
def get_config(self):
# Implement get_config to enable serialization. This is optional.
base_config = super(Maximafinder, self).get_config()
config = {}
return dict(list(base_config.items()) + list(config.items()))
# ------------------------------- Prediction with postprocessing function-------------------------------
def batchFramePredictionLocalization(dataPath, filename, modelPath, savePath, batch_size=1, thresh=0.1, neighborhood_size=3, use_local_avg = False, pixel_size = None):
"""
This function tests a trained model on the desired test set, given the
tiff stack of test images, learned weights, and normalization factors.
# Inputs
dataPath - the path to the folder containing the tiff stack(s) to run prediction on
filename - the name of the file to process
modelPath - the path to the folder containing the weights file and the mean and standard deviation file generated in train_model
savePath - the path to the folder where to save the prediction
batch_size - the number of frames to predict on for each iteration
thresh - threshold percentage from the maximum of the Gaussian scaling
neighborhood_size - the size of the neighborhood for local maxima finding
use_local_avg - Boolean, whether to perform local averaging of the localizations or not
"""
# load mean and std
matfile = sio.loadmat(os.path.join(modelPath,'model_metadata.mat'))
test_mean = np.array(matfile['mean_test'])
test_std = np.array(matfile['std_test'])
upsampling_factor = np.array(matfile['upsampling_factor'])
upsampling_factor = upsampling_factor.item() # convert to scalar
L2_weighting_factor = np.array(matfile['Normalization factor'])
L2_weighting_factor = L2_weighting_factor.item() # convert to scalar
# Read in the raw file
Images = io.imread(os.path.join(dataPath, filename))
if pixel_size is None:
pixel_size, _, _ = getPixelSizeTIFFmetadata(os.path.join(dataPath, filename), display=True)
pixel_size_hr = pixel_size/upsampling_factor
# get dataset dimensions
(nFrames, M, N) = Images.shape
print('Input image is '+str(N)+'x'+str(M)+' with '+str(nFrames)+' frames.')
# Build the model for a bigger image
model = buildModel((upsampling_factor*M, upsampling_factor*N, 1))
# Load the trained weights
model.load_weights(os.path.join(modelPath,'weights_best.hdf5'))
# add a post-processing module
max_layer = Maximafinder(thresh*L2_weighting_factor, neighborhood_size, use_local_avg)
# Initialise the results: lists will be used to collect all the localizations
frame_number_list, x_nm_list, y_nm_list, confidence_au_list = [], [], [], []
# Initialise the results
Prediction = np.zeros((M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
Widefield = np.zeros((M, N), dtype=np.float32)
# run model in batches
n_batches = math.ceil(nFrames/batch_size)
for b in tqdm(range(n_batches)):
nF = min(batch_size, nFrames - b*batch_size)
Images_norm = np.zeros((nF, M, N),dtype=np.float32)
Images_upsampled = np.zeros((nF, M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
# Upsampling using a simple nearest neighbor interp and calculating - MULTI-THREAD this?
for f in range(nF):
Images_norm[f,:,:] = project_01(Images[b*batch_size+f,:,:])
Images_norm[f,:,:] = normalize_im(Images_norm[f,:,:], test_mean, test_std)
Images_upsampled[f,:,:] = np.kron(Images_norm[f,:,:], np.ones((upsampling_factor,upsampling_factor)))
Widefield += Images[b*batch_size+f,:,:]
# Reshaping
Images_upsampled = np.expand_dims(Images_upsampled,axis=3)
# Run prediction and local maxima finding
predicted_density = model.predict_on_batch(Images_upsampled)
predicted_density[predicted_density < 0] = 0
Prediction += predicted_density.sum(axis = 3).sum(axis = 0)
bind, xind, yind, confidence = max_layer(predicted_density)
# normalizing the confidence by the L2_weighting_factor
confidence /= L2_weighting_factor
# turn indices to nms and append to the results
xind, yind = xind*pixel_size_hr, yind*pixel_size_hr
frmind = (bind.numpy() + b*batch_size + 1).tolist()
xind = xind.numpy().tolist()
yind = yind.numpy().tolist()
confidence = confidence.numpy().tolist()
frame_number_list += frmind
x_nm_list += xind
y_nm_list += yind
confidence_au_list += confidence
# Open and create the csv file that will contain all the localizations
if use_local_avg:
ext = '_avg'
else:
ext = '_max'
with open(os.path.join(savePath, 'Localizations_' + os.path.splitext(filename)[0] + ext + '.csv'), "w", newline='') as file:
writer = csv.writer(file)
writer.writerow(['frame', 'x [nm]', 'y [nm]', 'confidence [a.u]'])
locs = list(zip(frame_number_list, x_nm_list, y_nm_list, confidence_au_list))
writer.writerows(locs)
# Save the prediction and widefield image
Widefield = np.kron(Widefield, np.ones((upsampling_factor,upsampling_factor)))
Widefield = np.float32(Widefield)
# io.imsave(os.path.join(savePath, 'Predicted_'+os.path.splitext(filename)[0]+'.tif'), Prediction)
# io.imsave(os.path.join(savePath, 'Widefield_'+os.path.splitext(filename)[0]+'.tif'), Widefield)
saveAsTIF(savePath, 'Predicted_'+os.path.splitext(filename)[0], Prediction, pixel_size_hr)
saveAsTIF(savePath, 'Widefield_'+os.path.splitext(filename)[0], Widefield, pixel_size_hr)
return
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
NORMAL = '\033[0m' # white (normal)
def list_files(directory, extension):
return (f for f in os.listdir(directory) if f.endswith('.' + extension))
# @njit(parallel=True)
def subPixelMaxLocalization(array, method = 'CoM', patch_size = 3):
xMaxInd, yMaxInd = np.unravel_index(array.argmax(), array.shape, order='C')
centralPatch = array[(xMaxInd-patch_size):(xMaxInd+patch_size+1),(yMaxInd-patch_size):(yMaxInd+patch_size+1)] # patch centred on the maximum
if (method == 'MAX'):
x0 = xMaxInd
y0 = yMaxInd
elif (method == 'CoM'):
x0 = 0
y0 = 0
S = 0
for xy in range(patch_size*patch_size):
y = math.floor(xy/patch_size)
x = xy - y*patch_size
x0 += x*array[x,y]
y0 += y*array[x,y]
S += array[x,y] # accumulate the total intensity used to normalise the centre of mass
x0 = x0/S - patch_size/2 + xMaxInd
y0 = y0/S - patch_size/2 + yMaxInd
elif (method == 'Radiality'):
# Not implemented yet
x0 = xMaxInd
y0 = yMaxInd
return (x0, y0)
@njit(parallel=True)
def correctDriftLocalization(xc_array, yc_array, frames, xDrift, yDrift):
n_locs = xc_array.shape[0]
xc_array_Corr = np.empty(n_locs)
yc_array_Corr = np.empty(n_locs)
for loc in prange(n_locs):
xc_array_Corr[loc] = xc_array[loc] - xDrift[frames[loc]]
yc_array_Corr[loc] = yc_array[loc] - yDrift[frames[loc]]
return (xc_array_Corr, yc_array_Corr)
print('--------------------------------')
print('DeepSTORM installation complete.')
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
# Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
# if Notebook_version == list(Latest_notebook_version.columns):
# print("This notebook is up-to-date.")
# if not Notebook_version == list(Latest_notebook_version.columns):
# print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
def pdf_export(trained = False, raw_data = False, pretrained_model = False):
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
#model_name = 'little_CARE_test'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hours)+ "hour(s) "+str(minutes)+"min(s) "+str(round(seconds))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and method:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
if raw_data == True:
shape = (M,N)
else:
shape = (int(FOV_size/pixel_size),int(FOV_size/pixel_size))
#dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. The model was retrained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(180, 5, txt = text, align='L')
pdf.ln(1)
pdf.set_font('')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if raw_data==False:
simul_text = 'The training dataset was created in the notebook using the following simulation settings:'
pdf.cell(200, 5, txt=simul_text, align='L')
pdf.ln(1)
html = """
<table width=60% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Setting</th>
<th width = 50% align="left">Simulated Value</th>
</tr>
<tr>
<td width = 50%>FOV_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>pixel_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>ADC_per_photon_conversion</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>ReadOutNoise_ADC</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>ADC_offset</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>emitter_density</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>emitter_density_std</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>number_of_frames</td>
<td width = 50%>{7}</td>
</tr>
<tr>
<td width = 50%>sigma</td>
<td width = 50%>{8}</td>
</tr>
<tr>
<td width = 50%>sigma_std</td>
<td width = 50%>{9}</td>
</tr>
<tr>
<td width = 50%>n_photons</td>
<td width = 50%>{10}</td>
</tr>
<tr>
<td width = 50%>n_photons_std</td>
<td width = 50%>{11}</td>
</tr>
</table>
""".format(FOV_size, pixel_size, ADC_per_photon_conversion, ReadOutNoise_ADC, ADC_offset, emitter_density, emitter_density_std, number_of_frames, sigma, sigma_std, n_photons, n_photons_std)
pdf.write_html(html)
else:
simul_text = 'The training dataset was simulated using ThunderSTORM and loaded into the notebook.'
pdf.multi_cell(190, 5, txt=simul_text, align='L')
pdf.set_font("Arial", size = 11, style='B')
#pdf.ln(1)
#pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'ImageData_path', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = ImageData_path, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'LocalizationData_path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = LocalizationData_path, align = 'L')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'pixel_size:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = str(pixel_size), align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
# if Use_Default_Advanced_Parameters:
# pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used to generate patches:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Patch Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>upsampling_factor</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>num_patches_per_frame</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>min_number_of_emitters_per_patch</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>max_num_patches</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>gaussian_sigma</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>Automatic_normalization</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>L2_weighting_factor</td>
<td width = 50%>{7}</td>
</tr>
</table>
""".format(str(patch_size)+'x'+str(patch_size), upsampling_factor, num_patches_per_frame, min_number_of_emitters_per_patch, max_num_patches, gaussian_sigma, Automatic_normalization, L2_weighting_factor)
pdf.write_html(html)
pdf.ln(3)
pdf.set_font('Arial', size=10)
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Training Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{4}</td>
</tr>
</table>
""".format(number_of_epochs,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
pdf.ln(1)
# pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training Images', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_DeepSTORM2D.png').shape
pdf.image('/content/TrainingDataExample_DeepSTORM2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
# if Use_Data_augmentation:
# ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
# pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+'_training_report.pdf')
print('------------------------------')
print('PDF report exported in '+model_path+'/'+model_name+'/')
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'Deep-STORM'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+os.path.basename(QC_model_path)+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Loss curves', ln=1, align='L')
pdf.ln(1)
if os.path.exists(savePath+'/lossCurvePlots.png'):
exp_size = io.imread(savePath+'/lossCurvePlots.png').shape
pdf.image(savePath+'/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(savePath+'/QC_example_data.png').shape
pdf.image(savePath+'/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(savePath+'/'+os.path.basename(QC_model_path)+'_QC_metrics.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
print('------------------------------')
print('QC PDF report exported as '+savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)_____no_output_____
</code>
# **2. Complete the Colab session**
---_____no_output_____
## **2.1. Check for GPU access**
---
By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
<font size = 4>Go to **Runtime -> Change the Runtime type**
<font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this notebook is written)*
<font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
_____no_output_____
<code>
#@markdown ##Run this cell to check if you have GPU access
# %tensorflow_version 1.x
import tensorflow as tf
# if tf.__version__ != '2.2.0':
# !pip install tensorflow==2.2.0
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime?')
print('If the runtime settings are correct, then Google did not allocate a GPU to your session.')
print('Expect slow performance. To access a GPU, try reconnecting later.')
else:
print('You have GPU access')
!nvidia-smi
# from tensorflow.python.client import device_lib
# device_lib.list_local_devices()
# print the tensorflow version
print('Tensorflow version is ' + str(tf.__version__))
_____no_output_____
</code>
## **2.2. Mount your Google Drive**
---
<font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
<font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the authorization code, paste it into the cell and press Enter. This will give Colab access to the data on your drive.
<font size = 4> Once this is done, your data are available in the **Files** tab on the top left of the notebook._____no_output_____
<code>
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in to your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')_____no_output_____
</code>
# **3. Generate patches for training**
---
For Deep-STORM the training data can be obtained in two ways:
* Simulated using ThunderSTORM or another simulation tool and loaded here (**using Section 3.1.a**)
* Directly simulated in this notebook (**using Section 3.1.b**)
_____no_output_____## **3.1.a Load training data**
---
Here you can load your simulated data along with its corresponding localization file.
* The `pixel_size` is defined in nanometers (nm). _____no_output_____
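<font size = 4> A minimal, hedged sketch of the expected localization table is shown below (the column names match those used later in section 3.2; the values are purely illustrative):
<code>
# Hedged example (illustrative values only): the localization file is expected to contain
# one row per detected emitter, with at least a 'frame' column and x/y positions in nm.
import pandas as pd
example_locs = pd.DataFrame({
    "frame": [1, 1, 2],
    "x [nm]": [1250.0, 4380.5, 910.2],
    "y [nm]": [3300.0, 120.7, 5105.9],
})
example_locs.index += 1   # 1-based index, as in ThunderSTORM exports
print(example_locs)_____no_output_____
</code>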
<code>
#@markdown ##Load raw data
load_raw_data = True
# Get user input
ImageData_path = "" #@param {type:"string"}
LocalizationData_path = "" #@param {type: "string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size,_,_ = getPixelSizeTIFFmetadata(ImageData_path, True)
# load the tiff data
Images = io.imread(ImageData_path)
# get dataset dimensions
if len(Images.shape) == 3:
(number_of_frames, M, N) = Images.shape
elif len(Images.shape) == 2:
(M, N) = Images.shape
number_of_frames = 1
print('Loaded images: '+str(M)+'x'+str(N)+' with '+str(number_of_frames)+' frames')
# Interactive display of the stack
def scroll_in_time(frame):
f=plt.figure(figsize=(6,6))
plt.imshow(Images[frame-1], interpolation='nearest', cmap = 'gray')
plt.title('Training source at frame = ' + str(frame))
plt.axis('off');
if number_of_frames > 1:
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
else:
f=plt.figure(figsize=(6,6))
plt.imshow(Images, interpolation='nearest', cmap = 'gray')
plt.title('Training source')
plt.axis('off');
# Load the localization file and display the first
LocData = pd.read_csv(LocalizationData_path, index_col=0)
LocData.tail()
_____no_output_____
</code>
## **3.1.b Simulate training data**
---
This simulation tool allows you to generate SMLM data of randomly distributed emitters in a field-of-view.
The assumptions are as follows:
* Gaussian Point Spread Function (PSF) with standard deviation defined by `sigma`. The nominal value of `sigma` can be evaluated using `sigma = 0.21 x Lambda / NA` (from [Zhang *et al.*, Applied Optics 2007](https://doi.org/10.1364/AO.46.001819)); a worked example is shown after this list.
* Each emitter will emit `n_photons` per frame and generate the equivalent Poisson noise.
* The camera will contribute Gaussian noise to the signal with a standard deviation defined by `ReadOutNoise_ADC` (in ADC counts).
* The `emitter_density` is defined as the number of emitters / um^2 on any given frame. Variability in the emitter density can be applied by adjusting `emitter_density_std`, the standard deviation of the normal distribution from which the density is drawn for each individual frame.
* The `n_photons` and `sigma` can additionally include some Gaussian variability by setting `n_photons_std` and `sigma_std`.
Important note:
- All dimensions are in nanometers (e.g. `FOV_size` = 6400 represents a field of view of 6.4 um x 6.4 um).
_____no_output_____
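<font size = 4> As a quick, hedged sanity check (illustrative numbers only, not used by the simulation itself), the nominal PSF sigma and the expected number of emitters per frame can be evaluated as follows:
<code>
# Hedged worked example (the wavelength and NA below are assumptions for illustration)
wavelength_demo = 660            # emission wavelength in nm
NA_demo = 1.49                   # numerical aperture of the objective
sigma_nominal = 0.21 * wavelength_demo / NA_demo
print('Nominal PSF sigma: ' + str(round(sigma_nominal, 1)) + ' nm')

FOV_size_demo = 6400             # field of view in nm (6.4 um x 6.4 um)
emitter_density_demo = 6         # emitters / um^2
n_molecules_demo = emitter_density_demo * FOV_size_demo * FOV_size_demo / 10**6
print('Expected emitters per frame: ' + str(round(n_molecules_demo, 1)))_____no_output_____
</code>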
<code>
load_raw_data = False
# ---------------------------- User input ----------------------------
#@markdown Run the simulation
#@markdown ---
#@markdown Camera settings:
FOV_size = 6400#@param {type:"number"}
pixel_size = 100#@param {type:"number"}
ADC_per_photon_conversion = 1 #@param {type:"number"}
ReadOutNoise_ADC = 4.5#@param {type:"number"}
ADC_offset = 50#@param {type:"number"}
#@markdown Acquisition settings:
emitter_density = 6#@param {type:"number"}
emitter_density_std = 0#@param {type:"number"}
number_of_frames = 20#@param {type:"integer"}
sigma = 110 #@param {type:"number"}
sigma_std = 5 #@param {type:"number"}
# NA = 1.1 #@param {type:"number"}
# wavelength = 800#@param {type:"number"}
# wavelength_std = 150#@param {type:"number"}
n_photons = 2250#@param {type:"number"}
n_photons_std = 250#@param {type:"number"}
# ---------------------------- Variable initialisation ----------------------------
# Start the clock to measure how long it takes
start = time.time()
print('-----------------------------------------------------------')
n_molecules = emitter_density*FOV_size*FOV_size/10**6
n_molecules_std = emitter_density_std*FOV_size*FOV_size/10**6
print('Number of molecules / FOV: '+str(round(n_molecules,2))+' +/- '+str((round(n_molecules_std,2))))
# sigma = 0.21*wavelength/NA
# sigma_std = 0.21*wavelength_std/NA
# print('Gaussian PSF sigma: '+str(round(sigma,2))+' +/- '+str(round(sigma_std,2))+' nm')
M = N = round(FOV_size/pixel_size)
FOV_size = M*pixel_size
print('Final image size: '+str(M)+'x'+str(M)+' ('+str(round(FOV_size/1000, 3))+' um x '+str(round(FOV_size/1000,3))+' um)')
np.random.seed(1)
display_upsampling = 8 # used to display the loc map here
NoiseFreeImages = np.zeros((number_of_frames, M, M))
locImage = np.zeros((number_of_frames, display_upsampling*M, display_upsampling*N))
frames = []
all_xloc = []
all_yloc = []
all_photons = []
all_sigmas = []
# ---------------------------- Main simulation loop ----------------------------
print('-----------------------------------------------------------')
for f in tqdm(range(number_of_frames)):
# Define the coordinates of emitters by randomly distributing them across the FOV
n_mol = int(max(round(np.random.normal(n_molecules, n_molecules_std, size=1)[0]), 0))
x_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
y_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_mol)
sigma_array = np.random.normal(sigma, sigma_std, size=n_mol)
# x_c = np.linspace(0,3000,5)
# y_c = np.linspace(0,3000,5)
all_xloc += x_c.tolist()
all_yloc += y_c.tolist()
frames += ((f+1)*np.ones(x_c.shape[0])).tolist()
all_photons += photon_array.tolist()
all_sigmas += sigma_array.tolist()
locImage[f] = FromLoc2Image_SimpleHistogram(x_c, y_c, image_size = (N*display_upsampling, M*display_upsampling), pixel_size = pixel_size/display_upsampling)
# # Get the approximated locations according to the grid pixel size
# Chr_emitters = [int(max(min(round(display_upsampling*x_c[i]/pixel_size),N*display_upsampling-1),0)) for i in range(len(x_c))]
# Rhr_emitters = [int(max(min(round(display_upsampling*y_c[i]/pixel_size),M*display_upsampling-1),0)) for i in range(len(y_c))]
# # Build Localization image
# for (r,c) in zip(Rhr_emitters, Chr_emitters):
# locImage[f][r][c] += 1
NoiseFreeImages[f] = FromLoc2Image_Erf(x_c, y_c, photon_array, sigma_array, image_size = (M,M), pixel_size = pixel_size)
# ---------------------------- Create DataFrame for localization file ----------------------------
# Table with localization info as dataframe output
LocData = pd.DataFrame()
LocData["frame"] = frames
LocData["x [nm]"] = all_xloc
LocData["y [nm]"] = all_yloc
LocData["Photon #"] = all_photons
LocData["Sigma [nm]"] = all_sigmas
LocData.index += 1 # set indices to start at 1 and not 0 (same as ThunderSTORM)
# ---------------------------- Estimation of SNR ----------------------------
n_frames_for_SNR = 100
M_SNR = 10
x_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
y_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_frames_for_SNR)
sigma_array = np.random.normal(sigma, sigma_std, size=n_frames_for_SNR)
SNR = np.zeros(n_frames_for_SNR)
for i in range(n_frames_for_SNR):
SingleEmitterImage = FromLoc2Image_Erf(np.array([x_c[i]]), np.array([y_c[i]]), np.array([photon_array[i]]), np.array([sigma_array[i]]), (M_SNR, M_SNR), pixel_size)
Signal_photon = np.max(SingleEmitterImage)
Noise_photon = math.sqrt((ReadOutNoise_ADC/ADC_per_photon_conversion)**2 + Signal_photon)
SNR[i] = Signal_photon/Noise_photon
print('SNR: '+str(round(np.mean(SNR),2))+' +/- '+str(round(np.std(SNR),2)))
# ---------------------------- ----------------------------
# Table with info
simParameters = pd.DataFrame()
simParameters["FOV size (nm)"] = [FOV_size]
simParameters["Pixel size (nm)"] = [pixel_size]
simParameters["ADC/photon"] = [ADC_per_photon_conversion]
simParameters["Read-out noise (ADC)"] = [ReadOutNoise_ADC]
simParameters["Constant offset (ADC)"] = [ADC_offset]
simParameters["Emitter density (emitters/um^2)"] = [emitter_density]
simParameters["STD of emitter density (emitters/um^2)"] = [emitter_density_std]
simParameters["Number of frames"] = [number_of_frames]
# simParameters["NA"] = [NA]
# simParameters["Wavelength (nm)"] = [wavelength]
# simParameters["STD of wavelength (nm)"] = [wavelength_std]
simParameters["Sigma (nm))"] = [sigma]
simParameters["STD of Sigma (nm))"] = [sigma_std]
simParameters["Number of photons"] = [n_photons]
simParameters["STD of number of photons"] = [n_photons_std]
simParameters["SNR"] = [np.mean(SNR)]
simParameters["STD of SNR"] = [np.std(SNR)]
# ---------------------------- Finish simulation ----------------------------
# Calculating the noisy image
Images = ADC_per_photon_conversion * np.random.poisson(NoiseFreeImages) + ReadOutNoise_ADC * np.random.normal(size = (number_of_frames, M, N)) + ADC_offset
Images[Images <= 0] = 0
# Convert to 16-bit or 32-bit integers
if Images.max() < (2**16-1):
Images = Images.astype(np.uint16)
else:
Images = Images.astype(np.uint32)
# ---------------------------- Display ----------------------------
# Displaying the time elapsed for simulation
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds,1),"sec(s)")
# Interactively display the results using Widgets
def scroll_in_time(frame):
f = plt.figure(figsize=(18,6))
plt.subplot(1,3,1)
plt.imshow(locImage[frame-1], interpolation='bilinear', vmin = 0, vmax=0.1)
plt.title('Localization image')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(NoiseFreeImages[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noise-free simulation')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(Images[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noisy simulation')
plt.axis('off');
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
# Display the tail of the dataframe with localizations
LocData.tail()
_____no_output_____#@markdown ---
#@markdown ##Play this cell to save the simulated stack
#@markdown Please select a path to the folder where the simulated data will be saved. It is not necessary to save the data to run the training, but keeping the simulated dataset for your own records can be useful to check its validity.
Save_path = "" #@param {type:"string"}
if not os.path.exists(Save_path):
os.makedirs(Save_path)
print('Folder created.')
else:
print('Folder already exists: data will be overwritten.')
saveAsTIF(Save_path, 'SimulatedDataset', Images, pixel_size)
# io.imsave(os.path.join(Save_path, 'SimulatedDataset.tif'),Images)
LocData.to_csv(os.path.join(Save_path, 'SimulatedDataset.csv'))
simParameters.to_csv(os.path.join(Save_path, 'SimulatedParameters.csv'))
print('Training dataset saved.')_____no_output_____
</code>
## **3.2. Generate training patches**
---
Training patches need to be created from the training data generated above.
* The `patch_size` needs to give sufficient contextual information and for most cases a `patch_size` of 26 (corresponding to patches of 26x26 pixels) works fine. **DEFAULT: 26**
* The `upsampling_factor` defines the effective magnification of the final super-resolved image compared to the input image (this is called magnification in ThunderSTORM). This is used to generate the super-resolved patches as the target dataset. Using an `upsampling_factor` of 16 will require more memory, and it may be necessary to decrease the `patch_size` (to 16, for example). **DEFAULT: 8**
* The `num_patches_per_frame` defines the number of patches extracted from each frame generated in section 3.1. **DEFAULT: 500**
* The `min_number_of_emitters_per_patch` defines the minimum number of emitters that need to be present in a patch for it to be considered valid. An empty patch does not contain useful information for the network to learn from. **DEFAULT: 7**
* The `max_num_patches` defines the maximum number of patches to generate. Fewer may be generated depending on how many patches are rejected and how many frames are available. **DEFAULT: 10000**
* The `gaussian_sigma` defines the Gaussian standard deviation (in magnified pixels) applied to generate the super-resolved target image. **DEFAULT: 1**
* The `L2_weighting_factor` is a normalization factor used in the loss function. It helps balance the L2 loss between the background and the bright spikes: when using higher densities, this factor should be decreased, and vice versa. It can be automatically calculated using an empirical formula. A minimal sketch of how the target heat maps are built is shown after this list. **DEFAULT: 100**
_____no_output_____
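<font size = 4> For reference, the cell below builds each training target by placing the localizations as unit spikes on the upsampled grid and blurring them with a Gaussian of standard deviation `gaussian_sigma`, scaled by `L2_weighting_factor`. A minimal, hedged stand-alone sketch of that idea (toy sizes and illustrative values only):
<code>
# Hedged stand-alone sketch of the target generation (toy sizes, illustrative values)
import numpy as np
from scipy.ndimage import gaussian_filter

grid_size_demo = 64                                          # toy high-resolution grid
spikes_demo = np.zeros((grid_size_demo, grid_size_demo), dtype=np.float32)
spikes_demo[12, 40] = 1                                      # one 'emitter' on the upsampled grid
spikes_demo[50, 9] = 1                                       # a second one
heatmap_demo = 100 * gaussian_filter(spikes_demo, sigma=1)   # L2_weighting_factor = 100, gaussian_sigma = 1
print('Toy heat map peak: ' + str(round(float(heatmap_demo.max()), 2)))_____no_output_____
</code>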
<code>
#@markdown ## **Provide patch parameters**
# -------------------- User input --------------------
patch_size = 26 #@param {type:"integer"}
upsampling_factor = 8 #@param ["4", "8", "16"] {type:"raw"}
num_patches_per_frame = 500#@param {type:"integer"}
min_number_of_emitters_per_patch = 7#@param {type:"integer"}
max_num_patches = 10000#@param {type:"integer"}
gaussian_sigma = 1#@param {type:"integer"}
#@markdown Estimate the optimal normalization factor automatically?
Automatic_normalization = True #@param {type:"boolean"}
#@markdown Otherwise, it will use the following value:
L2_weighting_factor = 100 #@param {type:"number"}
# -------------------- Prepare variables --------------------
# Start the clock to measure how long it takes
start = time.time()
# Initialize some parameters
pixel_size_hr = pixel_size/upsampling_factor # in nm
n_patches = min(number_of_frames*num_patches_per_frame, max_num_patches)
patch_size = patch_size*upsampling_factor
# Dimensions of the high-res grid
Mhr = upsampling_factor*M # in pixels
Nhr = upsampling_factor*N # in pixels
# Initialize the training patches and labels
patches = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
spikes = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
heatmaps = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
# Run over all frames and construct the training examples
k = 1 # current patch count
skip_counter = 0 # number of dataset skipped due to low density
id_start = 0 # id position in LocData for current frame
print('Generating '+str(n_patches)+' patches of '+str(patch_size)+'x'+str(patch_size))
n_locs = len(LocData.index)
print('Total number of localizations: '+str(n_locs))
density = n_locs/(M*N*number_of_frames*(0.001*pixel_size)**2)
print('Density: '+str(round(density,2))+' locs/um^2')
n_locs_per_patch = patch_size**2*density
if Automatic_normalization:
# This empirical formula attempts to balance the L2 loss between the background and the bright spikes
# A value of 100 was originally chosen to balance L2 for a 2.6x2.6 um^2 patch (0.1 um pixel size) at a density of 3 locs/um^2 (hence the 20.28), with upsampling_factor = 8
L2_weighting_factor = 100/math.sqrt(min(n_locs_per_patch, min_number_of_emitters_per_patch)*8**2/(upsampling_factor**2*20.28))
print('Normalization factor: '+str(round(L2_weighting_factor,2)))
# -------------------- Patch generation loop --------------------
print('-----------------------------------------------------------')
for (f, thisFrame) in enumerate(tqdm(Images)):
# Upsample the frame
upsampledFrame = np.kron(thisFrame, np.ones((upsampling_factor,upsampling_factor)))
# Read all the provided high-resolution locations for current frame
DataFrame = LocData[LocData['frame'] == f+1].copy()
# Get the approximated locations according to the high-res grid pixel size
Chr_emitters = [int(max(min(round(DataFrame['x [nm]'][i]/pixel_size_hr),Nhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
Rhr_emitters = [int(max(min(round(DataFrame['y [nm]'][i]/pixel_size_hr),Mhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
id_start += len(DataFrame.index)
# Build Localization image
LocImage = np.zeros((Mhr,Nhr))
LocImage[(Rhr_emitters, Chr_emitters)] = 1
# Here, there's a choice between the original Gaussian (classification approach) and using the erf function
HeatMapImage = L2_weighting_factor*gaussian_filter(LocImage, gaussian_sigma)
# HeatMapImage = L2_weighting_factor*FromLoc2Image_MultiThreaded(np.array(list(DataFrame['x [nm]'])), np.array(list(DataFrame['y [nm]'])),
# np.ones(len(DataFrame.index)), pixel_size_hr*gaussian_sigma*np.ones(len(DataFrame.index)),
# Mhr, pixel_size_hr)
# Generate random position for the top left corner of the patch
xc = np.random.randint(0, Mhr-patch_size, size=num_patches_per_frame)
yc = np.random.randint(0, Nhr-patch_size, size=num_patches_per_frame)
for c in range(len(xc)):
if LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size].sum() < min_number_of_emitters_per_patch:
skip_counter += 1
continue
else:
# Limit the maximal number of training examples to max_num_patches
if k > max_num_patches:
break
else:
# Assign the patches to the right part of the images
patches[k-1] = upsampledFrame[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
spikes[k-1] = LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
heatmaps[k-1] = HeatMapImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
k += 1 # increment current patch count
# Remove the empty data
patches = patches[:k-1]
spikes = spikes[:k-1]
heatmaps = heatmaps[:k-1]
n_patches = k-1
# -------------------- Failsafe --------------------
# Check if the size of the training set is smaller than 5k to notify user to simulate more images using ThunderSTORM
if ((k-1) < 5000):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: Training set size is below 5K - Consider simulating more images in ThunderSTORM. !!'+bcolors.NORMAL)
# -------------------- Displays --------------------
print('Number of patches skipped due to low density: '+str(skip_counter))
# dataSize = int((getsizeof(patches)+getsizeof(heatmaps)+getsizeof(spikes))/(1024*1024)) #rounded in MB
# print('Size of patches: '+str(dataSize)+' MB')
print(str(n_patches)+' patches were generated.')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display patches interactively with a slider
def scroll_patches(patch):
f = plt.figure(figsize=(16,6))
plt.subplot(1,3,1)
plt.imshow(patches[patch-1], interpolation='nearest', cmap='gray')
plt.title('Raw data (patch #'+str(patch)+')')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(heatmaps[patch-1], interpolation='nearest')
plt.title('Heat map')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(spikes[patch-1], interpolation='nearest')
plt.title('Localization map')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_DeepSTORM2D.png',bbox_inches='tight',pad_inches=0)
interact(scroll_patches, patch=widgets.IntSlider(min=1, max=patches.shape[0], step=1, value=0, continuous_update=False));
_____no_output_____
</code>
# **4. Train the network**
---_____no_output_____## **4.1. Select your paths and parameters**
---
<font size = 4>**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).
<font size = 4>**`model_name`:** Use only my_model-style names, not my-model (use "_", not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.
<font size = 5>**Training parameters**
<font size = 4>**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for ~100 epochs. Evaluate the performance after training (see 5). **Default value: 80**
<font size =4>**`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16**
<font size = 4>**`number_of_steps`:** Define the number of training steps per epoch. **If this value is set to 0**, by default this parameter is calculated so that each patch is seen at least once per epoch (a worked example is given below). **Default value: Number of patches / batch_size**
<font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 30**
<font size = 4>**`initial_learning_rate`:** This parameter represents the initial value to be used as learning rate in the optimizer. **Default value: 0.001**_____no_output_____
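<font size = 4> For example (illustrative numbers): with 10,000 patches, `percentage_validation` = 30 and `batch_size` = 16, the default is `number_of_steps` = int(0.7 x 10000 / 16) = 437.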
<code>
#@markdown ###Path to training images and parameters
model_path = "" #@param {type: "string"}
model_name = "" #@param {type: "string"}
number_of_epochs = 80#@param {type:"integer"}
batch_size = 16#@param {type:"integer"}
number_of_steps = 0#@param {type:"integer"}
percentage_validation = 30 #@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
percentage_validation /= 100
if number_of_steps == 0:
number_of_steps = int((1-percentage_validation)*n_patches/batch_size)
print('Number of steps: '+str(number_of_steps))
# Pretrained model path initialised here so next cell does not need to be run
h5_file_path = ''
Use_pretrained_model = False
if not ('patches' in locals()):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: No patches were found in memory currently. !!'+bcolors.NORMAL)
Save_path = os.path.join(model_path, model_name)
if os.path.exists(Save_path):
print(bcolors.WARNING+'The model folder already exists and will be overwritten.'+bcolors.NORMAL)
print('-----------------------------')
print('Training parameters set.')
_____no_output_____
</code>
## **4.2. Using weights from a pre-trained model as initial weights**
---
<font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Deep-STORM 2D model**.
<font size = 4> This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.
<font size = 4> In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used. _____no_output_____
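<font size = 4> For models trained with this notebook, the learning rate is read from `Quality Control/training_evaluation.csv` inside the model folder (columns `loss`, `val_loss`, `learning rate`). A minimal, hedged sketch of that lookup (the folder path below is a placeholder; the cell that follows performs this automatically):
<code>
# Hedged sketch only; the model folder path is a placeholder for illustration.
import os
import pandas as pd
example_model_folder = "/content/gdrive/MyDrive/my_DeepSTORM_model"   # placeholder path
log_path = os.path.join(example_model_folder, "Quality Control", "training_evaluation.csv")
if os.path.exists(log_path):
    log = pd.read_csv(log_path)                      # columns: loss, val_loss, learning rate
    print("Last learning rate: " + str(log["learning rate"].iloc[-1]))
    best = log[log["val_loss"] == log["val_loss"].min()]
    print("Learning rate of best validation loss: " + str(best["learning rate"].iloc[-1]))
else:
    print("No training_evaluation.csv found at the placeholder path.")_____no_output_____
</code>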
<code>
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the chosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrained model, then Use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.hdf5 pretrained model does not exist'+bcolors.NORMAL)
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead.'+bcolors.NORMAL)
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+bcolors.NORMAL)
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print('No pretrained network will be used.')
h5_file_path = ''
_____no_output_____
</code>
## **4.4. Start Training**
---
<font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
<font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches.
<font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder from Google Drive as all data can be erased at the next training if using the same folder._____no_output_____
<code>
#@markdown ##Start training
# Start the clock to measure how long it takes
start = time.time()
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#here we check that no model with the same name already exists; if so, it is deleted
if os.path.exists(Save_path):
shutil.rmtree(Save_path)
# Create the model folder!
os.makedirs(Save_path)
# Export pdf summary
pdf_export(raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
# Let's go !
train_model(patches, heatmaps, Save_path,
steps_per_epoch=number_of_steps, epochs=number_of_epochs, batch_size=batch_size,
upsampling_factor = upsampling_factor,
validation_split = percentage_validation,
initial_learning_rate = initial_learning_rate,
pretrained_model_path = h5_file_path,
L2_weighting_factor = L2_weighting_factor)
# # Show info about the GPU memory usage
# !nvidia-smi
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# export pdf after training to update the existing document
pdf_export(trained = True, raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
_____no_output_____
</code>
# **5. Evaluate your model**
---
<font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model.
<font size = 4>**We highly recommend performing quality control on all newly trained models.**_____no_output_____
<code>
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
#@markdown #####During training, the model files are automatically saved inside a folder named after the parameter `model_name` (see section 4.1). Provide the name of this folder as `QC_model_path` .
QC_model_path = "" #@param {type:"string"}
if (Use_the_current_trained_model):
QC_model_path = os.path.join(model_path, model_name)
if os.path.exists(QC_model_path):
print("The "+os.path.basename(QC_model_path)+" model will be evaluated")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
_____no_output_____
</code>
## **5.1. Inspection of the loss function**
---
<font size = 4>First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*
<font size = 4>**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.
<font size = 4>**Validation loss** describes the same error value, but between the model's prediction on a validation image and its target.
<font size = 4>During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.
<font size = 4>Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point, no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased._____no_output_____
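<font size = 4>As a quick numerical complement to the plots below, the same `training_evaluation.csv` file can be scanned for the epoch with the lowest validation loss and for a crude overfitting warning. The cell that follows is only a sketch: it assumes that `QC_model_path` was set in the cell above and that the CSV contains a `val_loss` column, and the 20% tolerance used for the warning is an arbitrary choice._____no_output_____
<code>
import os
import pandas as pd

# Assumed location, following this notebook's folder convention
csv_path = os.path.join(QC_model_path, 'Quality Control', 'training_evaluation.csv')
history = pd.read_csv(csv_path)

# Epoch with the lowest validation loss (epochs counted from 1 here)
best_epoch = int(history['val_loss'].idxmin()) + 1
print('Lowest validation loss: '+str(round(history['val_loss'].min(), 5))+' at epoch '+str(best_epoch))

# Crude overfitting heuristic (the 20% tolerance is arbitrary): the final validation loss
# sits well above its minimum although training continued
if history['val_loss'].iloc[-1] > 1.2 * history['val_loss'].min():
    print('The final validation loss is well above its minimum: the model may be overfitting.')_____no_output_____
</code>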
<code>
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
lossDataFromCSV = []
vallossDataFromCSV = []
with open(os.path.join(QC_model_path,'Quality Control/training_evaluation.csv'),'r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
if row:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(os.path.join(QC_model_path,'Quality Control/lossCurvePlots.png'), bbox_inches='tight', pad_inches=0)
plt.show()
_____no_output_____
</code>
## **5.2. Error mapping and quality metrics estimation**
---
<font size = 4>This section will display SSIM maps and RSE maps, as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the "QC_image_folder", using the corresponding localization data contained in "QC_loc_folder"!
<font size = 4>**1. The SSIM (structural similarity) map**
<font size = 4>The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info).
<font size=4>**mSSIM** is the SSIM value calculated across the entire window of both images.
<font size=4>**The output below shows the SSIM maps with the mSSIM**
<font size = 4>**2. The RSE (Root Squared Error) map**
<font size = 4>This is a display of the root of the squared difference between the normalized predicted and target or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).
<font size =4>**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.
<font size = 4>**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.
<font size=4>**The output below shows the RSE maps with the NRMSE and PSNR values.**
_____no_output_____
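<font size = 4>Before running the full QC loop below, these metrics can be illustrated on a small, hypothetical image pair. The cell that follows is only a sketch (the random "ground truth" image and the noise level are made up): it computes the SSIM map and mSSIM, the RSE map, NRMSE defined as in this notebook (the square root of the mean of the RSE map) and PSNR with scikit-image, which are the quantities reported per image by the QC cell below._____no_output_____
<code>
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Hypothetical toy data: a random "ground truth" image and a noisy copy of it
rng = np.random.default_rng(0)
gt = rng.random((128, 128)).astype(np.float32)
noisy = np.clip(gt + 0.1*rng.standard_normal(gt.shape), 0, 1).astype(np.float32)

# mSSIM and the full SSIM map (1 means perfect structural similarity)
mSSIM, SSIM_map = structural_similarity(gt, noisy, data_range=1.0, full=True)

# Root Squared Error map and the summary metrics used below
RSE_map = np.sqrt(np.square(gt - noisy))
NRMSE = np.sqrt(np.mean(RSE_map))
PSNR = peak_signal_noise_ratio(gt, noisy, data_range=1.0)

print('mSSIM: '+str(round(float(mSSIM),3))+', NRMSE: '+str(round(float(NRMSE),3))+', PSNR: '+str(round(float(PSNR),1))+' dB')_____no_output_____
</code>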
<code>
# ------------------------ User input ------------------------
#@markdown ##Choose the folders that contain your Quality Control dataset
QC_image_folder = "" #@param{type:"string"}
QC_loc_folder = "" #@param{type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size_INPUT = None
else:
pixel_size_INPUT = pixel_size
# ------------------------ QC analysis loop over provided dataset ------------------------
savePath = os.path.join(QC_model_path, 'Quality Control')
# Open and create the csv file that will contain all the QC metrics
with open(os.path.join(savePath, os.path.basename(QC_model_path)+"_QC_metrics.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","WF v. GT mSSIM", "Prediction v. GT NRMSE","WF v. GT NRMSE", "Prediction v. GT PSNR", "WF v. GT PSNR"])
# These lists will be used to collect all the metrics values per slice
file_name_list = []
slice_number_list = []
mSSIM_GvP_list = []
mSSIM_GvWF_list = []
NRMSE_GvP_list = []
NRMSE_GvWF_list = []
PSNR_GvP_list = []
PSNR_GvWF_list = []
# Let's loop through the provided dataset in the QC folders
for (imageFilename, locFilename) in zip(list_files(QC_image_folder, 'tif'), list_files(QC_loc_folder, 'csv')):
print('--------------')
print(imageFilename)
print(locFilename)
# Get the prediction
batchFramePredictionLocalization(QC_image_folder, imageFilename, QC_model_path, savePath, pixel_size = pixel_size_INPUT)
# test_model(QC_image_folder, imageFilename, QC_model_path, savePath, display=False);
thisPrediction = io.imread(os.path.join(savePath, 'Predicted_'+imageFilename))
thisWidefield = io.imread(os.path.join(savePath, 'Widefield_'+imageFilename))
Mhr = thisPrediction.shape[0]
Nhr = thisPrediction.shape[1]
if pixel_size_INPUT == None:
pixel_size, N, M = getPixelSizeTIFFmetadata(os.path.join(QC_image_folder,imageFilename))
upsampling_factor = int(Mhr/M)
print('Upsampling factor: '+str(upsampling_factor))
pixel_size_hr = pixel_size/upsampling_factor # in nm
# Load the localization file and display the first
LocData = pd.read_csv(os.path.join(QC_loc_folder,locFilename), index_col=0)
x = np.array(list(LocData['x [nm]']))
y = np.array(list(LocData['y [nm]']))
locImage = FromLoc2Image_SimpleHistogram(x, y, image_size = (Mhr,Nhr), pixel_size = pixel_size_hr)
# Remove extension from filename
imageFilename_no_extension = os.path.splitext(imageFilename)[0]
# io.imsave(os.path.join(savePath, 'GT_image_'+imageFilename), locImage)
saveAsTIF(savePath, 'GT_image_'+imageFilename_no_extension, locImage, pixel_size_hr)
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm, test_prediction_norm = norm_minmse(locImage, thisPrediction, normalize_gt=True)
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm, test_wf_norm = norm_minmse(locImage, thisWidefield, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = structural_similarity(test_GT_norm, test_prediction_norm, data_range=1., full=True)
index_SSIM_GTvsWF, img_SSIM_GTvsWF = structural_similarity(test_GT_norm, test_wf_norm, data_range=1., full=True)
# Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
# io.imsave(os.path.join(savePath,'SSIM_GTvsPrediction_'+imageFilename),img_SSIM_GTvsPrediction_32bit)
saveAsTIF(savePath,'SSIM_GTvsPrediction_'+imageFilename_no_extension, img_SSIM_GTvsPrediction_32bit, pixel_size_hr)
img_SSIM_GTvsWF_32bit = np.float32(img_SSIM_GTvsWF)
# io.imsave(os.path.join(savePath,'SSIM_GTvsWF_'+imageFilename),img_SSIM_GTvsWF_32bit)
saveAsTIF(savePath,'SSIM_GTvsWF_'+imageFilename_no_extension, img_SSIM_GTvsWF_32bit, pixel_size_hr)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsWF = np.sqrt(np.square(test_GT_norm - test_wf_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
# io.imsave(os.path.join(savePath,'RSE_GTvsPrediction_'+imageFilename),img_RSE_GTvsPrediction_32bit)
saveAsTIF(savePath,'RSE_GTvsPrediction_'+imageFilename_no_extension, img_RSE_GTvsPrediction_32bit, pixel_size_hr)
img_RSE_GTvsWF_32bit = np.float32(img_RSE_GTvsWF)
# io.imsave(os.path.join(savePath,'RSE_GTvsWF_'+imageFilename),img_RSE_GTvsWF_32bit)
saveAsTIF(savePath,'RSE_GTvsWF_'+imageFilename_no_extension, img_RSE_GTvsWF_32bit, pixel_size_hr)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsWF = np.sqrt(np.mean(img_RSE_GTvsWF))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsWF = psnr(test_GT_norm,test_wf_norm,data_range=1.0)
writer.writerow([imageFilename,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsWF),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsWF),str(PSNR_GTvsPrediction), str(PSNR_GTvsWF)])
# Collect values to display in dataframe output
file_name_list.append(imageFilename)
mSSIM_GvP_list.append(index_SSIM_GTvsPrediction)
mSSIM_GvWF_list.append(index_SSIM_GTvsWF)
NRMSE_GvP_list.append(NRMSE_GTvsPrediction)
NRMSE_GvWF_list.append(NRMSE_GTvsWF)
PSNR_GvP_list.append(PSNR_GTvsPrediction)
PSNR_GvWF_list.append(PSNR_GTvsWF)
# Table with metrics as dataframe output
pdResults = pd.DataFrame(index = file_name_list)
pdResults["Prediction v. GT mSSIM"] = mSSIM_GvP_list
pdResults["Wide-field v. GT mSSIM"] = mSSIM_GvWF_list
pdResults["Prediction v. GT NRMSE"] = NRMSE_GvP_list
pdResults["Wide-field v. GT NRMSE"] = NRMSE_GvWF_list
pdResults["Prediction v. GT PSNR"] = PSNR_GvP_list
pdResults["Wide-field v. GT PSNR"] = PSNR_GvWF_list
# ------------------------ Display ------------------------
print('--------------------------------------------')
@interact
def show_QC_results(file = list_files(QC_image_folder, 'tif')):
plt.figure(figsize=(15,15))
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(savePath, 'GT_image_'+file))
plt.imshow(img_GT, norm = simple_norm(img_GT, percent = 99.5))
plt.title('Target',fontsize=15)
# Wide-field
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(savePath, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(savePath, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsWF = io.imread(os.path.join(savePath, 'SSIM_GTvsWF_'+file))
imSSIM_GTvsWF = plt.imshow(img_SSIM_GTvsWF, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsWF,fraction=0.046, pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Wide-field v. GT mSSIM"],3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsPrediction = io.imread(os.path.join(savePath, 'SSIM_GTvsPrediction_'+file))
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Prediction v. GT mSSIM"],3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsWF = io.imread(os.path.join(savePath, 'RSE_GTvsWF_'+file))
imRSE_GTvsWF = plt.imshow(img_RSE_GTvsWF, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsWF,fraction=0.046,pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Wide-field v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Wide-field v. GT PSNR"],3)),fontsize=14)
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsPrediction = io.imread(os.path.join(savePath, 'RSE_GTvsPrediction_'+file))
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Prediction v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Prediction v. GT PSNR"],3)),fontsize=14)
plt.savefig(QC_model_path+'/Quality Control/QC_example_data.png', bbox_inches='tight', pad_inches=0)
print('--------------------------------------------')
pdResults.head()
# Export pdf with summary of QC results
qc_pdf_export()_____no_output_____
</code>
# **6. Using the trained model**
---
<font size = 4>In this section, unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is used to process them, and the results are saved to your Google Drive._____no_output_____## **6.1 Generate image prediction and localizations from unseen dataset**
---
<font size = 4>The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).
<font size = 4>**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.
<font size = 4>**`Result_folder`:** This folder will contain the found localizations csv.
<font size = 4>**`batch_size`:** This parameter determines how many frames are processed by any single pass on the GPU. A higher `batch_size` will make the prediction faster but will use more GPU memory. If an OutOfMemory (OOM) error occurs, decrease the `batch_size`. **DEFAULT: 4**
<font size = 4>**`threshold`:** This parameter determines the threshold for local-maxima finding. The value is expected to reside in the range **[0,1]**. A higher `threshold` will result in fewer localizations. **DEFAULT: 0.1**
<font size = 4>**`neighborhood_size`:** This parameter determines the size of the neighborhood within which the prediction needs to be a local maximum, in recovered pixels (CCD pixel/upsampling_factor). A high `neighborhood_size` will make the prediction slower and potentially discard nearby localizations. **DEFAULT: 3**
<font size = 4>**`use_local_average`:** This parameter determines whether to locally average the prediction in a 3x3 neighborhood to get the final localizations. If set to **True**, it will make inference slightly slower depending on the size of the FOV. **DEFAULT: True**
_____no_output_____
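<font size = 4>To make the roles of `threshold` and `neighborhood_size` concrete, the cell below sketches a minimal local-maximum detector on a prediction map. It is purely illustrative and is not the notebook's own implementation (localization is handled internally by `batchFramePredictionLocalization`): the random demo image, the helper name and the assumption that `threshold` is applied directly to a [0,1]-normalised prediction are all made up for this example._____no_output_____
<code>
import numpy as np
from scipy.ndimage import maximum_filter

def sketch_local_maxima(pred, threshold=0.1, neighborhood_size=3):
    # A pixel is kept if it is above the threshold and is the maximum
    # of its (neighborhood_size x neighborhood_size) neighborhood
    neighborhood_max = maximum_filter(pred, size=neighborhood_size)
    mask = (pred == neighborhood_max) & (pred > threshold)
    ys, xs = np.nonzero(mask)
    return xs, ys

# Hypothetical example on a random prediction map normalised to [0,1]
rng = np.random.default_rng(1)
demo_pred = rng.random((64, 64))
xs, ys = sketch_local_maxima(demo_pred, threshold=0.9, neighborhood_size=5)
print('Found '+str(len(xs))+' candidate localizations')
# The notebook's own post-processing can additionally refine each maximum with a
# 3x3 centre-of-gravity (CoG) estimate when use_local_average is set to True_____no_output_____
</code>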
<code>
# ------------------------------- User input -------------------------------
#@markdown ### Data parameters
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value (in nm):
pixel_size = 100 #@param {type:"number"}
#@markdown ### Model parameters
#@markdown Do you want to use the model you just trained?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown Otherwise, please provide path to the model folder below
prediction_model_path = "" #@param {type:"string"}
#@markdown ### Prediction parameters
batch_size = 4#@param {type:"integer"}
#@markdown ### Post processing parameters
threshold = 0.1#@param {type:"number"}
neighborhood_size = 3#@param {type:"integer"}
#@markdown Do you want to locally average the model output with CoG estimator ?
use_local_average = True #@param {type:"boolean"}
if get_pixel_size_from_file:
pixel_size = None
if (Use_the_current_trained_model):
prediction_model_path = os.path.join(model_path, model_name)
if os.path.exists(prediction_model_path):
print("The "+os.path.basename(prediction_model_path)+" model will be used.")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
# inform user whether local averaging is being used
if use_local_average == True:
print('Using local averaging')
if not os.path.exists(Result_folder):
print('Result folder was created.')
os.makedirs(Result_folder)
# ------------------------------- Run predictions -------------------------------
start = time.time()
#%% This script tests the trained fully convolutional network based on the
# saved training weights, and normalization created using train_model.
if os.path.isdir(Data_folder):
for filename in list_files(Data_folder, 'tif'):
# run the testing/reconstruction process
print("------------------------------------")
print("Running prediction on: "+ filename)
batchFramePredictionLocalization(Data_folder, filename, prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
elif os.path.isfile(Data_folder):
batchFramePredictionLocalization(os.path.dirname(Data_folder), os.path.basename(Data_folder), prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------------------- Interactive display -------------------------------
print('--------------------------------------------------------------------')
print('---------------------------- Previews ------------------------------')
print('--------------------------------------------------------------------')
if os.path.isdir(Data_folder):
@interact
def show_QC_results(file = list_files(Data_folder, 'tif')):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
if os.path.isfile(Data_folder):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+os.path.basename(Data_folder)))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+os.path.basename(Data_folder)))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
_____no_output_____
</code>
## **6.2 Drift correction**
---
<font size = 4>The visualization above is the raw output of the network and is displayed at the `upsampling_factor` chosen during model training. The display is a preview without any drift correction applied. This section performs drift correction, using cross-correlation between time bins to estimate the drift.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the image reconstructions used for the Drift Correction estimation (in **nm**). A smaller pixel size will be more precise but will take longer to compute. **DEFAULT: 20**
<font size = 4>**`number_of_bins`:** This parameter defines how many temporal bins are used across the full dataset. All localizations in each bin are used to build an image. This image is used to find the drift with respect to the image obtained from the very first bin. A typical value would correspond to about 500 frames per bin. **DEFAULT: Total number of frames / 500**
<font size = 4>**`polynomial_fit_degree`:** The drift obtained for each temporal bin needs to be interpolated to every single frame. This is performed by a polynomial fit, the degree of which is defined here. **DEFAULT: 4**
<font size = 4> The drift-corrected localization data is automatically saved in the `save_path` folder._____no_output_____
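<font size = 4>The core of the drift estimation is a cross-correlation between the reconstruction of the first temporal bin and the reconstruction of each later bin. The cell below is a simplified, hypothetical illustration of that idea only (the real cell further down works on `FromLoc2Image_SimpleHistogram` reconstructions and refines the peak with `subPixelMaxLocalization`); the synthetic square and the helper name are made up._____no_output_____
<code>
import numpy as np
from scipy.signal import fftconvolve

def sketch_corr_peak(image_ref, image_block):
    # Cross-correlation via convolution with the 180-degree-rotated reference,
    # as in the drift-correction cell below; returns the (row, column) of the peak
    xc = fftconvolve(np.rot90(image_ref, k=2), image_block, mode='same')
    return np.array(np.unravel_index(np.argmax(xc), xc.shape), dtype=float)

# Hypothetical example: a small square "structure" drifting by (3, -2) pixels between two bins
ref = np.zeros((128, 128))
ref[60:66, 60:66] = 1.0
moved = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)

# As in the cell below, drift is measured relative to the first bin, so any constant
# offset of the correlation peak cancels out
drift = sketch_corr_peak(ref, moved) - sketch_corr_peak(ref, ref)
print('Estimated (y, x) drift in pixels: '+str(drift))  # expected close to [ 3. -2.]_____no_output_____
</code>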
<code>
# @markdown ##Data parameters
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100 #@param {type:"number"}
# @markdown ##Drift correction parameters
visualization_pixel_size = 20#@param {type:"number"}
number_of_bins = 50#@param {type:"integer"}
polynomial_fit_degree = 4#@param {type:"integer"}
# @markdown ##Saving parameters
save_path = '' #@param {type:"string"}
# Let's go !
start = time.time()
# Get info from the raw file if selected
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
# Read the localizations in
LocData = pd.read_csv(Loc_file_path)
# Calculate a few variables
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
n_locs = len(LocData.index)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(n_locs))
blocksize = math.ceil(nFrames/number_of_bins)
print('Number of frames per block: '+str(blocksize))
blockDataFrame = LocData[(LocData['frame'] < blocksize)].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
# Preparing the Reference image
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageRef = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
ImagesRef = np.rot90(ImageRef, k=2)
xDrift = np.zeros(number_of_bins)
yDrift = np.zeros(number_of_bins)
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
with open(os.path.join(save_path, filename_no_extension+"_DriftCorrectionData.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["Block #", "x-drift [nm]","y-drift [nm]"])
for b in tqdm(range(number_of_bins)):
blockDataFrame = LocData[(LocData['frame'] >= (b*blocksize)) & (LocData['frame'] < ((b+1)*blocksize))].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageBlock = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
XC = fftconvolve(ImagesRef, ImageBlock, mode = 'same')
yDrift[b], xDrift[b] = subPixelMaxLocalization(XC, method = 'CoM')
# saveAsTIF(save_path, 'ImageBlock'+str(b), ImageBlock, visualization_pixel_size)
# saveAsTIF(save_path, 'XCBlock'+str(b), XC, visualization_pixel_size)
writer.writerow([str(b), str((xDrift[b]-xDrift[0])*visualization_pixel_size), str((yDrift[b]-yDrift[0])*visualization_pixel_size)])
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
print('Fitting drift data...')
bin_number = np.arange(number_of_bins)*blocksize + blocksize/2
xDrift = (xDrift-xDrift[0])*visualization_pixel_size
yDrift = (yDrift-yDrift[0])*visualization_pixel_size
xDriftCoeff = np.polyfit(bin_number, xDrift, polynomial_fit_degree)
yDriftCoeff = np.polyfit(bin_number, yDrift, polynomial_fit_degree)
xDriftFit = np.poly1d(xDriftCoeff)
yDriftFit = np.poly1d(yDriftCoeff)
bins = np.arange(nFrames)
xDriftInterpolated = xDriftFit(bins)
yDriftInterpolated = yDriftFit(bins)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,10))
plt.plot(bin_number,xDrift, 'r+', label='x-drift')
plt.plot(bin_number,yDrift, 'b+', label='y-drift')
plt.plot(bins,xDriftInterpolated, 'r-', label='x-drift (fit)')
plt.plot(bins,yDriftInterpolated, 'b-', label='y-drift (fit)')
plt.title('Cross-correlation estimated drift')
plt.ylabel('Drift [nm]')
plt.xlabel('Bin number')
plt.legend();
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:", hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------ Actual drift correction -------------------
print('Correcting localization data...')
xc_array = LocData['x [nm]'].to_numpy(dtype=np.float32)
yc_array = LocData['y [nm]'].to_numpy(dtype=np.float32)
frames = LocData['frame'].to_numpy(dtype=np.int32)
xc_array_Corr, yc_array_Corr = correctDriftLocalization(xc_array, yc_array, frames, xDriftInterpolated, yDriftInterpolated)
ImageRaw = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
ImageCorr = FromLoc2Image_SimpleHistogram(xc_array_Corr, yc_array_Corr, image_size = image_size, pixel_size = visualization_pixel_size)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,7.5))
# Raw
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(ImageRaw, norm = simple_norm(ImageRaw, percent = 99.5))
plt.title('Raw', fontsize=15);
# Corrected
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(ImageCorr, norm = simple_norm(ImageCorr, percent = 99.5))
plt.title('Corrected',fontsize=15);
# ------------------ Table with info -------------------
driftCorrectedLocData = pd.DataFrame()
driftCorrectedLocData['frame'] = frames
driftCorrectedLocData['x [nm]'] = xc_array_Corr
driftCorrectedLocData['y [nm]'] = yc_array_Corr
driftCorrectedLocData['confidence [a.u]'] = LocData['confidence [a.u]']
driftCorrectedLocData.to_csv(os.path.join(save_path, filename_no_extension+'_DriftCorrected.csv'))
print('-------------------------------')
print('Corrected localizations saved.')
_____no_output_____
</code>
## **6.3 Visualization of the localizations**
---
<font size = 4>The visualization in section 6.1 is the raw output of the network and is displayed at the `upsampling_factor` chosen during model training. This section visualizes the result by plotting the localizations as a simple histogram.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the final image reconstruction (in **nm**). **DEFAULT: 10**
<font size = 4>**`visualization_mode`:** This parameter defines what visualization method is used to visualize the final image. NOTES: The Integrated Gaussian can be quite slow. **DEFAULT: Simple histogram.**
_____no_output_____
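<font size = 4>Rendering a localization table as a simple histogram just means binning the (x, y) coordinates onto a grid of `visualization_pixel_size` pixels and counting the localizations that fall in each pixel. The cell below is a self-contained sketch of that idea using `np.histogram2d`; the helper name and the random coordinates are made up, and it is only a stand-in for the notebook's own `FromLoc2Image_SimpleHistogram` function._____no_output_____
<code>
import numpy as np

def sketch_histogram_image(x_nm, y_nm, image_size, pixel_size_nm):
    # Bin localization coordinates (in nm) into an image of image_size = (rows, columns)
    n_rows, n_cols = image_size
    hist, _, _ = np.histogram2d(y_nm, x_nm,
                                bins=(n_rows, n_cols),
                                range=[[0, n_rows*pixel_size_nm], [0, n_cols*pixel_size_nm]])
    return hist

# Hypothetical example: 10000 random localizations in a 25.6 um x 25.6 um field,
# rendered with 10 nm visualization pixels
rng = np.random.default_rng(2)
x = rng.uniform(0, 25600, 10000)
y = rng.uniform(0, 25600, 10000)
locImage_demo = sketch_histogram_image(x, y, image_size=(2560, 2560), pixel_size_nm=10)
print(locImage_demo.shape, int(locImage_demo.sum()))_____no_output_____
</code>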
<code>
# @markdown ##Data parameters
Use_current_drift_corrected_localizations = True #@param {type:"boolean"}
# @markdown Otherwise provide a localization file path
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100#@param {type:"number"}
# @markdown ##Visualization parameters
visualization_pixel_size = 10#@param {type:"number"}
visualization_mode = "Simple histogram" #@param ["Simple histogram", "Integrated Gaussian (SLOW!)"]
if not Use_current_drift_corrected_localizations:
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
if Use_current_drift_corrected_localizations:
LocData = driftCorrectedLocData
else:
LocData = pd.read_csv(Loc_file_path)
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(len(LocData.index)))
xc_array = LocData['x [nm]'].to_numpy()
yc_array = LocData['y [nm]'].to_numpy()
if (visualization_mode == 'Simple histogram'):
locImage = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
elif (visualization_mode == 'Shifted histogram'):
print(bcolors.WARNING+'Method not implemented yet!'+bcolors.NORMAL)
locImage = np.zeros(image_size)
elif (visualization_mode == 'Integrated Gaussian (SLOW!)'):
photon_array = np.ones(xc_array.shape)
sigma_array = np.ones(xc_array.shape)
locImage = FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = image_size, pixel_size = visualization_pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display
plt.figure(figsize=(20,10))
plt.axis('off')
# plt.imshow(locImage, cmap='gray');
plt.imshow(locImage, norm = simple_norm(locImage, percent = 99.5));
LocData.head()
_____no_output_____# @markdown ---
# @markdown #Play this cell to save the visualization
# @markdown ####Please select a path to the folder where to save the visualization.
save_path = "" #@param {type:"string"}
if not os.path.exists(save_path):
os.makedirs(save_path)
print('Folder created.')
saveAsTIF(save_path, filename_no_extension+'_Visualization', locImage, visualization_pixel_size)
print('Image saved.')_____no_output_____
</code>
## **6.4. Download your predictions**
---
<font size = 4>**Store your data** and ALL its results elsewhere by downloading it from Google Drive and after that clean the original folder tree (datasets, results, trained model etc.) if you plan to train or use new networks. Please note that the notebook will otherwise **OVERWRITE** all files which have the same name._____no_output_____# **7. Version log**
---
<font size = 4>**v1.13**:
* The section 1 and 2 are now swapped for better export of *requirements.txt*.
* This version also now includes built-in version check and the version log that you're reading now.
---_____no_output_____
#**Thank you for using Deep-STORM 2D!**_____no_output_____
| {
"repository": "junelsolis/ZeroCostDL4Mic",
"path": "Colab_notebooks/Deep-STORM_2D_ZeroCostDL4Mic.ipynb",
"matched_keywords": [
"ImageJ",
"evolution"
],
"stars": 321,
"size": 137442,
"hexsha": "d0708a053abd71282b4191f1de69cceaa8dfd6b1",
"max_line_length": 137442,
"avg_line_length": 137442,
"alphanum_fraction": 0.6533956141
} |
# Notebook from mathemage/TheMulQuaBio
Path: notebooks/17-MulExplInter.ipynb
<code>
library(repr) ; options(repr.plot.res = 100, repr.plot.width=5, repr.plot.height= 5) # Change plot sizes (in cm) - this bit of code is only relevant if you are using a jupyter notebook - ignore otherwise_____no_output_____
</code>
<!--NAVIGATION-->
< [Multiple Explanatory Variables](16-MulExpl.ipynb) | [Main Contents](Index.ipynb) | [Model Simplification](18-ModelSimp.ipynb)>_____no_output_____# Linear Models: Multiple variables with interactions <span class="tocSkip">_____no_output_____<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Chapter-aims" data-toc-modified-id="Chapter-aims-1.1"><span class="toc-item-num">1.1 </span>Chapter aims</a></span></li><li><span><a href="#Formulae-with-interactions-in-R" data-toc-modified-id="Formulae-with-interactions-in-R-1.2"><span class="toc-item-num">1.2 </span>Formulae with interactions in R</a></span></li></ul></li><li><span><a href="#Model-1:-Mammalian-genome-size" data-toc-modified-id="Model-1:-Mammalian-genome-size-2"><span class="toc-item-num">2 </span>Model 1: Mammalian genome size</a></span></li><li><span><a href="#Model-2-(ANCOVA):-Body-Weight-in-Odonata" data-toc-modified-id="Model-2-(ANCOVA):-Body-Weight-in-Odonata-3"><span class="toc-item-num">3 </span>Model 2 (ANCOVA): Body Weight in Odonata</a></span></li></ul></div>_____no_output_____# Introduction
Here you will build on your skills in fitting linear models with multiple explanatory variables to data. You will learn about another commonly used Linear Model fitting technique: ANCOVA.
We will build two models in this chapter:
* **Model 1**: Is mammalian genome size predicted by interactions between trophic level and whether species are ground dwelling?
* **ANCOVA**: Is body size in Odonata predicted by interactions between genome size and taxonomic suborder?
So far, we have only looked at the independent effects of variables. For example, in the trophic level and ground dwelling model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), we only looked for specific differences for being an omnivore *or* being ground dwelling, not for being
specifically a *ground dwelling omnivore*. These independent effects of a variable are known as *main effects* and the effects of combinations of variables acting together are known as *interactions* — they describe how the variables *interact*.
## Chapter aims
The aims of this chapter are[$^{[1]}$](#fn1):
* Creating more complex Linear Models with multiple explanatory variables
* Including the effects of interactions between multiple variables in a linear model
* Plotting predictions from more complex (multiple explanatory variables) linear models
## Formulae with interactions in R
We've already seen a number of different model formulae in R. They all use this syntax:
`response variable ~ explanatory variable(s)`
But we are now going to see two extra pieces of syntax:
* `y ~ a + b + a:b`: The `a:b` means the interaction between `a` and `b` — do combinations of these variables lead to different outcomes?
* `y ~ a * b`: This a shorthand for the model above. The means fit `a` and `b` as main effects and their interaction `a:b`.
# Model 1: Mammalian genome size
$\star$ Make sure you have changed the working directory to `Code` in your stats coursework directory.
$\star$ Create a new blank script called 'Interactions.R' and add some introductory comments.
$\star$ Load the data:_____no_output_____
<code>
load('../data/mammals.Rdata')_____no_output_____
</code>
If `mammals.Rdata` is missing, just import the data again using `read.csv`. You will then have to add the log C Value column to the imported data frame again.
Let's refit the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), but including the interaction between trophic level and ground dwelling. We'll immediately check the model is appropriate:_____no_output_____
<code>
model <- lm(logCvalue ~ TrophicLevel * GroundDwelling, data= mammals)
par(mfrow=c(2,2), mar=c(3,3,1,1), mgp=c(2, 0.8,0))
plot(model) _____no_output_____
</code>
Now, examine the `anova` and `summary` outputs for the model:_____no_output_____
<code>
anova(model)_____no_output_____
</code>
Compared to the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), there is an extra line at the bottom. The top two are the same and show that trophic level and ground dwelling both have independent main effects. The extra line
shows that there is also an interaction between the two. It doesn't explain a huge amount of variation, about half as much as trophic level, but it is significant.
Again, we can calculate the $r^2$ for the model: $\frac{0.81 + 2.75 + 0.43}{0.81+2.75+0.43+12.77} = 0.238$
The model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb) without the interaction had an $r^2 = 0.212$ — our new
model explains 2.6% more of the variation in the data.
The summary table is as follows:_____no_output_____
<code>
summary(model)_____no_output_____
</code>
The lines in this output are:
1. The reference level (intercept) for non ground dwelling carnivores. (The reference level is decided just by the alphabetic order of the levels)
2. Two differences for being in different trophic levels.
3. One difference for being ground dwelling
4. Two new differences that give specific differences for ground dwelling herbivores and omnivores.
The first four lines are as in the model from the [ANOVA chapter](15-anova.ipynb), and would allow us to find the predicted values for each group *if the size of the differences did not vary between levels because of the interactions*. That is, this part of the model only includes a single difference between ground and non-ground species, which has to be the same for each trophic group because it ignores interactions between trophic level and the ground / non-ground identity of each species. The last two lines then give the estimated coefficients associated with the interaction terms, and allow the size of differences to vary
between levels because of the further effects of interactions.
The table below shows how these combine to give the predictions for each group combination, with those two new lines shown in red:
$\begin{array}{|r|r|r|}
\hline
& \textrm{Not ground} & \textrm{Ground} \\
\hline
\textrm{Carnivore} & 0.96 = 0.96 & 0.96+0.25=1.21 \\
\textrm{Herbivore} & 0.96 + 0.05 = 1.01 & 0.96+0.05+0.25{\color{red}+0.03}=1.29\\
\textrm{Omnivore} & 0.96 + 0.23 = 1.19 & 0.96+0.23+0.25{\color{red}-0.15}=1.29\\
\hline
\end{array}$
So why are there two new coefficients? For interactions between two factors, there are always $(n-1)\times(m-1)$ new coefficients, where $n$ and $m$ are the number of levels in the two factors (Ground dwelling or not: 2 levels and trophic level: 3 levels, in our current example). So in this model, $(3-1) \times (2-1) =2$. It is easier to understand why
graphically: the prediction for the white boxes below can be found by adding the main effects together but for the grey boxes we need to find specific differences and so there are $(n-1)\times(m-1)$ interaction coefficients to add.
<a id="fig:interactionsdiag"></a>
<figure>
<img src="./graphics/interactionsdiag.png" alt="interactionsdiag" style="width:50%">
<small>
<center>
<figcaption>
Figure 2
</figcaption>
</center>
</small>
</figure>
If we put this together, what is the model telling us?
* Herbivores have the same genome sizes as carnivores, but omnivores have larger genomes.
* Ground dwelling mammals have larger genomes.
These two findings suggest that ground dwelling omnivores should have extra big genomes. However, the interaction shows they are smaller than expected and are, in fact, similar to ground dwelling herbivores.
Note that although the interaction term in the `anova` output is significant, neither of the two coefficients in the `summary` has a $p<0.05$. There are two weak differences (one
very weak, one nearly significant) that together explain significant
variance in the data.
$\star$ Copy the code above into your script and run the model.
Make sure you understand the output!
Just to make sure the sums above are correct, we'll use the same code as
in [the first multiple explanatory variables chapter](16-MulExpl.ipynb) to get R to calculate predictions for us, similar to the way we did [before](16-MulExpl.ipynb):_____no_output_____
<code>
# a data frame of combinations of variables
gd <- rep(levels(mammals$GroundDwelling), times = 3)
print(gd)[1] "No" "Yes" "No" "Yes" "No" "Yes"
tl <- rep(levels(mammals$TrophicLevel), each = 2)
print(tl)[1] "Carnivore" "Carnivore" "Herbivore" "Herbivore" "Omnivore" "Omnivore"
# New data frame
predVals <- data.frame(GroundDwelling = gd, TrophicLevel = tl)
# predict using the new data frame
predVals$predict <- predict(model, newdata = predVals)
print(predVals) GroundDwelling TrophicLevel predict
1 No Carnivore 0.9589465
2 Yes Carnivore 1.2138170
3 No Herbivore 1.0124594
4 Yes Herbivore 1.2976624
5 No Omnivore 1.1917603
6 Yes Omnivore 1.2990165
</code>
$\star$ Include and run the code for generating these predictions in your script.
If we plot these data points onto the barplot from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), they now lie exactly on the mean values, because we've allowed for interactions. The triangle on this plot shows the predictions for ground dwelling omnivores from the main effects ($0.96 + 0.23 + 0.25 = 1.44$); the interaction of $-0.15$ pushes the prediction back down.
<a id="fig:predPlot"></a>
<figure>
<img src="./graphics/predPlot.svg" alt="predPlot" style="width:70%">
</figure>
_____no_output_____
# Model 2 (ANCOVA): Body Weight in Odonata
We'll go all the way back to the regression analyses from the [Regression chapter](14-regress.ipynb). Remember that we fitted two separate regression lines to the data for damselflies and dragonflies. We'll now use an interaction to fit these in a single model. This kind of linear model — with a mixture of continuous variables and factors — is often called an *analysis of covariance*, or ANCOVA. That is, ANCOVA is a type of linear model that blends ANOVA and regression. ANCOVA evaluates whether population means of a dependent variable are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates.
*Thus, ANCOVA is a linear model with one categorical and one or more continuous predictors*.
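Written out as an equation (a sketch using an indicator variable $Z$ that is 1 for Zygoptera and 0 for Anisoptera), the ANCOVA model with an interaction that we are about to fit is

$\log(\textrm{BW}) = \beta_0 + \beta_1 \log(\textrm{GS}) + \beta_2 Z + \beta_3 \, Z \times \log(\textrm{GS}) + \varepsilon$

so Anisoptera get intercept $\beta_0$ and slope $\beta_1$, while Zygoptera get intercept $\beta_0 + \beta_2$ and slope $\beta_1 + \beta_3$; these are exactly the four coefficients reported in the `summary` table further down.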
We will use the odonates data that we have worked with [before](12-ExpDesign.ipynb).
$\star$ First load the data:_____no_output_____
<code>
odonata <- read.csv('../data/GenomeSize.csv')_____no_output_____
</code>
$\star$ Now create two new variables in the `odonata` data set called `logGS` and `logBW` containing log genome size and log body weight:_____no_output_____
<code>
odonata$logGS <- log(odonata$GenomeSize)
odonata$logBW <- log(odonata$BodyWeight)_____no_output_____
</code>
The models we fitted [before](12-ExpDesign.ipynb) looked like this:
<a id="fig:dragonData"></a>
<figure>
<img src="./graphics/dragonData.svg" alt="dragonData" style="width:60%">
<small>
<center>
<figcaption>
</figcaption>
</center>
</small>
</figure>
We can now fit the model of body weight as a function of both genome size and suborder:_____no_output_____
<code>
odonModel <- lm(logBW ~ logGS * Suborder, data = odonata)_____no_output_____
</code>
Again, we'll look at the <span>anova</span> table first:_____no_output_____
<code>
anova(odonModel)_____no_output_____
</code>
Interpreting this:
* There is no significant main effect of log genome size. The *main* effect is the important thing here — genome size is hugely important but does very different things for the two different suborders. If we ignored `Suborder`, there isn't an overall relationship: the average of those two lines is pretty much flat.
* There is a very strong main effect of Suborder: the mean body weight in the two groups are very different.
* There is a strong interaction between suborder and genome size. This is an interaction between a factor and a continuous variable and shows that the *slopes* are different for the different factor levels.
Now for the summary table:_____no_output_____
<code>
summary(odonModel)_____no_output_____
</code>
* The first thing to note is that the $r^2$ value is really high. The model explains three quarters (0.752) of the variation in the data.
* Next, there are four coefficients:
* The intercept is for the first level of `Suborder`, which is Anisoptera (dragonflies).
* The next line, for `log genome size`, is the slope for Anisoptera.
* We then have a coefficient for the second level of `Suborder`, which is Zygoptera (damselflies). As with the first model, this difference in factor levels is a difference in mean values and shows the difference in the intercept for Zygoptera.
* The last line is the interaction between `Suborder` and `logGS`. This shows how the slope for Zygoptera differs from the slope for Anisoptera.
How do these hang together to give the two lines shown in the model? We can calculate these by hand:
$\begin{aligned}
\textrm{Body Weight} &= -2.40 + 1.01 \times \textrm{logGS} & \textrm{[Anisoptera]}\\
\textrm{Body Weight} &= (-2.40 -2.25) + (1.01 - 2.15) \times \textrm{logGS} & \textrm{[Zygoptera]}\\
&= -4.65 - 1.14 \times \textrm{logGS} \\\end{aligned}$
$\star$ Add the above code into your script and check that you understand the outputs.
We'll use the `predict` function again to get the predicted values from the model and add lines to the plot above.
First, we'll create a set of numbers spanning the range of genome size:_____no_output_____
<code>
#get the range of the data:
rng <- range(odonata$logGS)
#get a sequence from the min to the max with 100 equally spaced values:
LogGSForFitting <- seq(rng[1], rng[2], length = 100)_____no_output_____
</code>
Have a look at these numbers:_____no_output_____
<code>
print(LogGSForFitting) [1] -0.891598119 -0.873918728 -0.856239337 -0.838559945 -0.820880554
[6] -0.803201163 -0.785521772 -0.767842380 -0.750162989 -0.732483598
[11] -0.714804206 -0.697124815 -0.679445424 -0.661766032 -0.644086641
[16] -0.626407250 -0.608727859 -0.591048467 -0.573369076 -0.555689685
[21] -0.538010293 -0.520330902 -0.502651511 -0.484972119 -0.467292728
[26] -0.449613337 -0.431933946 -0.414254554 -0.396575163 -0.378895772
[31] -0.361216380 -0.343536989 -0.325857598 -0.308178207 -0.290498815
[36] -0.272819424 -0.255140033 -0.237460641 -0.219781250 -0.202101859
[41] -0.184422467 -0.166743076 -0.149063685 -0.131384294 -0.113704902
[46] -0.096025511 -0.078346120 -0.060666728 -0.042987337 -0.025307946
[51] -0.007628554 0.010050837 0.027730228 0.045409619 0.063089011
[56] 0.080768402 0.098447793 0.116127185 0.133806576 0.151485967
[61] 0.169165358 0.186844750 0.204524141 0.222203532 0.239882924
[66] 0.257562315 0.275241706 0.292921098 0.310600489 0.328279880
[71] 0.345959271 0.363638663 0.381318054 0.398997445 0.416676837
[76] 0.434356228 0.452035619 0.469715011 0.487394402 0.505073793
[81] 0.522753184 0.540432576 0.558111967 0.575791358 0.593470750
[86] 0.611150141 0.628829532 0.646508923 0.664188315 0.681867706
[91] 0.699547097 0.717226489 0.734905880 0.752585271 0.770264663
[96] 0.787944054 0.805623445 0.823302836 0.840982228 0.858661619
</code>
We can now use the model to predict the values of body weight at each of those points for each of the two suborders:_____no_output_____
<code>
#get a data frame of new data for the order
ZygoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Zygoptera")
#get the predictions and standard error
ZygoPred <- predict(odonModel, newdata = ZygoVals, se.fit = TRUE)
#repeat for anisoptera
AnisoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Anisoptera")
AnisoPred <- predict(odonModel, newdata = AnisoVals, se.fit = TRUE)_____no_output_____
</code>
We've added `se.fit=TRUE` to the function to get the standard error around the regression lines. Both `AnisoPred` and `ZygoPred` contain predicted values (called `fit`) and standard error values (called `se.fit`) for each of the generated values in `LogGSForFitting`, for each of the two suborders.
We can add the predictions onto a plot like this:_____no_output_____
<code>
# plot the scatterplot of the data
plot(logBW ~ logGS, data = odonata, col = Suborder)
# add the predicted lines
lines(AnisoPred$fit ~ LogGSForFitting, col = "black")
lines(AnisoPred$fit + AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
lines(AnisoPred$fit - AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)_____no_output_____
</code>
$\star$ Copy the prediction code into your script and run the plot above.
Copy and modify the last three lines to add the lines for the Zygoptera. Your final plot should look like this.
<a id="fig:odonPlot"></a>
<figure>
<img src="./graphics/odonPlot.svg" alt="odonPlot" style="width:70%">
<small>
<center>
<figcaption>
Figure 4
</figcaption>
</center>
</small>
</figure>
---
<a id="fn1"></a>
[1]: Here you work with the script file `MulExplInter.R`_____no_output_____
| {
"repository": "mathemage/TheMulQuaBio",
"path": "notebooks/17-MulExplInter.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 188665,
"hexsha": "d07106a26be1118a37972515d670d38e96b7a7e0",
"max_line_length": 108476,
"avg_line_length": 237.3144654088,
"alphanum_fraction": 0.8861845069
} |
# Notebook from bigginlab/OxCompBio
Path: tutorials/MD/02_Protein_Visualization.ipynb
# <span style='color:darkred'> 2 Protein Visualization </span>
***
For the purposes of this tutorial, we will use the HIV-1 protease structure (PDB ID: 1HSG). It is a homodimer with two chains of 99 residues each. Before performing any simulations and data analysis, we need to look at the protein of interest and familiarize ourselves with it.
There are various software packages for visualizing molecular systems; here we will guide you through using two of them, NGLView and VMD:
* [NGLView](http://nglviewer.org/#nglview): An IPython/Jupyter widget to interactively view molecular structures and trajectories.
* [VMD](https://www.ks.uiuc.edu/Research/vmd/): VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
You could either take your time to familiarize yourself with both, or select the one you prefer to delve into.
NGLView is great for looking at things directly within a Jupyter notebook, while VMD is a more powerful tool for visualization: it can generate high-quality images and videos and can also be used to analyse simulation trajectories.
_____no_output_____## <span style='color:darkred'> 2.0 Obtain the protein structure </span>
The first step is to obtain the crystal structure of the HIV-1 protease.
Start your web-browser and go to the [protein data bank](https://www.rcsb.org/). Enter the pdb code 1HSG in the site search box at the top and hit the site search button. The protein should come up. Select download from the top right hand menu and save the .pdb file to the current working directory._____no_output_____
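If you prefer to stay inside the notebook, the structure can also be fetched programmatically. Below is a minimal sketch (not part of the original tutorial); it assumes the standard RCSB download endpoint `https://files.rcsb.org/download/1HSG.pdb` is reachable from your machine:

<code>
# Hypothetical helper cell: download the 1HSG structure into the current working directory
import urllib.request

pdb_url = "https://files.rcsb.org/download/1HSG.pdb"  # assumed RCSB download URL
urllib.request.urlretrieve(pdb_url, "1hsg.pdb")_____no_output_____
</code>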
## <span style='color:darkred'> 2.1 VMD (optional) </span>
You can now open the pdb structure with VMD (the following file name might be uppercase depending on how you downloaded it):
`% vmd 1hsg.pdb`
You should experiment with the menu system and try various representations of the protein such as `Trace`, `NewCartoon` and `Ribbons` for example.
Go to `Graphics` and then `Graphical Representations` and from the `Drawing Method` drop-down list, select `Trace`. Similarly, you can explore other drawing methods.
<span style='color:Blue'> **Questions** </span>
* Can you find the indinavir drug?
*Hint: At the `Graphical Representations` menu, click `Create Rep` and type "all and not protein" and hit Enter. Change the `Drawing Method` to `Licorice`.*
* Give the protein the Trace representation and then add the polar residues in VDW format as an additional representation. Repeat with the hydrophobic residues. What do you notice?
*Hint: Explore the `Selections` tab and the options provided as singlewords.*
*Hint: To hide a representation, double-click on it. Double-click again if you want to make it reappear.*
Take your time to explore the features of VMD and to observe the protein. Once you are happy, you can exit VMD, either by clicking on `File` and then `Quit` or by typing `quit` in the terminal box.
***_____no_output_____## <span style='color:darkred'> 2.2 NGLView </span>_____no_output_____You have already been introduced to NGLView during the Python tutorial. You can now spend more time to navigate through its features._____no_output_____
<code>
# Import NGLView
import nglview
# Select as your protein the 1HSG pdb entry
protein_view = nglview.show_pdbid('1hsg')
protein_view.gui_style = 'ngl'
#Uncomment the command below to add a hyperball representation of the crystal water oxygens in grey
#protein_view.add_hyperball('HOH', color='grey', opacity=1.0)
#Uncomment the command below to color the protein according to its secondary structure with opacity 0.6
#protein_view.update_cartoon(color='sstruc', opacity=0.6)
# Let's change the display a little bit
protein_view.parameters = dict(camera_type='orthographic', clip_dist=0)
# Set the background colour to black
protein_view.background = 'black'
# Call protein_view to visualise the trajectory
protein_view_____no_output_____
</code>
<span style='color:Blue'> **Questions** </span>
* When you load the structure, can you see the two subunits that form the dimer?
* Can you locate the drug in the binding pocket?
*Hint: Go to `View` and then `Full screen` to expand the viewing window.*
* Can you hide all the other representations and view only the drug?
*Hint: Use your mouse to rotate, translate and zoom in and out.*
*Hint: You can hide/show a representation by clicking on the "eye" symbol on the right panel.*
***
Explore the [NGLView documentation](http://nglviewer.org/nglview/latest/api.html), and play around with different representations, selections, colors etc. Take as much time as you want in this step.
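If you would like a concrete starting point, here is a minimal sketch (not part of the original tutorial) that adds extra representations; it assumes the bound indinavir is stored under the residue name MK1 in the 1HSG entry and that the NGL selection strings below are accepted:

<code>
# Hypothetical example: highlight the drug with a licorice representation and add a transparent protein surface
ligand_view = nglview.show_pdbid('1hsg')
ligand_view.add_representation('licorice', selection='MK1')            # assumed ligand residue name
ligand_view.add_representation('surface', selection='protein', opacity=0.3)
ligand_view_____no_output_____
</code>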
***
## <span style='color:darkred'> Next Step </span>
You can now open the `03_Running_an_MD_simulation.ipynb` notebook to setup and perform a Molecular Dynamics simulation of your protein._____no_output_____
| {
"repository": "bigginlab/OxCompBio",
"path": "tutorials/MD/02_Protein_Visualization.ipynb",
"matched_keywords": [
"molecular dynamics"
],
"stars": 19,
"size": 6531,
"hexsha": "d07142d33f424b6beee94f0908ac9d0fdfb3d146",
"max_line_length": 306,
"avg_line_length": 40.81875,
"alphanum_fraction": 0.6433930485
} |
# Notebook from FangmingXie/scf_enhancer_paper
Path: eran/.ipynb_checkpoints/Regress_June25_mc-checkpoint.ipynb
# Stage 1: Correlation for individual enhancers_____no_output_____
<code>
import pandas as pd
import numpy as np
import time, re, datetime
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from scipy.stats import zscore
import random
from multiprocessing import Pool,cpu_count
num_processors = cpu_count()
print('Starting analysis; %d processors; %s' % (num_processors, datetime.datetime.today()))
t00 =time.time()
# np.random.seed(0)
import sys
sys.path.insert(0, '/cndd/fangming/CEMBA/snmcseq_dev')
from __init__jupyterlab import *
import snmcseq_utilsStarting analysis; 40 processors; 2020-06-27 13:16:21.271673
today=datetime.datetime.today().strftime('%d-%m-%Y')
use_kmers = False
corr_type = 'Pearson' # corr_type = 'Spearman'
features_use = 'mCG+ATAC'
analysis_prefix = 'eran_model_{}'.format(features_use)
output_fig = '/cndd2/fangming/projects/scf_enhancers/results/figures/{}_{{}}_{}.pdf'.format(analysis_prefix, today)
output = '/cndd2/fangming/projects/scf_enhancers/results/{}_{{}}_{}'.format(analysis_prefix, today)_____no_output_____# fn_load_prefix = 'RegressData/Regress_data_6143genes_19cells_'
# fn_load_prefix = 'RegressData/Regress_data_6174genes_20cells_'
fn_load_prefix = 'RegressData/Regress_data_9811genes_24cells_'
# Load datasets
save_vars = ['genes2enhu', 'rnau', 'df_mlevelu', 'df_atacu', 'genes']
# save_vars = ['rnau','genes']
for var in save_vars:
fn = fn_load_prefix+var+'.pkl'
cmd = '%s=pd.read_pickle("%s")' % (var, fn)
exec(cmd)
print('Loaded %s from %s' % (var, fn))
if use_kmers:
with np.load(fn_load_prefix+'kmer_countsu.npz', allow_pickle=True) as x:
kmer_countsu=x['kmer_countsu']
kmer_countsu = kmer_countsu/kmer_countsu.shape[1]/100
# Testing:
kmer_countsu = kmer_countsu[:,:2]
print('Kmers shape: ', kmer_countsu.shape)
Nk=kmer_countsu.shape[1]
print('Loaded kmers')
else:
Nk=0
# Cell type names
df_cellnames = pd.read_csv(
'/cndd/Public_Datasets/CEMBA/BICCN_minibrain_data/data_freeze/supp_info/clusters_final/cluster_annotation_scf_round2.tsv',
sep='\t', index_col='cluster')Loaded genes2enhu from RegressData/Regress_data_9811genes_24cells_genes2enhu.pkl
Loaded rnau from RegressData/Regress_data_9811genes_24cells_rnau.pkl
Loaded df_mlevelu from RegressData/Regress_data_9811genes_24cells_df_mlevelu.pkl
Loaded df_atacu from RegressData/Regress_data_9811genes_24cells_df_atacu.pkl
Loaded genes from RegressData/Regress_data_9811genes_24cells_genes.pkl
genes2enhu = genes2enhu.iloc[[i in genes.index for i in genes2enhu['ensid']],:]
genes2enhu.shape, genes2enhu.index.unique().shape
celltypes = df_mlevelu.columns
assert np.all(celltypes == df_atacu.columns)_____no_output_____if (features_use=='mCG'):
x = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
elif (features_use=='ATAC'):
x = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
elif (features_use=='mCG_ATAC'):
x1 = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
x2 = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
x = f_mcg(x1) * f_atac(x2)
elif (features_use=='mCG+ATAC'):
x1 = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
x2 = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
else:
x = []
y = rnau.loc[genes2enhu['ensid'],:].to_numpy()_____no_output_____print(
rnau.shape, # rna by celltype
df_mlevelu.shape, # enh by cell type
df_atacu.shape, # enh by cell type
genes.shape, # gene annotation
genes2enhu.shape, # gene-enh pair
x1.shape, # enh_mcg by cell type (mcg_enh for each enh-gene pair) how ?
x2.shape, # enh_atac by cell type (mcg_enh for each enh-gene pair) how ?
y.shape, # rna by cell type (rna for each enh-gene pair)
)(9811, 24) (242461, 24) (242461, 24) (51809, 6) (468604, 13) (468604, 24) (468604, 24) (468604, 24)
def my_cc(x,y,ensid,doshuff=False,jshuff=0,corr_type='Pearson',use_abs=True, doshuffgene=False,verbose=False):
"""Calculate corr for each row of x and y
x, y: enh_mcg/gene_rna (pair) vs celltype
ensid: matched gene ensid for each row
x, y contains no nan; but constant rows of x and y produces nan with zscoring
"""
t0=time.time()
seed = int(time.time()*1e7 + jshuff) % 100
np.random.seed(seed)
ngenes, ncells = y.shape
print('Computing correlations for %d gene-enhancer pairs; jshuff=%d; ' % (ngenes, jshuff))
if doshuff:
y = y[:,np.random.permutation(ncells)] # permute cells
if doshuffgene:
y = y[np.random.permutation(ngenes),:] # permute genes (pairs)
if (corr_type=='Spearman'):
y = np.argsort(y,axis=1)
x = np.argsort(x,axis=1)
xz = zscore(x, axis=1, nan_policy='propagate', ddof=0)
yz = zscore(y, axis=1, nan_policy='propagate', ddof=0)
xy_cc = np.nan_to_num(np.nanmean(xz*yz, axis=1)) # turn np.nan into zero
xy_cc_df = pd.DataFrame(data=xy_cc, columns=['cc'])
xy_cc_df['enh_num'] = np.arange(ngenes)
xy_cc_df['ensid'] = ensid.values
xy_cc_df['cc_abs'] = np.abs(xy_cc_df['cc'])
if use_abs: # max abs_corr for each gene
xy_cc_df = xy_cc_df.sort_values(['ensid','cc_abs'],
ascending=[True,False]).drop_duplicates(['ensid'])
else: # max corr for each gene
xy_cc_df = xy_cc_df.sort_values(['ensid','cc'],
ascending=[True,False]).drop_duplicates(['ensid'])
best_cc = xy_cc_df['cc'] # corr (not abs)
best_enh = xy_cc_df['enh_num'] # enh
best_ensid = xy_cc_df['ensid'] # gene
if verbose:
print('t=%3.3f' % (time.time()-t0))
return best_cc,best_enh,best_ensid,xy_cc
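# --- Illustrative check (added note, not part of the original analysis) ---
# my_cc() computes the Pearson r of each (x_row, y_row) pair as the row-wise mean of
# z-score products; the tiny random example below (assumed shapes only, relying on the
# numpy and scipy imports above) confirms that this matches np.corrcoef.
_xs = np.random.rand(5, 24)
_ys = np.random.rand(5, 24)
_cc_zscore = np.nanmean(zscore(_xs, axis=1, ddof=0) * zscore(_ys, axis=1, ddof=0), axis=1)
_cc_numpy = np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(_xs, _ys)])
assert np.allclose(_cc_zscore, _cc_numpy)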
def my_cc_shuffgene(x, y, ensid, rnau,
doshuff=False, jshuff=0,
corr_type='Pearson', use_abs=True,
doshuffgene=False,
):
"""
"""
seed = int(time.time()*1e7 + jshuff) % 100
rnau_shuff = rnau.copy()
rnau_shuff.index = rnau.index.values[
np.random.RandomState(seed=seed).permutation(len(rnau))
]
y_shuff = rnau_shuff.loc[ensid,:].to_numpy()
return my_cc(x, y_shuff, ensid,
doshuff, jshuff,
corr_type, use_abs,
doshuffgene,
)_____no_output_____def corr_pipe(x, y, genes2enhu, rnau, corr_type,):
"""
"""
# observed
best_cc, best_enh, best_ensid, all_cc = my_cc(x,y,genes2enhu['ensid'],False,0,corr_type,True,False)
print(best_cc.shape, best_enh.shape, best_ensid.shape, all_cc.shape)
# shuffled
nshuff = np.min((num_processors*16,128))
np.random.seed(0)
with Pool(processes = num_processors) as p:
best_cc_shuff_list = p.starmap(my_cc_shuffgene,
[(x,y,genes2enhu['ensid'],rnau,False,jshuff,corr_type,True,False) for jshuff in range(nshuff)])
# significance
alpha = 0.01;
best_cc_shuff = np.hstack([b[0].values[:,np.newaxis] for b in best_cc_shuff_list]) # gene (best corr) by num_shuff
best_cc_shuff_max = np.percentile(np.abs(best_cc_shuff), 100*(1-alpha), axis=1) # get 99% (robust max) across shuffles
best_cc_shuff_mean = np.abs(best_cc_shuff).mean(axis=1) # get mean across shuffles for each gene
sig = np.abs(best_cc).squeeze()>best_cc_shuff_max # corr greater than 99% of the shuffled
    fdr = (alpha*len(sig))/np.sum(sig) # empirical FDR estimate: expected false positives (alpha * n genes) / observed significant genes
print(np.sum(sig), len(sig), alpha, fdr)
return best_cc, best_enh, best_ensid, all_cc, best_cc_shuff, best_cc_shuff_max, best_cc_shuff_mean, sig, fdr
_____no_output_____import warnings
warnings.filterwarnings('ignore')
(best_cc_1, best_enh_1, best_ensid_1, all_cc_1,
best_cc_shuff_1, best_cc_shuff_max_1, best_cc_shuff_mean_1, sig_1, fdr_1,
) = corr_pipe(x1, y, genes2enhu, rnau, corr_type,)
(best_cc_2, best_enh_2, best_ensid_2, all_cc_2,
best_cc_shuff_2, best_cc_shuff_max_2, best_cc_shuff_mean_2, sig_2, fdr_2,
) = corr_pipe(x2, y, genes2enhu, rnau, corr_type,)Computing correlations for 468604 gene-enhancer pairs; jshuff=0;
(9811,) (9811,) (9811,) (468604,)
Computing correlations for 468604 gene-enhancer pairs; jshuff=0 ... 127 (128 repeated progress lines from the parallel shuffles, condensed)
1058 9811 0.01 0.0927315689981
Computing correlations for 468604 gene-enhancer pairs; jshuff=0;
(9811,) (9811,) (9811,) (468604,)
Computing correlations for 468604 gene-enhancer pairs; jshuff=0 ... 127 (128 repeated progress lines from the parallel shuffles, condensed)
358 9811 0.01 0.27405027933
def plot_dists(best_cc, best_enh, best_ensid, all_cc,
best_cc_shuff, best_cc_shuff_max, best_cc_shuff_mean,
sig, fdr, alpha,
feature):
ngenes = best_cc.shape[0]
fig, axs = plt.subplots(3,1,figsize=(5,10))
ax = axs[0]
ax.scatter(best_cc, best_cc_shuff_mean,
s=2,c=sig,
cmap=ListedColormap(["gray",'red']),
rasterized=True,
)
ax.plot([-1,0,1],[1,0,1],'k--')
ax.set_xlabel('Max %s correlation' % corr_type)
ax.set_ylabel('Max %s correlation\n(Mean of shuffles)' % corr_type)
ax.set_title('%s\n%d/%d=%3.1f%%\nsig. genes (p<%3.2g, FDR=%3.1f%%)' % (
feature,
sig.sum(),ngenes,
100*sig.sum()/ngenes,
alpha, fdr*100), )
ax = axs[1]
bins = np.arange(-2,2,0.1)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': False,
}
_vec = best_cc.squeeze()/best_cc_shuff_mean.squeeze()
cond_pos_sig = np.logical_and(sig, best_cc > 0)
cond_neg_sig = np.logical_and(sig, best_cc <= 0)
ax.hist(_vec, bins=bins,
color='gray', label='All genes',
**hist_config,
)
ax.hist(_vec[sig], bins=bins,
color='red', label='Significant',
**hist_config,
)
ax.axvline(-1, linestyle='--', color='k')
ax.axvline(1, linestyle='--', color='k')
ax.set_xlabel(corr_type+' correlation/(Mean abs. corr. of shuffles)')
ax.set_ylabel('Number of genes')
num_sig, num_pos_sig, num_neg_sig = (sig.sum(),
cond_pos_sig.sum(),
cond_neg_sig.sum(),
)
ax.set_title("Num. pos={} ({:.1f}%)\nNum. neg={} ({:.1f}%)".format(
num_pos_sig, num_pos_sig/num_sig*100,
num_neg_sig, num_neg_sig/num_sig*100,
))
ax.legend(bbox_to_anchor=(1,1))
ax = axs[2]
bins = bins=np.arange(0,1,0.02)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': True,
}
ax.hist(np.abs(all_cc), bins=bins,
color='C1',
label='All enh-gene pairs',
**hist_config,
)
ax.hist(best_cc_shuff.reshape(-1,1), bins=bins,
color='gray',
label='Best (all shuffles)',
**hist_config,
)
ax.hist(best_cc_shuff_max, bins=bins,
color='C2',
label='Best (max. shuffle)',
**hist_config,
)
ax.hist(best_cc_shuff_mean, bins=bins,
color='C0',
label='Best (mean shuffle)',
**hist_config,
)
ax.hist(best_cc.squeeze(), bins=bins,
color='C3',
label='Best (data)',
**hist_config,
)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel(corr_type+' correlation')
ax.set_ylabel('Density of genes')
fig.subplots_adjust(hspace=0.9)
fn_plot = output.format("genes_corr_"+feature+'_'+corr_type)
snmcseq_utils.savefig(fig, fn_plot)
print('Saved %s' % fn_plot)
_____no_output_____alpha = 0.01
feature = 'mCG'
plot_dists(best_cc_1, best_enh_1, best_ensid_1, all_cc_1,
best_cc_shuff_1, best_cc_shuff_max_1, best_cc_shuff_mean_1, sig_1, fdr_1, alpha,
feature)
feature = 'ATAC'
plot_dists(best_cc_2, best_enh_2, best_ensid_2, all_cc_2,
best_cc_shuff_2, best_cc_shuff_max_2, best_cc_shuff_mean_2, sig_2, fdr_2, alpha,
feature)Saved /cndd2/fangming/projects/scf_enhancers/results/eran_model_mCG+ATAC_genes_corr_mCG_Pearson_27-06-2020
Saved /cndd2/fangming/projects/scf_enhancers/results/eran_model_mCG+ATAC_genes_corr_ATAC_Pearson_27-06-2020
# np.savez(
# output.format('GenesCorr_%s_%s.npz' % (features_use, today)),
# best_cc=best_cc,best_enh=best_enh,best_ensid=best_ensid,
# sig=sig, best_cc_shuff=best_cc_shuff)
# print('Saved data; t=%3.3f; %s' % (time.time()-t00, datetime.datetime.today()))_____no_output_____# check randomness
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[0])
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[1])
# plt.scatter(np.arange(best_cc_shuff.shape[1]), best_cc_shuff[2])
# plt.title("num_processors = {}".format(num_processors))
# plt.xlabel('n shuffle')
# plt.ylabel('corr for a gene-enh pair')_____no_output_____genes2enhu.head()_____no_output_____genes2enhu['cc'] = all_cc
best_ensid_inv = pd.Series(best_ensid.index.values, index=best_ensid)
i = best_ensid_inv.loc[genes2enhu.index].values
genes2enhu['best_cc'] = genes2enhu.iloc[i,:]['cc']
i = pd.Series(np.arange(best_ensid.shape[0]), index=best_ensid)
genes2enhu['best_cc_shuff_max'] = best_cc_shuff_max[i.loc[genes2enhu.index]]
isig = sig[best_ensid_inv.loc[genes2enhu.index]].values
genes2enhu['sig'] = (genes2enhu['cc'].abs() >= genes2enhu['best_cc_shuff_max'].abs())
genes2enhu['nonsig'] = (genes2enhu['cc'].abs() < genes2enhu['best_cc_shuff_max'].abs())_____no_output_____# How many enhancers are
# best_cc_shuff_max
nsig = genes2enhu.groupby(level=0).sum()[['sig','nonsig']]
nsig['best_cc'] = best_cc.values
plt.semilogy(nsig['best_cc'], nsig['sig'], '.',
markersize=5);_____no_output_____# top significant genes
nsig['gene_name'] = genes2enhu.loc[nsig.index,:]['gene_name'].drop_duplicates()
nsig.sort_values('sig').iloc[-10:,:]_____no_output_____def my_cdfplot(ax, x, label=''):
ax.semilogx(np.sort(np.abs(x)), np.linspace(0,1,len(x)),
label='%s (%d)\nd=%3.1f±%3.1f kb' %
(label, len(x), x.mean()/1000, x.std()/1000/np.sqrt(len(x))))
return
fig, axs = plt.subplots(1, 2, figsize=(8,5))
ax = axs[0]
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 1,
'density': False,
}
ax.hist(nsig['sig'].values, bins=np.arange(100),
**hist_config
)
ax.set_xlabel('Number of significant enhancers')
ax.set_ylabel('Number of genes')
ax.set_yscale('log')
ax = axs[1]
my_cdfplot(ax, nsig['sig'].values,)
ax.set_xlabel('Number of significant enhancers')
ax.set_ylabel('Cumulative fraction of genes')
fig.tight_layout()
snmcseq_utils.savefig(fig,
output_fig.format('GenesCorr_NumSigEnh_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
_____no_output_____
</code>
# Stage 1.5: Compare ATAC and mC _____no_output_____
<code>
print(all_cc_1.shape, best_cc_1.shape, sig_1.shape, best_cc_1[sig_1].shape, best_ensid_1.shape, best_enh_1.shape)
# best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values].shape(468604,) (9811,) (9811,) (1058,) (9811,) (9811,)
fig, ax = plt.subplots()
ax.scatter(all_cc_1, all_cc_2,
color='lightgray', s=1, alpha=0.3,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1.index.values],
color='lightblue', label='best mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2.index.values],
color='wheat', label='best ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1[sig_1].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values],
color='C0', label='sig. mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2[sig_2].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2[sig_2].index.values],
color='C1', label='sig. ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('mCG-RNA {} corr'.format(corr_type))
ax.set_ylabel('ATAC-RNA {} corr'.format(corr_type))
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()_____no_output_____fig, ax = plt.subplots()
ax.scatter(
all_cc_1[best_enh_1.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1.index.values],
color='lightgray', label='best',
s=1, alpha=0.3,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2.index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2.index.values],
color='lightgray', label='best',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_1[sig_1].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_1[sig_1].index.values],
color='C0', label='sig. mCG',
s=1, alpha=0.5,
rasterized=True,)
ax.scatter(
all_cc_1[best_enh_2[sig_2].index.values], # same as best_cc_1[sig_1]
all_cc_2[best_enh_2[sig_2].index.values],
color='C1', label='sig. ATAC',
s=1, alpha=0.5,
rasterized=True,)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('mCG-RNA {} corr'.format(corr_type))
ax.set_ylabel('ATAC-RNA {} corr'.format(corr_type))
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement2_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()_____no_output_____
from matplotlib_venn import venn2
fig, ax = plt.subplots()
venn2([set(best_ensid_1[sig_1].values), set(best_ensid_2[sig_2].values)],
set_labels=('sig. mCG', 'sig. ATAC'),
set_colors=('C0', 'C1'),
ax=ax
)
ax.set_title('Overlap of sig. genes')
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement3_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()_____no_output_____fig, ax = plt.subplots()
venn2([set(sig_1[sig_1].index.values), set(sig_2[sig_2].index.values)],
set_labels=('sig. mCG', 'sig. ATAC'),
set_colors=('C0', 'C1'),
ax=ax
)
ax.set_title('Overlap of sig. gene-enhancer pairs')
snmcseq_utils.savefig(fig,
output_fig.format('mCG_ATAC_agreement4_%s_%s_%s.pdf' % (features_use, today, corr_type))
)
plt.show()_____no_output_____
</code>
# Stage 2: Regression modeling across sig. genes _____no_output_____
<code>
# Are there any duplicate enhancers?
_x = genes2enhu.iloc[(best_enh_1[sig_1].values),:]
nenh_sig = len(_x)
nenh_sig_unique = len(_x['enh_pos'].unique())
nenh_sig_genes_unique = len(_x['ensid'].unique())
print(nenh_sig, nenh_sig_unique, nenh_sig_genes_unique)1058 1041 1058
# best_enh_1[sig_1]_____no_output_____# get sig. mC enhancer-gene pairs (1 for each gene) only
mc_u = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()[best_enh_1[sig_1],:]
atac_u = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()[best_enh_1[sig_1],:]
rna_u = rnau.loc[genes2enhu['ensid'],:].to_numpy()[best_enh_1[sig_1],:].copy()
genes2enhu_u = genes2enhu.iloc[best_enh_1[sig_1],:].copy()
genes2enhu_u = genes2enhu_u.drop('ensid',axis=1).reset_index()_____no_output_____# genes2enhu.iloc[(best_enh_1[sig_1].values),:]['enh_pos'].shape_____no_output_____# cc_mc_rna = np.array([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(mc_u,rna_u)])
# cc_atac_rna = np.array([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(atac_u,rna_u)])_____no_output_____# genes2enhu_u.loc[:,'cc_mc_rna'] = cc_mc_rna
# genes2enhu_u.loc[:,'cc_atac_rna'] = cc_atac_rna
# genes2enhu_u.sort_values('cc_mc_rna')
# # genes2enhu_u['cc_atac_rna'] = cc_atac_rna_____no_output_____# fig, ax = plt.subplots()
# sig_pos = (genes2enhu_u['cc_mc_rna']<0) & (genes2enhu_u['cc_atac_rna']>0)
# sig_neg = (genes2enhu_u['cc_mc_rna']>0) & (genes2enhu_u['cc_atac_rna']<-0)
# ax.plot(cc_mc_rna, cc_atac_rna, '.', color='gray', label='%d significnat pairs' % np.sum(sig))
# ax.plot(cc_mc_rna[sig_pos], cc_atac_rna[sig_pos], 'r.', label='%d corr pairs' % np.sum(sig_pos))
# ax.plot(cc_mc_rna[sig_neg], cc_atac_rna[sig_neg], 'g.', label='%d anti-corr pairs' % np.sum(sig_neg))
# ax.set_xlabel('Correlation mCG vs. RNA')
# ax.set_ylabel('Correlation ATAC vs. RNA')
# ax.legend(bbox_to_anchor=(1,1))
# print('We found %d significant enhancer-gene links, covering %d unique enhancers and %d unique genes' %
# (nenh_sig, nenh_sig_unique, nenh_sig_genes_unique))
# print('%d of these have the expected correlation (negative for mCG, positive for ATAC)' %
# (np.sum(sig_pos)))
# print('%d of these have the opposite correlation (positive for mCG, negative for ATAC)' %
# (np.sum(sig_neg)))
# snmcseq_utils.savefig(fig, output_fig.format(
# 'EnhancerRegression_SigEnhancers_scatter_mCG_ATAC_corr_%dGenes_%dCelltypes_%s' %
# (genes2enhu.ensid.unique().shape[0], len(celltypes), today)
# ))_____no_output_____# fig, ax = plt.subplots(figsize=(7,4))
# my_cdfplot(ax, genes2enhu['dtss'], label='All pairs')
# my_cdfplot(ax, genes2enhu_u['dtss'], label='Best pair for each gene')
# my_cdfplot(ax, genes2enhu_u['dtss'][sig_pos], label='Positive corr')
# my_cdfplot(ax, genes2enhu_u['dtss'][sig_neg], label='Negative corr')
# ax.legend(bbox_to_anchor=(1, 0.8))
# ax.set_xlim([1e3,3e5])
# ax.set_xlabel('Distance of enhancer from TSS')
# ax.set_ylabel('Cumulative fraction')
# ax.set_yticks(ticks=[0,.25,.5,.75,1]);
# snmcseq_utils.savefig(fig, output_fig.format(
# 'EnhancerRegression_SigEnhancers_dTSS_cdf_%dGenes_%dCelltypes_%s' %
# (genes2enhu.ensid.unique().shape[0], len(celltypes), today)
# ))_____no_output_____# Ordinary linear regression with CV
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate
from sklearn.metrics import r2_score, make_scorer
from sklearn.preprocessing import PolynomialFeatures
X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
X = zscore(X, axis=0)
y = zscore(y, axis=0)
y = y - np.mean(y,axis=1,keepdims=True)
# X = X[sig_pos,:]
# y = y[sig_pos,:]
mdl = LinearRegression(fit_intercept=True, normalize=True)
ngenes,ncells = y.shape
print('%d genes, %d celltypes' % (ngenes,ncells))
intxn_order = 3
my_r2 = make_scorer(r2_score)
res_cv = {}
cv = 5
for i,yi in enumerate(y.T):
# Regression using only mCG and ATAC from the same cell type
Xu = X[:,[i,i+ncells]]
Xu = np.concatenate((X[:,[i,i+ncells]],
# np.mean(X[:,:ncells],axis=1,keepdims=True),
# np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
# Xu = PolynomialFeatures(degree=3, include_bias=False).fit_transform(Xu)
res_cvi = cross_validate(mdl,Xu,yi,cv=cv,
scoring=my_r2,
return_train_score=True,
verbose=0)
if i==0:
print('Simple model: %d parameters' % Xu.shape[1])
dof_simple=Xu.shape[1]
for m in res_cvi:
if (m in res_cv):
res_cv[m] = np.vstack((res_cv[m], res_cvi[m]))
else:
res_cv[m]=res_cvi[m]
# Regression using mCG and ATAC from the same cell type, as well as the mean across all cell types
# res_cvi = cross_validate(mdl,X,yi,cv=cv,
# scoring=my_r2,
# return_train_score=True,
# verbose=0)
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu = PolynomialFeatures(degree=intxn_order, include_bias=False).fit_transform(Xu)
res_cvi = cross_validate(mdl, Xu, yi,
cv=cv,
scoring=my_r2,
return_train_score=True,
verbose=0)
if i==0:
print('Complex model: %d parameters' % Xu.shape[1])
dof_complex=Xu.shape[1]
for m1 in res_cvi:
m = m1+'_all'
if (m in res_cv):
res_cv[m] = np.vstack((res_cv[m], res_cvi[m1]))
else:
res_cv[m]=res_cvi[m1]
1058 genes, 24 celltypes
Simple model: 2 parameters
Complex model: 34 parameters
cellnames = df_cellnames.loc[celltypes]['annot']
# Show the OLS results
def myplot(ax, x, label='', fmt=''):
x[x<0] = 0
# xu = np.sqrt(x)
xu = x
ax.errorbar(cellnames, xu.mean(axis=1), xu.std(axis=1)/np.sqrt(cv),
label=label, fmt=fmt)
return
fig, ax = plt.subplots(figsize=(8,6))
myplot(ax, res_cv['train_score'],
fmt='rs-', label='Train simple model:\nRNA~mCG+ATAC\n(%d params)' % dof_simple)
myplot(ax, res_cv['test_score'],
fmt='ro-', label='Test')
myplot(ax, res_cv['train_score_all'],
fmt='bs--', label='Train complex model:\nRNA~mCG+ATAC+mean(mCG)+mean(ATAC)+%dth order intxn\n(%d params)' %
(intxn_order, dof_complex))
myplot(ax, res_cv['test_score_all'],
fmt='bo--', label='Test')
ax.legend(bbox_to_anchor=(1, 1))
ax.set_xlabel('Cell type')
ax.set_ylabel('Score (R^2)')
ax.xaxis.set_tick_params(rotation=90)
ax.grid(axis='y')
ax.set_title('%d genes, separate model for each of %d celltypes' % y.shape)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_OLS_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))_____no_output_____# # Multi-task LASSO regression with CV
# from sklearn.linear_model import MultiTaskLassoCV
# t0=time.time()
# mdl = MultiTaskLassoCV(fit_intercept=True, normalize=True, cv=cv,
# selection='random',
# random_state=0)
# X = np.concatenate((mc_u,atac_u),axis=1).copy()
# y = np.log10(rna_u+1).copy()
# X = zscore(X[sig_pos,:], axis=0)
# y = zscore(np.log10(y[sig_pos,:]+1), axis=0)
# reg = mdl.fit(X,y)
# print('Done fitting LASSO, t=%3.3f s' % (time.time()-t0))_____no_output_____# plt.errorbar(reg.alphas_, reg.mse_path_.mean(axis=1), reg.mse_path_.std(axis=1))
# plt.vlines(reg.alpha_, plt.ylim()[0], plt.ylim()[1], 'k')
# plt.xscale('log')_____no_output_____# Single task LASSO with CV, interaction terms
from sklearn.linear_model import LassoCV
Xu_all = []
for i,yi in enumerate(y.T):
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu_all.append(Xu.T)
Xu_all = np.dstack(Xu_all).reshape(4,-1).T
Xu_fit = PolynomialFeatures(degree=intxn_order, include_bias=False)
Xu_all = Xu_fit.fit_transform(Xu_all)
feature_names = Xu_fit.get_feature_names(input_features=['mC','A','mCm','Am'])_____no_output_____print(Xu_all.shape, y.shape)
yu = y.ravel()
print(Xu_all.shape, yu.shape)
t0=time.time()
mdl = LassoCV(fit_intercept=True, normalize=True, cv=cv,
selection='random',
random_state=0,
n_jobs=8)
reg = mdl.fit(Xu_all,yu)
print('Done fitting LASSO, t=%3.3f s' % (time.time()-t0))(25392, 34) (1058, 24)
(25392, 34) (25392,)
Done fitting LASSO, t=1.090 s
plt.errorbar(reg.alphas_, reg.mse_path_.mean(axis=1), reg.mse_path_.std(axis=1))
plt.vlines(reg.alpha_, plt.ylim()[0], plt.ylim()[1], 'k')
plt.xscale('log')
plt.xlabel('LASSO Regularization (lambda)')
plt.ylabel('MSE')_____no_output_____yhat = reg.predict(Xu_all).reshape(y.shape)
cc = [np.corrcoef(y1,y1hat)[0,1] for (y1,y1hat) in zip(y.T,yhat.T)]
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(cellnames, np.power(cc, 2), 'o-', color='C1', label='LASSO fit, single model for all cell types')
# myplot(ax, res_cv['test_score_all'], label='Test (RNA~mCG+ATAC+mean(mCG)+mean(ATAC)+Intxn)', fmt='o--')
myplot(ax, res_cv['train_score'],
fmt='rs-', label='Train simple model:\nRNA~mCG+ATAC\n(%d params)' % dof_simple)
myplot(ax, res_cv['test_score'],
fmt='ro-', label='Test')
myplot(ax, res_cv['train_score_all'],
fmt='bs--', label='Train complex model:\nRNA~mCG+ATAC+mean(mCG)+mean(ATAC)+%dth order intxn\n(%d params)' %
(intxn_order, dof_complex))
myplot(ax, res_cv['test_score_all'],
fmt='bo--', label='Test')
ax.legend(bbox_to_anchor=(1, 0.8))
ax.set_xlabel('Cell type')
ax.set_ylabel('Score (R^2)')
ax.xaxis.set_tick_params(rotation=90)
ax.grid(axis='y')
ax.set_ylim([0,0.8])
ax.set_title('Model for %d genes across %d celltypes' % y.shape)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_CompareLASSO_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))
_____no_output_____fig, ax = plt.subplots(figsize=(10,5))
show = np.abs(reg.coef_)>0.01
show = np.argsort(np.abs(reg.coef_))[-30:][::-1]
ax.bar(np.array(feature_names)[show], reg.coef_[show])
ax.xaxis.set_tick_params(rotation=90)
ax.set_ylabel('Regression coefficient')
ax.grid(axis='y')
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_SigEnhancers_LASSO_CorrCoef_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today)
))_____no_output_____
</code>
# Apply the nonlinear model to all enhancers_____no_output_____
<code>
mc_u = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
atac_u = df_atacu.loc[genes2enhu['enh_pos'],:].to_numpy()
genes2enhu_u = genes2enhu.copy()
genes2enhu_u = genes2enhu_u.drop('ensid',axis=1).reset_index()
rna_u = rnau.loc[genes2enhu['ensid'],:].to_numpy()
rna_u.shape, mc_u.shape, atac_u.shape_____no_output_____X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
X = zscore(X, axis=0)
y = zscore(y, axis=0)
y = y - np.mean(y,axis=1,keepdims=True)
X.shape, y.shape_____no_output_____Xu_all = []
for i,yi in enumerate(y.T):
Xu = np.concatenate((X[:,[i,i+ncells]],
np.mean(X[:,:ncells],axis=1,keepdims=True),
np.mean(X[:,ncells:],axis=1,keepdims=True),
),axis=1)
Xu_all.append(Xu.T)
Xu_all = np.dstack(Xu_all).reshape(4,-1).T
Xu_fit = PolynomialFeatures(degree=intxn_order, include_bias=False).fit(Xu_all)
feature_names = Xu_fit.get_feature_names(input_features=['mC','A','mCm','Am'])
Xu_all = PolynomialFeatures(degree=intxn_order, include_bias=False).fit_transform(Xu_all)
Xu_all.shape, y.shape_____no_output_____yhat = reg.predict(Xu_all).reshape(y.shape)_____no_output_____x = df_mlevelu.loc[genes2enhu['enh_pos'],:].to_numpy()
best_cc,best_enh,best_ensid,all_cc = my_cc(-x,y,genes2enhu['ensid'],False,0,corr_type)Computing correlations for 468604 gene-enhancer pairs; jshuff=0;
(~np.isfinite(best_cc2)).sum()_____no_output_____best_cc2,best_enh2,best_ensid2,all_cc2 = my_cc(yhat,y,genes2enhu['ensid'],False,0,corr_type)_____no_output_____plt.figure(figsize=(10,10))
plt.plot(np.abs(all_cc[best_enh]), np.abs(all_cc2[best_enh]), '.', markersize=1, rasterized=True)
plt.plot(np.abs(all_cc[best_enh2]), np.abs(all_cc2[best_enh2]), '.', markersize=1, rasterized=True)
plt.plot([0,1],[0,1],'k')_____no_output_____np.abs(best_cc2)/(np.abs(best_cc)+1e-6)
best_cc2.shape, best_cc.shape_____no_output_____plt.hist(np.abs(best_cc2).values/np.abs(best_cc).values, bins=np.arange(0.7,1.3,0.01));
print(np.abs(best_cc2).values/np.abs(best_cc).values.mean())_____no_output_____# For each gene, find all enhancers with significant cc
df = pd.DataFrame(data=all_cc, columns=['cc'], index=genes2enhu[['ensid','enh_pos']])
df['ensid'] = genes2enhu['ensid'].values
df['enh_pos'] = genes2enhu['enh_pos'].values
df['cc2'] = all_cc2_____no_output_____df['good_pairs'] = df['cc']>0.6
df['good_pairs2'] = df['cc2']>0.6
npairs_df=df.groupby('ensid')[['good_pairs','good_pairs2']].sum()_____no_output_____plt.loglog(npairs_df['good_pairs']+1,npairs_df['good_pairs2']+1,'.')
plt.plot([1,1e3],[1,1e3],'k')_____no_output_____np.mean((npairs_df['good_pairs2']+1)/(npairs_df['good_pairs']+1))_____no_output_____
</code>
# Average over all the enhancers linked to a single gene_____no_output_____
<code>
def myz(x):
z = zscore(x, axis=1, nan_policy='omit', ddof=0)
return z
def make_df(z):
z_df = pd.DataFrame(data=z, columns=df_mlevelu.columns, index=rnau.index)
return z_df
multiEnh = {}
multiEnh['rna'] = myz(rnau.values);
multiEnh['rna_hat_1Enh'] = myz(yhat[best_enh2,:])
multiEnh['rna_hat_AllEnh'] = myz(yhat[best_enh2,:])
multiEnh['rna_hat_AllSigEnh'] = np.zeros(yhat[best_enh2,:].shape)+np.nan;
t0=time.time()
for i,c in enumerate(celltypes):
df = pd.DataFrame(data=yhat[:,i], columns=['yhat'])
df['ensid'] = genes2enhu.loc[:,'ensid'].values
multiEnh['rna_hat_AllEnh'][:,i] = df.groupby('ensid')['yhat'].mean()
df = df.loc[genes2enhu.sig.values,:]
multiEnh['rna_hat_AllSigEnh'][sig,i] = df.groupby('ensid')['yhat'].mean()
multiEnh['rna'] = make_df(multiEnh['rna']);
multiEnh['rna_hat_1Enh'] = make_df(multiEnh['rna_hat_1Enh']);
multiEnh['rna_hat_AllEnh'] = make_df(multiEnh['rna_hat_AllEnh'])
multiEnh['rna_hat_AllSigEnh'] = make_df(multiEnh['rna_hat_AllSigEnh'])
print(time.time()-t0)_____no_output_____cc_1Enh = np.diag(np.corrcoef(multiEnh['rna'].values, multiEnh['rna_hat_1Enh'].values, rowvar=False)[:ncells,ncells:])
cc_AllEnh = np.diag(np.corrcoef(multiEnh['rna'].values, multiEnh['rna_hat_AllEnh'].values, rowvar=False)[:ncells,ncells:])
cc_AllSigEnh = np.diag(np.corrcoef(multiEnh['rna'].values[sig,:], multiEnh['rna_hat_AllSigEnh'].values[sig,:], rowvar=False)[:ncells,ncells:])
plt.plot(cellnames, cc_1Enh, label='1 enhancer')
plt.plot(cellnames, cc_AllEnh, label='All enhancers')
plt.plot(cellnames, cc_AllSigEnh, label='Significant enhancers')
plt.legend(bbox_to_anchor=(1,1))
plt.xticks(rotation=90);
plt.ylabel('Correlation across genes')_____no_output_____def cc_gene(x,y):
c = np.nan_to_num([np.corrcoef(x1,y1)[0,1] for (x1,y1) in zip(x,y)])
return c
cc_1Enh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_1Enh'].values)
cc_AllEnh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_AllEnh'].values)
cc_AllSigEnh = cc_gene(multiEnh['rna'].values, multiEnh['rna_hat_AllSigEnh'].values)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(2,2,1)
ax.plot(np.abs(cc_1Enh), np.abs(cc_AllEnh), '.', markersize=1, rasterized=True)
ax.set_xlabel('Corr with 1 best enhancer')
ax.set_ylabel('Corr with avg. prediction\nbased on all enhancers')
ax = fig.add_subplot(2,2,2)
ax.plot(np.abs(cc_1Enh), np.abs(cc_AllSigEnh), '.', markersize=1, rasterized=True)
ax.set_xlabel('Corr with 1 best enhancer')
ax.set_ylabel('Corr with avg. prediction\nbased on sig. enhancers')
ax = fig.add_subplot(2,1,2)
bins = np.arange(-1,1,1/100)
hist_config = {
'histtype': 'bar',
'edgecolor': 'none',
'alpha': 0.5,
'density': False,
}
ax.hist(np.abs(cc_AllEnh)-np.abs(cc_1Enh), bins=bins,
label='All enhancers-Best enhancer',
**hist_config,
)
ax.hist(np.abs(cc_AllSigEnh)-np.abs(cc_1Enh), bins=bins,
label='Sig enhancers-Best enhancer',
**hist_config,
)
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel('Difference in correlation')
ax.set_ylabel('Number of genes')
fig.subplots_adjust(wspace=0.5, hspace=0.3)
snmcseq_utils.savefig(fig, output_fig.format(
'EnhancerRegression_Correlation_1Enh_vs_AllEnh_%dGenes_%dCelltypes_%s' %
(genes2enhu.ensid.unique().shape[0], len(celltypes), today))
)
_____no_output_____
</code>
# Nonlinear model fitting_____no_output_____
<code>
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)cpu
X = np.concatenate((mc_u,atac_u),axis=1).copy()
y = np.log10(rna_u+1).copy()
ngenes,ncells = y.shape
X.shape, y.shape_____no_output_____# Define a class for the NN architecture
Ngenes, Nc = y.shape
Nx = X.shape[1]
N1 = 128
N2 = 32
N3 = 0
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(Nx, N1);
self.fc2 = nn.Linear(N1, N2);
# self.fc3 = nn.Linear(N2, N3);
self.fc4 = nn.Linear(N2, Nc);
def forward(self, x):
x = F.relu(self.fc1(x)) # Out: N x N1
x = F.relu(self.fc2(x)) # Out: N x N2
# x = F.relu(self.fc3(x)) # Out: N x N3
x = self.fc4(x) # Out: N x C
return x_____no_output_____# Initialize
def myinit():
global net, optimizer, criterion, scheduler, loss_test, loss_train, test, train, ensids
net = Net()
net.to(device)
# # Initialize the kmer weights to 0 and turn off learning
# net.fc1_kmers.requires_grad_(False)
# net.fc1_kmers.weight.fill_(0)
# net.fc1_kmers.bias.fill_(0)
criterion = nn.MSELoss(reduction='sum')
optimizer = optim.Adam(net.parameters(), lr=lr)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.25)
loss_test=np.array([])
loss_train = np.array([])
# Train/Test split
test = (np.random.rand(Ngenes,1)<0.2)
train = [not i for i in test]
test = np.random.permutation(np.nonzero(test)[0]).squeeze()
train = np.random.permutation(np.nonzero(train)[0]).squeeze()
ensids = rnau.index.values
return
def train_epoch(epoch):
nsamp = 0
running_loss = 0.0
running_time = 0.0
net.train()
t0train = time.time()
for i in range(0, len(train), batch_size):
tstart = time.time()
indices = train[i:i+batch_size]
# Input should be of size: (batch, channels, samples)
batch_X = torch.tensor(X[indices,:],dtype=torch.float)
batch_y = torch.tensor(y[indices,:],dtype=torch.float)
# Send training data to CUDA
        if device.type != "cpu":
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(batch_X)
loss = criterion(outputs, batch_y)
loss.backward()
optimizer.step()
running_loss += loss.item()
running_time += time.time()-tstart
nsamp += len(indices)
if (time.time()-t0train)>5:
print('Epoch %d, i=%d/%d, LR=%3.5g, loss=%3.8f, t=%3.3f, %3.5f s/sample' % (epoch, i, len(train),
optimizer.state_dict()['param_groups'][0]['lr'],
running_loss/nsamp, running_time, running_time/nsamp))
t0train=time.time()
return running_loss/nsamp
def test_epoch(epoch):
net.eval()
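    # Gradients are not needed when only evaluating; wrapping this loop in
    # `with torch.no_grad():` would avoid building the autograd graph and save memory.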
running_loss_test = 0.0
nsamp = 0
yyhat = {'y':[], 'yhat':[]}
for i in range(0, len(test), batch_size):
indices = test[i:i+batch_size]
# Input should be of size: (batch, channels, samples)
batch_X = torch.tensor(X[indices,:],dtype=torch.float)
batch_y = torch.tensor(y[indices,:],dtype=torch.float)
# Send training data to CUDA
        if device.type != "cpu":
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# forward + backward + optimize
outputs = net(batch_X)
loss = criterion(outputs, batch_y)
running_loss_test += loss.item()
nsamp += len(indices)
yyhat['yhat'].append(outputs.detach().cpu().numpy())
yyhat['y'].append(batch_y.detach().cpu().numpy())
return running_loss_test/nsamp
_____no_output_____lr = 0.0002
myinit()
train.shape, test.shape_____no_output_____import glob
from IPython import display
def test_net(indices):
net.eval()
yyhat = {'y':[], 'yhat':[]}
for i in range(0, len(indices), batch_size):
        batch_indices = indices[i:i+batch_size]
        # Input should be of size: (batch, channels, samples)
        batch_X = torch.tensor(X[batch_indices,:],dtype=torch.float)
        batch_y = torch.tensor(y[batch_indices,:],dtype=torch.float)
        # Send training data to CUDA
        if device.type != "cpu":
            batch_X = batch_X.to(device)
outputs = net(batch_X)
yyhat['yhat'].append(outputs.detach().cpu().numpy())
yyhat['y'].append(batch_y.numpy())
yyhat['yhat'] = np.concatenate(yyhat['yhat'],axis=0)
yyhat['y'] = np.concatenate(yyhat['y'],axis=0)
cc = np.zeros((Nc,1))
for i in range(yyhat['y'].shape[1]):
cc[i,0] = np.corrcoef(yyhat['y'][:,i], yyhat['yhat'][:,i])[0,1]
return yyhat, cc
def make_plot1(save=False):
plt.figure(figsize=(15,4))
plt.clf()
plt.subplot(1,3,1)
plt.semilogx(loss_train[2:],'o-',label='Train')
plt.plot(loss_test[2:],'o-',label='Test')
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title(fn_save)
plt.subplot(1,3,2)
plt.plot(yyhat_test['y'].T, yyhat_test['yhat'].T,'.');
plt.plot([0,3],[0,3],'k--')
plt.xlabel('True RNA expression')
plt.ylabel('Estimated RNA expression')
plt.subplot(1,3,3)
plt.plot(np.arange(Nc), cc)
    plt.ylabel('Correlation coefficient (r)')
plt.xlabel('Cell type')
plt.legend(['Train','Test'])
if save:
fn_plot = output_fig.format(fn_save.replace('.torch','')+'_corrcoef').replace('pdf', 'png')
plt.savefig(fn_plot)
print('Saved plot: '+fn_plot)
plt.tight_layout()
plt.show();
def make_plot2(save=False):
plt.figure(figsize=(20,20))
for i in range(Nc):
plt.subplot(5,6,i+1)
plt.plot([0,2],[0,2],'k--')
plt.plot(yyhat_train['y'][:,i], yyhat_train['yhat'][:,i],'.');
plt.plot(yyhat_test['y'][:,i], yyhat_test['yhat'][:,i],'.');
# cc = np.corrcoef(yyhat['y'][:,i], yyhat['yhat'][:,i])[0,1]
plt.title('r=%3.3f train/%3.3f test' % (cc[i,0], cc[i,1]))
if save:
fn_plot = output_fig.format(fn_save.replace('.torch','')+'_scatter').replace('pdf', 'png')
plt.savefig(fn_plot)
print('Saved plot: '+fn_plot)
plt.tight_layout()
plt.show();
_____no_output_____num_epochs1 = 1000
fn_id = len(glob.glob('./RegressEnh*.pt'))+1 # Generate a unique ID for this run
fn_save = 'RegressEnh%0.4d_%s_N_%d_%d_%d.%s.pt' % (fn_id, ('UseKmers' if use_kmers else 'NoKmers'), N1,N2,N3,today)
t0 = time.time()
batch_size = 16
for epoch in range(num_epochs1): # loop over the dataset multiple times
# while epoch<num_epochs1:
new_loss_train = train_epoch(epoch);
loss_train = np.append(loss_train, new_loss_train)
new_loss_test = test_epoch(epoch);
loss_test = np.append(loss_test,new_loss_test)
scheduler.step(new_loss_test)
print('**** Phase1 epoch %d, LR=%3.5g, loss_train=%3.8f, loss_test=%3.8f, time = %3.5f s/epoch' % (
len(loss_train),
optimizer.param_groups[0]['lr'],
loss_train[-1],
loss_test[-1],
(time.time()-t0))
)
if (time.time()-t0)>60 or (epoch==num_epochs1-1):
if (epoch>0):
cc = np.zeros((Nc,2))
yyhat_train, cc[:,[0]] = test_net(random.sample(train.tolist(), 500))
yyhat_test, cc[:,[1]] = test_net(random.sample(test.tolist(), 500))
display.clear_output(wait=True)
display.display(plt.gcf())
make_plot1(save=True)
# make_plot2(save=True)
display.display(plt.gcf())
t0=time.time()
torch.save({
'epoch': epoch,
'model_state_dict': net.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss_train': loss_train,
'loss_test': loss_test,
}, fn_save)
print('Saved data: %s' % fn_save)
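    # The checkpoint written above can later be reloaded to resume or inspect a run,
    # e.g. (a sketch, assuming the same Net/optimizer definitions are in scope):
    #   ckpt = torch.load(fn_save, map_location=device)
    #   net.load_state_dict(ckpt['model_state_dict'])
    #   optimizer.load_state_dict(ckpt['optimizer_state_dict'])
    #   loss_train, loss_test = ckpt['loss_train'], ckpt['loss_test']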
_____no_output_____output_fig_____no_output_____# test.max()
# plt.hist2d(df['log_rna'], mdl.predict(), bins=(50,50), cmap=plt.cm.Reds);
# plt.scatter(df['log_rna'], mdl.predict(),s=1)_____no_output_____plt.hist(np.log(rnau.loc[genes2enhu['ensid'][best_enh],:].iloc[:,3]+1), bins=100);_____no_output_____
</code>
### Fangming follow-ups _____no_output_____
| {
"repository": "FangmingXie/scf_enhancer_paper",
"path": "eran/.ipynb_checkpoints/Regress_June25_mc-checkpoint.ipynb",
"matched_keywords": [
"RNA"
],
"stars": null,
"size": 958112,
"hexsha": "d073717f2b36f3c9c99d2463adef04b2272bcbd7",
"max_line_length": 145516,
"avg_line_length": 357.1047335073,
"alphanum_fraction": 0.9218181173
} |
# Notebook from ceb8/lightkurve
Path: docs/source/tutorials/2.02-recover-a-planet.ipynb
# How to recover a known planet in Kepler data_____no_output_____This tutorial demonstrates the basic steps required to recover a transiting planet candidate in the Kepler data.
We will show how you can recover the signal of [Kepler-10b](https://en.wikipedia.org/wiki/Kepler-10b), the first rocky planet that was discovered by Kepler! Kepler-10 is a Sun-like (G-type) star approximately 600 light years away in the constellation of Cygnus. In this tutorial, we will download the pixel data of Kepler-10, extract a lightcurve, and recover the planet._____no_output_____Kepler pixel data is distributed in "Target Pixel Files". You can read more about them in our tutorial [here](http://lightkurve.keplerscience.org/tutorials/target-pixel-files.html). The `lightkurve` package provides a `KeplerTargetPixelFile` class, which enables you to load and interact with data in this format.
The class can take a path (local or URL), or you can load data straight from the [MAST archive](https://archive.stsci.edu/kepler/), which holds the complete Kepler and K2 data archive. We'll download the Kepler-10 light curve using the `from_archive` function, as shown below. *(Note: we're adding the keyword `quarter=3` to download only the data from the third Kepler quarter. There were 17 quarters during the Kepler mission.)*_____no_output_____
<code>
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile.from_archive("Kepler-10", quarter=3)Downloading URL https://mast.stsci.edu/api/v0/download/file?uri=mast:Kepler/url/missions/kepler/target_pixel_files/0119/011904151/kplr011904151-2009350155506_lpd-targ.fits.gz to ./mastDownload/Kepler/kplr011904151_lc_Q111111110111011101/kplr011904151-2009350155506_lpd-targ.fits.gz ... [Done]
</code>
Let's use the `plot` method and pass along an aperture mask and a few plotting arguments._____no_output_____
<code>
tpf.plot(scale='log');_____no_output_____
</code>
The target pixel file contains one bright star with approximately 50,000 counts._____no_output_____Now, we will use the ``to_lightcurve`` method to create a simple aperture photometry lightcurve using the
mask defined by the pipeline which is stored in `tpf.pipeline_mask`._____no_output_____
<code>
lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)_____no_output_____
</code>
Let's take a look at the output lightcurve._____no_output_____
<code>
lc.plot();_____no_output_____
</code>
Now let's use the `flatten` method, which applies a Savitzky-Golay filter, to remove long-term variability that we are not interested in. We'll use the `return_trend` keyword so that it returns both the corrected `KeplerLightCurve` object and a new `KeplerLightCurve` object called 'trend'. This contains only the long term variability._____no_output_____
<code>
flat, trend = lc.flatten(window_length=301, return_trend=True)_____no_output_____
</code>
Let's plot the trend estimated by the Savitzky-Golay filter:_____no_output_____
<code>
ax = lc.plot() #plot() returns a matplotlib axis
trend.plot(ax, color='red'); #which we can pass to the next plot() to use the same plotting window_____no_output_____
</code>
and the flat lightcurve:_____no_output_____
<code>
flat.plot();_____no_output_____
</code>
Now, let's run a period search function using the Box-Least Squares algorithm (http://adsabs.harvard.edu/abs/2002A%26A...391..369K). We will shortly have a built in BLS implementation, but until then you can download and install it separately from lightkurve using_____no_output_____`pip install git+https://github.com/mirca/transit-periodogram.git`_____no_output_____
<code>
from transit_periodogram import transit_periodogram
import numpy as np
import matplotlib.pyplot as plt_____no_output_____periods = np.arange(0.3, 1.5, 0.0001)
durations = np.arange(0.005, 0.15, 0.001)
power, _, _, _, _, _, _ = transit_periodogram(time=flat.time,
flux=flat.flux,
flux_err=flat.flux_err,
periods=periods,
durations=durations)
best_fit = periods[np.argmax(power)]_____no_output_____print('Best Fit Period: {} days'.format(best_fit))Best Fit Period: 0.8375999999999408 days
flat.fold(best_fit).plot(alpha=0.4);_____no_output_____
</code>
| {
"repository": "ceb8/lightkurve",
"path": "docs/source/tutorials/2.02-recover-a-planet.ipynb",
"matched_keywords": [
"STAR"
],
"stars": null,
"size": 107024,
"hexsha": "d07399e1f0e6eb948466da0f1a5fba165ddcf846",
"max_line_length": 28412,
"avg_line_length": 352.0526315789,
"alphanum_fraction": 0.9378737479
} |
# Notebook from jai-singhal/data_science
Path: pandas/01-pandas_introduction.ipynb
<!--<img width=700px; src="../img/logoUPSayPlusCDS_990.png"> -->
<p style="margin-top: 3em; margin-bottom: 2em;"><b><big><big><big><big>Introduction to Pandas</big></big></big></big></b></p>_____no_output_____
<code>
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8_____no_output_____
</code>
# 1. Let's start with a showcase
#### Case 1: titanic survival data_____no_output_____
<code>
df = pd.read_csv("data/titanic.csv")_____no_output_____df.head()_____no_output_____
</code>
Starting from reading this dataset, to answering questions about this data in a few lines of code:_____no_output_____**What is the age distribution of the passengers?**_____no_output_____
<code>
df['Age'].hist()_____no_output_____
</code>
**How does the survival rate of the passengers differ between sexes?**_____no_output_____
<code>
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))_____no_output_____
</code>
**Or how does it differ between the different classes?**_____no_output_____
<code>
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')_____no_output_____
</code>
All the needed functionality for the above examples will be explained throughout this tutorial._____no_output_____#### Case 2: air quality measurement timeseries_____no_output_____AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
Starting from these hourly data for different stations:_____no_output_____
<code>
data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)_____no_output_____data.head()_____no_output_____
</code>
to answering questions about this data in a few lines of code:
**Does the air pollution show a decreasing trend over the years?**_____no_output_____
<code>
data['1999':].resample('M').mean().plot(ylim=[0,120])_____no_output_____data['1999':].resample('A').mean().plot(ylim=[0,100])_____no_output_____
</code>
**What is the difference in diurnal profile between weekdays and weekend?**_____no_output_____
<code>
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot()_____no_output_____
</code>
We will come back to these example, and build them up step by step._____no_output_____# 2. Pandas: data analysis in python
For data-intensive work in Python the [Pandas](http://pandas.pydata.org) library has become essential.
What is `pandas`?
* Pandas can be thought of as *NumPy arrays with labels* for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
* Pandas can also be thought of as `R`'s `data.frame` in Python.
* Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...
It's documentation: http://pandas.pydata.org/pandas-docs/stable/
** When do you need pandas? **
When working with **tabular or structured data** (like R dataframe, SQL table, Excel spreadsheet, ...):
- Import data
- Clean up messy data
- Explore data, gain insight into data
- Process and prepare your data for analysis
- Analyse your data (together with scikit-learn, statsmodels, ...)
<div class="alert alert-warning">
<b>ATTENTION!</b>: <br><br>
Pandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures!
<ul>
<li>When working with array data (e.g. images, numerical algorithms): just stick with numpy</li>
<li>When working with multidimensional labeled data (e.g. climate data): have a look at [xarray](http://xarray.pydata.org/en/stable/)</li>
</ul>
</div>_____no_output_____# 2. The pandas data structures: `DataFrame` and `Series`
A `DataFrame` is a **tablular data structure** (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series object which share the same index.
<img align="left" width=50% src="img/schema-dataframe.svg">_____no_output_____
<code>
df_____no_output_____
</code>
### Attributes of the DataFrame
A DataFrame has besides a `index` attribute, also a `columns` attribute:_____no_output_____
<code>
df.index_____no_output_____df.columns_____no_output_____
</code>
To check the data types of the different columns:_____no_output_____
<code>
df.dtypes_____no_output_____
</code>
An overview of that information can be given with the `info()` method:_____no_output_____
<code>
df.info()<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 66.2+ KB
</code>
Also a DataFrame has a `values` attribute, but attention: when you have heterogeneous data, all values will be upcasted:_____no_output_____
<code>
df.values_____no_output_____
</code>
Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:_____no_output_____
<code>
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries_____no_output_____
</code>
### One-dimensional data: `Series` (a column of a DataFrame)
A Series is a basic holder for **one-dimensional labeled data**._____no_output_____
<code>
df['Age']_____no_output_____age = df['Age']_____no_output_____
</code>
### Attributes of a Series: `index` and `values`
The Series has also an `index` and `values` attribute, but no `columns`_____no_output_____
<code>
age.index_____no_output_____
</code>
You can access the underlying numpy array representation with the `.values` attribute:_____no_output_____
<code>
age.values[:10]_____no_output_____
</code>
We can access series values via the index, just like for NumPy arrays:_____no_output_____
<code>
age[0]_____no_output_____
</code>
Unlike the NumPy array, though, this index can be something other than integers:_____no_output_____
<code>
df = df.set_index('Name')
df_____no_output_____age = df['Age']
age_____no_output_____age['Dooley, Mr. Patrick']_____no_output_____
</code>
but with the power of numpy arrays. Many things you can do with numpy arrays can also be applied on DataFrames / Series.
E.g. element-wise operations:_____no_output_____
<code>
age * 1000_____no_output_____
</code>
A range of methods:_____no_output_____
<code>
age.mean()_____no_output_____
</code>
Fancy indexing, like indexing with a list or boolean indexing:_____no_output_____
<code>
age[age > 70]_____no_output_____
</code>
But also a lot of pandas specific methods, e.g._____no_output_____
<code>
df['Embarked'].value_counts()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What is the maximum Fare that was paid? And the median?</li>
</ul>
</div>_____no_output_____
<code>
df["Fare"].max()_____no_output_____df["Fare"].median()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average survival ratio for all passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)).</li>
</ul>
</div>_____no_output_____
<code>
survived_0 = df[df["Survived"] == 0]["Survived"].count()
survived_1 = df[df["Survived"] == 1]["Survived"].count()
total = df["Survived"].count()
survived_0_ratio = survived_0/total
survived_1_ratio = survived_1/total
print(survived_0_ratio)
print(survived_1_ratio)
# Method 2
print(df["Survived"].mean())0.6161616161616161
0.3838383838383838
0.3838383838383838
</code>
# 3. Data import and export_____no_output_____A wide range of input/output formats are natively supported by pandas:
* CSV, text
* SQL database
* Excel
* HDF5
* json
* html
* pickle
* sas, stata
* (parquet)
* ..._____no_output_____
<code>
#pd.read_____no_output_____#df.to_____no_output_____
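# A quick sketch of the same idea with concrete calls (file names here are only
# illustrative; the Excel writer additionally needs an engine such as openpyxl):
# df.to_csv("data/titanic_copy.csv", index=False)
# df_copy = pd.read_csv("data/titanic_copy.csv")
# df.to_json("data/titanic.json", orient="records")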
</code>
Very powerful csv reader:_____no_output_____
<code>
pd.read_csv?_____no_output_____
</code>
Luckily, if we have a well formed csv file, we don't need many of those arguments:_____no_output_____
<code>
df = pd.read_csv("data/titanic.csv")_____no_output_____df.head()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`
<br><br>
Some aspects about the file:
<ul>
<li>Which separator is used in the file?</li>
<li>The second row includes unit information and should be skipped (check `skiprows` keyword)</li>
<li>For missing values, it uses the `'n/d'` notation (check `na_values` keyword)</li>
<li>We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword)</li>
</ul>
</div>_____no_output_____
<code>
no2 = pd.read_csv("./data/20000101_20161231-NO2.csv", sep=";", skiprows=[1],
index_col =[0], na_values=["n/d"], parse_dates=True )
no2_____no_output_____
</code>
# 4. Exploration_____no_output_____Some useful methods:
`head` and `tail`_____no_output_____
<code>
no2.head(3)_____no_output_____no2.tail()_____no_output_____
</code>
`info()`_____no_output_____
<code>
no2.info()<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 149039 entries, 2000-01-01 01:00:00 to 2016-12-31 23:00:00
Data columns (total 4 columns):
BASCH 139949 non-null float64
BONAP 136493 non-null float64
PA18 142259 non-null float64
VERS 143813 non-null float64
dtypes: float64(4)
memory usage: 5.7 MB
</code>
Getting some basic summary statistics about the data with `describe`:_____no_output_____
<code>
no2.describe()_____no_output_____
</code>
Quickly visualizing the data_____no_output_____
<code>
no2.plot(kind='box', ylim=[0,250])_____no_output_____no2['BASCH'].plot(kind='hist', bins=50)_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the age distribution of the titanic passengers</li>
</ul>
</div>_____no_output_____
<code>
df["Age"].hist()_____no_output_____
</code>
The default plot (when not specifying `kind`) is a line plot of all columns:_____no_output_____
<code>
no2.plot(figsize=(12,6))_____no_output_____
</code>
This does not say too much .._____no_output_____We can select part of the data (eg the latest 500 data points):_____no_output_____
<code>
no2[-500:].plot(figsize=(12,6))_____no_output_____
</code>
Or we can use some more advanced time series features -> see further in this notebook!_____no_output_____# 5. Selecting and filtering data_____no_output_____<div class="alert alert-warning">
<b>ATTENTION!</b>: <br><br>
One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. <br><br> We now have to distinguish between:
<ul>
<li>selection by **label**</li>
<li>selection by **position**</li>
</ul>
</div>_____no_output_____
<code>
df = pd.read_csv("data/titanic.csv")_____no_output_____
</code>
### `df[]` provides some convenience shortcuts _____no_output_____For a DataFrame, basic indexing selects the columns.
Selecting a single column:_____no_output_____
<code>
df['Age']_____no_output_____
</code>
or multiple columns:_____no_output_____
<code>
df[['Age', 'Fare']]_____no_output_____
</code>
But, slicing accesses the rows:_____no_output_____
<code>
df[10:15]_____no_output_____
</code>
### Systematic indexing with `loc` and `iloc`
When using `[]` like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
* `loc`: selection by label
* `iloc`: selection by position_____no_output_____
<code>
df = df.set_index('Name')_____no_output_____df.loc['Bonnell, Miss. Elizabeth', 'Fare']_____no_output_____df.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. Anders Johan', :]_____no_output_____
</code>
Selecting by position with `iloc` works similar as indexing numpy arrays:_____no_output_____
<code>
df.iloc[0:2,1:3]_____no_output_____
</code>
The different indexing methods can also be used to assign data:_____no_output_____
<code>
df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100_____no_output_____df_____no_output_____
</code>
### Boolean indexing (filtering)_____no_output_____Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy.
The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed._____no_output_____
<code>
df['Fare'] > 50_____no_output_____df[df['Fare'] > 50]_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers</li>
</ul>
</div>_____no_output_____
<code>
df = pd.read_csv("data/titanic.csv")_____no_output_____# %load snippets/01-pandas_introduction63.py
male_mean_age = df[df["Sex"] == "male"]["Age"].mean()
female_mean_age = df[df["Sex"] == "female"]["Age"].mean()
print(male_mean_age)
print(female_mean_age)
print(male_mean_age == female_mean_age)30.72664459161148
27.915708812260537
False
# by loc
male_mean_age = df.loc[df["Sex"] == "male", "Age"].mean()
female_mean_age = df.loc[df["Sex"] == "female", "Age"].mean()
print(male_mean_age)
print(female_mean_age)
print(male_mean_age == female_mean_age)30.72664459161148
27.915708812260537
False
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Based on the titanic data set, how many passengers older than 70 were on the Titanic?</li>
</ul>
</div>_____no_output_____
<code>
len(df[df["Age"] > 70])_____no_output_____
</code>
# 6. The group-by operation_____no_output_____### Some 'theory': the groupby operation (split-apply-combine)_____no_output_____
<code>
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df_____no_output_____
</code>
### Recap: aggregating functions_____no_output_____When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:_____no_output_____
<code>
df['data'].sum()_____no_output_____
</code>
However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:_____no_output_____
<code>
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())_____no_output_____
</code>
This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this._____no_output_____### Groupby: applying functions per group_____no_output_____The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**
This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
* **Splitting** the data into groups based on some criteria
* **Applying** a function to each group independently
* **Combining** the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL `GROUP BY`_____no_output_____Instead of doing the manual filtering as above
df[df['key'] == "A"].sum()
df[df['key'] == "B"].sum()
...
pandas provides the `groupby` method to do exactly this:_____no_output_____
<code>
df.groupby('key').sum()_____no_output_____df.groupby('key').aggregate(np.sum) # 'sum'_____no_output_____
</code>
And many more methods are available. _____no_output_____
<code>
df.groupby('key')['data'].sum()_____no_output_____
</code>
### Application of the groupby concept on the titanic data_____no_output_____We go back to the titanic passengers survival data:_____no_output_____
<code>
df = pd.read_csv("data/titanic.csv")_____no_output_____df.head()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average age for each sex again, but now using groupby.</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction76.py
df.groupby("Sex")["Age"].mean()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average survival ratio for all passengers.</li>
</ul>
</div>_____no_output_____
<code>
# df.groupby("Survived")["Survived"].count()
df["Survived"].mean()
# %load snippets/01-pandas_introduction77.py_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction78.py
df[df["Age"] <= 25]["Survived"].mean()_____no_output_____df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What is the difference in the survival ratio between the sexes?</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction79.py
df.groupby("Sex")["Survived"].mean()_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Or how does it differ between the different classes? Make a bar plot visualizing the survival ratio for the 3 classes.</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction80.py
df.groupby("Pclass")["Survived"].mean().plot(kind = "bar")_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>
</ul>
</div>_____no_output_____
<code>
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))_____no_output_____# %load snippets/01-pandas_introduction82.py
df.groupby("AgeClass")["Fare"].mean().plot(kind="bar")_____no_output_____
</code>
# 7. Working with time series data_____no_output_____
<code>
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)_____no_output_____
</code>
When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available:_____no_output_____
<code>
no2.index_____no_output_____
</code>
Indexing a time series works with strings:_____no_output_____
<code>
no2["2010-01-01 09:00": "2010-01-01 12:00"]_____no_output_____
</code>
A nice feature is "partial string" indexing, so you don't need to provide the full datetime string._____no_output_____E.g. all data of January up to March 2012:_____no_output_____
<code>
no2['2012-01':'2012-03']_____no_output_____
</code>
Time and date components can be accessed from the index:_____no_output_____
<code>
no2.index.hour_____no_output_____no2.index.year_____no_output_____
</code>
## Converting your time series with `resample`_____no_output_____A very powerfull method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data).
Remember the air quality data:_____no_output_____
<code>
no2.plot()_____no_output_____
</code>
The time series has a frequency of 1 hour. I want to change this to daily:_____no_output_____
<code>
no2.head()_____no_output_____no2.resample('D').mean().head()_____no_output_____
</code>
Above I take the mean, but as with `groupby` I can also specify other methods:_____no_output_____
<code>
no2.resample('D').max().head()_____no_output_____
</code>
The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
These strings can also be combined with numbers, eg `'10D'`._____no_output_____Further exploring the data:_____no_output_____
<code>
no2.resample('M').mean().plot() # 'A'_____no_output_____# no2['2012'].resample('D').plot()_____no_output_____# %load snippets/01-pandas_introduction95.py_____no_output_____
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: The evolution of the yearly averages, and the overall mean of all stations
<ul>
<li>Use `resample` and `plot` to plot the yearly averages for the different stations.</li>
<li>The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`).</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction96.py_____no_output_____
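# One possible approach (sketch):
# no2.resample('A').mean().plot(figsize=(12, 6))          # yearly average per station
# no2.mean(axis=1).resample('A').mean().plot(color='k')   # overall mean of all stations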
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: what does the *typical monthly profile* look like for the different stations?
<ul>
<li>Add a 'month' column to the dataframe.</li>
<li>Group by the month to obtain the typical monthly averages over the different years.</li>
</ul>
</div>_____no_output_____First, we add a column to the dataframe that indicates the month (integer value of 1 to 12):_____no_output_____
<code>
# %load snippets/01-pandas_introduction97.py_____no_output_____
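# One possible approach (sketch):
# no2['month'] = no2.index.month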
</code>
Now, we can calculate the mean of each month over the different years:_____no_output_____
<code>
# %load snippets/01-pandas_introduction98.py_____no_output_____# %load snippets/01-pandas_introduction99.py_____no_output_____
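# One possible approach (sketch):
# no2.groupby('month').mean().plot()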
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: The typical diurnal profile for the different stations
<ul>
<li>Similar as for the month, you can now group by the hour of the day.</li>
</ul>
</div>_____no_output_____
<code>
# %load snippets/01-pandas_introduction100.py_____no_output_____
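# One possible approach (sketch):
# no2.groupby(no2.index.hour).mean().plot()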
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: What is the difference in the typical diurnal profile between weekdays and weekend days for the 'BASCH' station?
<ul>
<li>Add a column 'weekday' defining the different days in the week.</li>
<li>Add a column 'weekend' defining if a days is in the weekend (i.e. days 5 and 6) or not (True/False).</li>
<li>You can groupby on multiple items at the same time. In this case you would need to group by both weekend/weekday and hour of the day.</li>
</ul>
</div>_____no_output_____Add a column indicating the weekday:_____no_output_____
<code>
no2.index.weekday?_____no_output_____# %load snippets/01-pandas_introduction102.py_____no_output_____
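# One possible approach (sketch):
# no2['weekday'] = no2.index.weekday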
</code>
Add a column indicating week/weekend_____no_output_____
<code>
# %load snippets/01-pandas_introduction103.py_____no_output_____
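# One possible approach (sketch):
# no2['weekend'] = no2['weekday'].isin([5, 6])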
</code>
Now we can groupby the hour of the day and the weekend (or use `pivot_table`):_____no_output_____
<code>
# %load snippets/01-pandas_introduction104.py_____no_output_____# %load snippets/01-pandas_introduction105.py_____no_output_____# %load snippets/01-pandas_introduction106.py_____no_output_____# %load snippets/01-pandas_introduction107.py_____no_output_____
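# One possible approach (sketch), mirroring the showcase at the top of the notebook:
# data_weekend = no2.groupby(['weekend', no2.index.hour])['BASCH'].mean().unstack(level=0)
# data_weekend.plot()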
</code>
<div class="alert alert-success">
<b>EXERCISE</b>: What is the number of exceedances of hourly values above the European limit of 200 µg/m3?
Count the number of exceedances of hourly values above the European limit of 200 µg/m3 for each year and station after 2005. Make a barplot of the counts. Add a horizontal line indicating the maximum number of exceedances (which is 18) allowed per year.
<br><br>
Hints:
<ul>
<li>Create a new DataFrame, called `exceedances`, (with boolean values) indicating if the threshold is exceeded or not</li>
<li>Remember that the sum of True values can be used to count elements. Do this using groupby for each year.</li>
<li>Adding a horizontal line can be done with the matplotlib function `ax.axhline`.</li>
</ul>
</div>_____no_output_____
<code>
# re-reading the data to have a clean version
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)_____no_output_____# %load snippets/01-pandas_introduction109.py_____no_output_____# %load snippets/01-pandas_introduction110.py_____no_output_____# %load snippets/01-pandas_introduction111.py_____no_output_____
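# One possible approach (sketch):
# exceedances = no2 > 200
# per_year = exceedances['2005':].groupby(exceedances['2005':].index.year).sum()
# ax = per_year.plot(kind='bar')
# ax.axhline(18, color='k', linestyle='--')  # maximum number of exceedances allowed per year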
</code>
# 9. What I didn't talk about_____no_output_____- Concatenating data: `pd.concat`
- Merging and joining data: `pd.merge`
- Reshaping data: `pivot_table`, `melt`, `stack`, `unstack`
- Working with missing data: `isnull`, `dropna`, `interpolate`, ...
- ..._____no_output_____
## Further reading
* Pandas documentation: http://pandas.pydata.org/pandas-docs/stable/
* Books
* "Python for Data Analysis" by Wes McKinney
* "Python Data Science Handbook" by Jake VanderPlas
* Tutorials (many good online tutorials!)
* https://github.com/jorisvandenbossche/pandas-tutorial
* https://github.com/brandon-rhodes/pycon-pandas-tutorial
* Tom Augspurger's blog
* https://tomaugspurger.github.io/modern-1.html_____no_output_____
| {
"repository": "jai-singhal/data_science",
"path": "pandas/01-pandas_introduction.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 596970,
"hexsha": "d073c09f8df43a18549881ddb62ba2da01279183",
"max_line_length": 152100,
"avg_line_length": 103.7667304015,
"alphanum_fraction": 0.8133591303
} |
# Notebook from jacsonrbinf/minicurso-mineracao-interativa
Path: resultados/4.Proxy.ipynb
To enter presentation mode, run the following cell and press `-`_____no_output_____
<code>
%reload_ext slide_____no_output_____
</code>
<span class="notebook-slide-start"/>
# Proxy
This notebook covers the following topics:
- [Introduction](#Introduction)
- [Proxy server](#Proxy-server)_____no_output_____## Introduction
There is a lot of information available in software repositories.
Below is a *screenshot* of the `gems-uff/sapos` repository.
<img src="images/githubexample.png" alt="Página Inicial de Repositório no GitHub" width="auto"/>_____no_output_____Nessa imagem, vemos a organização e nome do repositório
<img src="images/githubexample1.png" alt="Página Inicial de Repositório no GitHub com nome do repositório selecionado" width="auto"/>_____no_output_____Estrelas, forks, watchers
<img src="images/githubexample2.png" alt="Página Inicial de Repositório no GitHub com watchers, star e fork selecionados" width="auto"/>_____no_output_____Número de issues e pull requests
<img src="images/githubexample3.png" alt="Página Inicial de Repositório no GitHub com numero de issues e pull requests selecionados" width="auto"/>_____no_output_____Número de commits, branches, releases, contribuidores e licensa <span class="notebook-slide-extra" data-count="1"/>
<img src="images/githubexample4.png" alt="Página Inicial de Repositório no GitHub com número de commits, branches, releases, contribuidores e licensa selecionados" width="auto"/>_____no_output_____Arquivos
<img src="images/githubexample5.png" alt="Página Inicial de Repositório no GitHub com arquivos selecionados" width="auto"/>
_____no_output_____Mensagem e data dos commits que alteraram esses arquivos por último
<img src="images/githubexample6.png" alt="Página Inicial de Repositório no GitHub com arquivos selecionados" width="auto"/>_____no_output_____Podemos extrair informações de repositórios de software de 3 formas:
- Crawling the repository website
- APIs that provide data
- Directly from the version control system
In this short course we will cover all 3 approaches, but we will give more attention to the GitHub APIs and to extracting data directly from Git._____no_output_____## Proxy server
Repository servers usually limit the number of requests we can make.
In general, this limitation does not affect sporadic use of the services for mining very much. However, when we are developing something, we may go over the limit with repeated requests.
To avoid this problem, we will set up a simple proxy server in Flask._____no_output_____When we use a proxy server, instead of making requests directly to the target site, we make requests to the proxy server, which then forwards the requests to the target site.
Upon receiving the result of the request, the proxy caches the result and returns it to us.
If a request has already been made through the proxy server, it simply returns the cached result to us._____no_output_____### Proxy Implementation
The proxy server implementation is in the `proxy.py` file. Since we want to run the proxy in parallel with the notebook, the server needs to be run externally.
Nevertheless, the proxy code is explained here._____no_output_____We start the file with the necessary imports.
```python
import hashlib
import requests
import simplejson
import os
import sys
from flask import Flask, request, Response
```
The `hashlib` library is used to hash the requests. The `requests` library is used to make requests to GitHub. The `simplejson` library is used to convert requests and responses to JSON. The `os` library is used to manipulate directory paths and check whether files exist. The `sys` library is used to get the execution arguments. Finally, `flask` is used as the server.
_____no_output_____Next, we define the site we will proxy to and the headers excluded from the received response, and we create a Flask `app`. Note that `SITE` is defined as the first argument of the program execution, or as https://github.com/ if no argument is given.
```python
if len(sys.argv) > 1:
SITE = sys.argv[1]
else:
SITE = "https://github.com/"
EXCLUDED_HEADERS = ['content-encoding', 'content-length', 'transfer-encoding', 'connection']
app = Flask(__name__)
```_____no_output_____Next, we define a function to handle all possible routes and methods that the server can receive.
```python
METHODS = ['GET', 'POST', 'PATCH', 'PUT', 'DELETE']
@app.route('/', defaults={'path': ''}, methods=METHODS)
@app.route('/<path:path>', methods=METHODS)
def catch_all(path):
```_____no_output_____Inside this function, we define a request dictionary based on the request received by `flask`.
```python
request_dict = {
"method": request.method,
"url": request.url.replace(request.host_url, SITE),
"headers": {key: value for (key, value) in request.headers if key != 'Host'},
"data": request.get_data(),
"cookies": request.cookies,
"allow_redirects": False
}
```
In this request, we replace the host with the target site.
_____no_output_____Next, we convert the dictionary to JSON and compute the SHA1 hash of the result.
```python
request_json = simplejson.dumps(request_dict, sort_keys=True)
sha1 = hashlib.sha1(request_json.encode("utf-8")).hexdigest()
path_req = os.path.join("cache", sha1 + ".req")
path_resp = os.path.join("cache", sha1 + ".resp")
```
In the `cache` directory we store `{sha1}.req` and `{sha1}.resp` files with the request and the response of the cached results._____no_output_____With this, when a request arrives, we can check whether `{sha1}.req` exists. If it exists, we can compare it with our request (to avoid collisions). Finally, if they are equal, we can return the response that is in the cache.
```python
if os.path.exists(path_req):
with open(path_req, "r") as req:
req_read = req.read()
if req_read == request_json:
with open(path_resp, "r") as dump:
response = simplejson.load(dump)
return Response(
response["content"],
response["status_code"],
response["headers"]
)
```_____no_output_____If the request is not in the cache, we turn the request dictionary into a `requests` request to GitHub, exclude the headers populated by `flask`, and create a JSON for the response.
```python
resp = requests.request(**request_dict)
headers = [(name, value) for (name, value) in resp.raw.headers.items()
if name.lower() not in EXCLUDED_HEADERS]
response = {
"content": resp.content,
"status_code": resp.status_code,
"headers": headers
}
response_json = simplejson.dumps(response, sort_keys=True)
```_____no_output_____After that, we save the response to the cache and return it to the original client.
```python
with open(path_resp, "w") as dump:
dump.write(response_json)
with open(path_req, "w") as req:
req.write(request_json)
return Response(
response["content"],
response["status_code"],
response["headers"]
)
```_____no_output_____At the end of the script, we start the server.
```python
if __name__ == '__main__':
app.run(debug=True)
```_____no_output_____### Using the Proxy
Run the following line in a terminal:
```bash
python proxy.py
```
Now, every request that we would make to github.com we make to localhost:5000 instead. For example, instead of accessing https://github.com/gems-uff/sapos, we access http://localhost:5000/gems-uff/sapos
_____no_output_____### Request with requests
Next, we make a request with requests to the proxy. <span class="notebook-slide-extra" data-count="2"/>_____no_output_____
<code>
SITE = "http://localhost:5000/" # Se não usar o proxy, alterar para https://github.com/_____no_output_____import requests
response = requests.get(SITE + "gems-uff/sapos")
response.headers['server'], response.status_code_____no_output_____
</code>
<span class="notebook-slide-scroll" data-position="-1"/>
We can see that the result came from GitHub and that the request worked, given that the status code was 200. _____no_output_____Continued in: [5.Crawling.ipynb](5.Crawling.ipynb)_____no_output_____
_____no_output_____
| {
"repository": "jacsonrbinf/minicurso-mineracao-interativa",
"path": "resultados/4.Proxy.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 2,
"size": 21460,
"hexsha": "d075ac52423f4b41b3191604aacc09a851929d7b",
"max_line_length": 429,
"avg_line_length": 32.8637059724,
"alphanum_fraction": 0.5112301957
} |
# Notebook from shubham3121/PySyft-TensorFlow
Path: examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
# Part 2: Intro to Private Training with Remote Execution
In the last section, we learned about PointerTensors, which create the underlying infrastructure we need for privacy preserving Deep Learning. In this section, we're going to see how to use these basic tools to train our first deep learning model using remote execution.
Authors:
- Yann Dupis - Twitter: [@YannDupis](https://twitter.com/YannDupis)
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
### Why use remote execution?
Let's say you are an AI startup who wants to build a deep learning model to detect [diabetic retinopathy (DR)](https://ai.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html), which is the fastest growing cause of blindness. Before training your model, the first step would be to acquire a dataset of retinopathy images with signs of DR. One approach could be to work with a hospital and ask them to send you a copy of this dataset. However because of the sensitivity of the patients' data, the hospital might be exposed to liability risks.
That's where remote execution comes into the picture. Instead of bringing training data to the model (a central server), you bring the model to the training data (wherever it may live). In this case, it would be the hospital.
The idea is that this allows whoever is creating the data to own the only permanent copy, and thus maintain control over whoever has access to it. Pretty cool, eh?_____no_output_____# Section 2.1 - Private Training on MNIST
For this tutorial, we will train a model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify digits based on images.
We can assume that we have a remote worker named Bob who owns the data._____no_output_____
<code>
import tensorflow as tf
import syft as sy
hook = sy.TensorFlowHook(tf)
bob = sy.VirtualWorker(hook, id="bob")_____no_output_____
</code>
Let's download the MNIST data from `tf.keras.datasets`. Note that we are converting the data from numpy to `tf.Tensor` in order to have the PySyft functionalities._____no_output_____
<code>
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, y_train = tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train)
x_test, y_test = tf.convert_to_tensor(x_test), tf.convert_to_tensor(y_test)_____no_output_____
</code>
As decribed in Part 1, we can send this data to Bob with the `send` method on the `tf.Tensor`. _____no_output_____
<code>
x_train_ptr = x_train.send(bob)
y_train_ptr = y_train.send(bob)_____no_output_____
</code>
Excellent! We have everything to start experimenting. To train our model on Bob's machine, we just have to perform the following steps:
- Define a model, including optimizer and loss
- Send the model to Bob
- Start the training process
- Get the trained model back
Let's do it!_____no_output_____
<code>
# Define the model
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
# Compile with optimizer, loss and metrics
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])_____no_output_____
</code>
Once you have defined your model, you can simply send it to Bob calling the `send` method. It's the exact same process as sending a tensor._____no_output_____
<code>
model_ptr = model.send(bob)_____no_output_____model_ptr_____no_output_____
</code>
Now, we have a pointer pointing to the model on Bob's machine. We can validate that's the case by inspecting the attribute `_objects` on the virtual worker. _____no_output_____
<code>
bob._objects[model_ptr.id_at_location]_____no_output_____
</code>
Everything is ready to start training our model on this remote dataset. You can call `fit` and pass `x_train_ptr` `y_train_ptr` which are pointing to Bob's data. Note that's the exact same interface as normal `tf.keras`._____no_output_____
<code>
model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)_____no_output_____
</code>
Fantastic! You have trained your model, achieving an accuracy greater than 95%.
You can get your trained model back by just calling `get` on it. _____no_output_____
<code>
model_gotten = model_ptr.get()
model_gotten_____no_output_____
</code>
It's good practice to see if your model can generalize by assessing its accuracy on a holdout dataset. You can simply call `evaluate`._____no_output_____
<code>
model_gotten.evaluate(x_test, y_test, verbose=2)_____no_output_____
</code>
Boom! The model remotely trained on Bob's data is more than 95% accurate on this holdout dataset._____no_output_____If your model doesn't fit into the Sequential paradigm, you can use Keras's functional API, or even subclass [tf.keras.Model](https://www.tensorflow.org/guide/keras/custom_layers_and_models#building_models) to create custom models._____no_output_____
<code>
class CustomModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(CustomModel, self).__init__(name='custom_model')
self.num_classes = num_classes
self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = tf.keras.layers.Dense(128, activation='relu')
self.dropout = tf.keras.layers.Dropout(0.2)
self.dense_2 = tf.keras.layers.Dense(num_classes, activation='softmax')
def call(self, inputs, training=False):
x = self.flatten(inputs)
x = self.dense_1(x)
x = self.dropout(x, training=training)
return self.dense_2(x)
model = CustomModel(10)
# need to call the model on dummy data before sending it
# in order to set the input shape (required when saving to SavedModel)
model.predict(tf.ones([1, 28, 28]))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model_ptr = model.send(bob)
model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)_____no_output_____
</code>
## Well Done!
And voilà! We have trained a Deep Learning model on Bob's data by sending the model to him. Never in this process do we ever see or request access to the underlying training data! We preserve the privacy of Bob!!!_____no_output_____# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- Star PySyft on GitHub! - [https://github.com/OpenMined/PySyft](https://github.com/OpenMined/PySyft)
- Star PySyft-TensorFlow on GitHub! - [https://github.com/OpenMined/PySyft-TensorFlow](https://github.com/OpenMined/PySyft-TensorFlow)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)_____no_output_____
| {
"repository": "shubham3121/PySyft-TensorFlow",
"path": "examples/Part 02 - Intro to Private Training with Remote Execution.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 39,
"size": 11890,
"hexsha": "d0768341e30434bfc2c68d1a985bf22921bca8ab",
"max_line_length": 565,
"avg_line_length": 33.5875706215,
"alphanum_fraction": 0.6146341463
} |
# Notebook from DavidStirling/profiling-resistance-mechanisms
Path: 3.feature-differences/1.apply-signatures.ipynb
# Apply Signature Analysis to Cell Morphology Features
Gregory Way, 2020
Here, I apply [`singscore`](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html) ([Foroutan et al. 2018](https://doi.org/10.1186/s12859-018-2435-4)) to our Cell Painting profiles.
This notebook largely follows the [package vignette](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html).
I generate two distinct signatures.
1. Comparing Clone A and E resistant clones to sensitive wildtype cell lines.
* Clones A and E both have a confirmed _PSMB5_ mutation which is known to cause bortezomib resistance.
2. Derived from comparing four other resistant clones to four other sensitive wildtype clones.
* We do not know the resistance mechanism in these four resistant clones.
However, we can hypothesize that the mechanisms are similar based on single sample enrichment using the potential PSMB5 signature.
To review how I derived these signatures see `0.build-morphology-signatures.ipynb`._____no_output_____
<code>
suppressPackageStartupMessages(library(singscore))
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(ggplot2))_____no_output_____seed <- 1234
num_permutations <- 1000_____no_output_____set.seed(seed)_____no_output_____
</code>
## Load Clone A/E (_PSMB5_ Mutations) Signature_____no_output_____
<code>
sig_cols <- readr::cols(
feature = readr::col_character(),
estimate = readr::col_double(),
adj.p.value = readr::col_double()
)
sig_file <- file.path("results", "cloneAE_signature_tukey.tsv")
psmb_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols)
head(psmb_signature_scores, 2)_____no_output_____# Extract features that are up and down in the signature
up_features <- psmb_signature_scores %>% dplyr::filter(estimate > 0) %>% dplyr::pull(feature)
down_features <- psmb_signature_scores %>% dplyr::filter(estimate < 0) %>% dplyr::pull(feature)_____no_output_____
</code>
## Load Four Clone Dataset_____no_output_____
<code>
col_types <- readr::cols(
.default = readr::col_double(),
Metadata_Plate = readr::col_character(),
Metadata_Well = readr::col_character(),
Metadata_plate_map_name = readr::col_character(),
Metadata_clone_number = readr::col_character(),
Metadata_clone_type = readr::col_character(),
Metadata_plate_ID = readr::col_character(),
Metadata_plate_filename = readr::col_character(),
Metadata_treatment = readr::col_character(),
Metadata_batch = readr::col_character()
)
# Do not load the feature selected data
profile_dir <- file.path("..", "2.describe-data", "data", "merged")
profile_file <- file.path(profile_dir, "combined_four_clone_dataset.csv")
fourclone_data_df <- readr::read_csv(profile_file, col_types = col_types)
print(dim(fourclone_data_df))
head(fourclone_data_df, 2)[1] 300 3537
# Generate unique sample names (for downstream merging of results)
sample_names <- paste(
fourclone_data_df$Metadata_clone_number,
fourclone_data_df$Metadata_Plate,
fourclone_data_df$Metadata_Well,
fourclone_data_df$Metadata_batch,
sep = "_"
)
fourclone_data_df <- fourclone_data_df %>%
dplyr::mutate(Metadata_unique_sample_name = sample_names)_____no_output_____
</code>
## Apply `singscore`_____no_output_____
<code>
# Convert the four clone dataset into a feature x sample matrix without metadata
features_only_df <- t(fourclone_data_df %>% dplyr::select(!starts_with("Metadata_")))
# Apply the `rankGenes()` method to get feature rankings per feature for each sample
rankData <- rankGenes(features_only_df)
colnames(rankData) <- fourclone_data_df$Metadata_unique_sample_name
print(dim(rankData))
head(rankData, 3)[1] 3528 300
# Using the rank dataframe, up, and down features, get the sample scores
scoredf <- simpleScore(rankData, upSet = up_features, downSet = down_features)
# Merge scores with metadata features
full_result_df <- dplyr::bind_cols(
fourclone_data_df %>% dplyr::select(starts_with("Metadata_")),
scoredf
)
print(dim(full_result_df))
head(full_result_df, 2)[1] 300 16
</code>
## Perform Permutation Testing to Determine Significance of Observation_____no_output_____
<code>
# Generate a null distribution of scores by randomly shuffling ranks
permuteResult <- generateNull(
upSet = up_features,
downSet = down_features,
rankData = rankData,
centerScore = TRUE,
knownDirection = TRUE,
B = num_permutations,
seed = seed,
useBPPARAM = NULL
)
# Calculate p values and add to list
pvals <- getPvals(permuteResult, scoredf)
pval_tidy <- broom::tidy(pvals)
colnames(pval_tidy) <- c("names", "Metadata_permuted_p_value")
full_result_df <- full_result_df %>%
dplyr::left_join(
pval_tidy,
by = c("Metadata_unique_sample_name" = "names")
)Warning message:
“'tidy.numeric' is deprecated.
See help("Deprecated")”# Are there differences in quantiles across batch?
batch_info <- gsub("^.*_", "", rownames(t(permuteResult)))
batch_permute <- t(permuteResult) %>%
dplyr::as_tibble() %>%
dplyr::mutate(batch = batch_info)
permute_bounds <- list()
for (batch_id in unique(batch_permute$batch)) {
subset_permute <- batch_permute %>% dplyr::filter(batch == !!batch_id) %>% dplyr::select(!batch)
min_val <- quantile(as.vector(as.matrix(subset_permute)), 0.005)
max_val <- quantile(as.vector(as.matrix(subset_permute)), 0.995)
permute_bounds[[batch_id]] <- c(batch_id, min_val, max_val)
}
do.call(rbind, permute_bounds)Warning message:
“`as_tibble.matrix()` requires a matrix with column names or a `.name_repair` argument. Using compatibility `.name_repair`.
This warning is displayed once per session.”
</code>
## Visualize Results_____no_output_____
<code>
min_val <- quantile(as.vector(as.matrix(permuteResult)), 0.05)
max_val <- quantile(as.vector(as.matrix(permuteResult)), 0.95)_____no_output_____apply_psmb_signature_gg <- ggplot(full_result_df,
aes(y = TotalScore,
x = Metadata_clone_number)) +
geom_boxplot(aes(fill = Metadata_treatment), outlier.alpha = 0) +
geom_point(
aes(fill = Metadata_treatment, group = Metadata_treatment),
position = position_dodge(width=0.75),
size = 0.9,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Treatment",
labels = c("bortezomib" = "Bortezomib", "DMSO" = "DMSO"),
values = c("bortezomib" = "#9e0ba3", "DMSO" = "#fcba03")) +
theme_bw() +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_result_df$Metadata_clone_number)) + 1,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
xlab("") +
ylab("PSMB5 Signature Score") +
theme(axis.text.x = element_text(angle=90)) +
facet_wrap("Metadata_batch~Metadata_plate_ID", nrow=3) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone.png")
ggsave(output_fig, dpi = 500, height = 5, width = 10)
apply_psmb_signature_gg_____no_output_____summarized_mean_result_df <- full_result_df %>%
dplyr::group_by(
Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_treatment, Metadata_clone_type
) %>%
dplyr::mutate(mean_score = mean(TotalScore)) %>%
dplyr::select(
Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_clone_type, Metadata_treatment, mean_score
) %>%
dplyr::distinct() %>%
tidyr::spread(key = "Metadata_treatment", value = "mean_score") %>%
dplyr::mutate(treatment_score_diff = DMSO - bortezomib)
head(summarized_mean_result_df)_____no_output_____apply_psmb_signature_diff_gg <- ggplot(summarized_mean_result_df,
aes(y = treatment_score_diff,
x = Metadata_clone_number,
fill = Metadata_clone_type)) +
geom_boxplot(outlier.alpha = 0) +
geom_jitter(
width = 0.2,
size = 2,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "Wildtype"),
values = c("resistant" = "#9e0ba3", "wildtype" = "#fcba03")) +
theme_bw() +
xlab("") +
ylab("Difference PSMB5 Signature Score\nDMSO - Bortezomib") +
theme(axis.text.x = element_text(angle=90)) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone_difference.png")
ggsave(output_fig, dpi = 500, height = 4.5, width = 6)
apply_psmb_signature_diff_gg_____no_output_____
</code>
## Load Four Clone Signature (Generic Resistance)_____no_output_____
<code>
sig_file <- file.path("results", "fourclone_signature_tukey.tsv")
resistance_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols)
head(resistance_signature_scores, 2)_____no_output_____# Extract features that are up and down in the signature
up_resistance_features <- resistance_signature_scores %>%
dplyr::filter(estimate > 0) %>%
dplyr::pull(feature)
down_resistance_features <- resistance_signature_scores %>%
dplyr::filter(estimate < 0) %>%
dplyr::pull(feature)_____no_output_____
</code>
## Load Clone A/E Dataset_____no_output_____
<code>
# Do not load the feature selected data
profile_file <- file.path(profile_dir, "combined_cloneAcloneE_dataset.csv")
cloneae_cols <- readr::cols(
.default = readr::col_double(),
Metadata_CellLine = readr::col_character(),
Metadata_Plate = readr::col_character(),
Metadata_Well = readr::col_character(),
Metadata_batch = readr::col_character(),
Metadata_plate_map_name = readr::col_character(),
Metadata_clone_type = readr::col_character()
)
cloneAE_data_df <- readr::read_csv(profile_file, col_types = cloneae_cols)
print(dim(cloneAE_data_df))
head(cloneAE_data_df, 2)[1] 72 3535
# Generate unique sample names (for downstream merging of results)
cloneae_sample_names <- paste(
cloneAE_data_df$Metadata_CellLine,
cloneAE_data_df$Metadata_Plate,
cloneAE_data_df$Metadata_Well,
cloneAE_data_df$Metadata_batch,
sep = "_"
)
cloneAE_data_df <- cloneAE_data_df %>%
dplyr::mutate(Metadata_unique_sample_name = cloneae_sample_names)_____no_output_____
</code>
## Apply `singscore`_____no_output_____
<code>
# Convert the four clone dataset into a feature x sample matrix without metadata
features_only_res_df <- t(cloneAE_data_df %>% dplyr::select(!starts_with("Metadata_")))
# Apply the `rankGenes()` method to get feature rankings per feature for each sample
rankData_res <- rankGenes(features_only_res_df)
colnames(rankData_res) <- cloneAE_data_df$Metadata_unique_sample_name
print(dim(rankData_res))
head(rankData_res, 3)[1] 3528 72
# Using the rank dataframe, up, and down features, get the sample scores
scoredf_res <- simpleScore(rankData_res,
upSet = up_resistance_features,
downSet = down_resistance_features)
# Merge scores with metadata features
full_res_result_df <- dplyr::bind_cols(
cloneAE_data_df %>% dplyr::select(starts_with("Metadata_")),
scoredf_res
)
print(dim(full_res_result_df))
head(full_res_result_df, 2)[1] 72 14
</code>
## Perform Permutation Testing_____no_output_____
<code>
# Generate a null distribution of scores by randomly shuffling ranks
permuteResult_res <- generateNull(
upSet = up_resistance_features,
downSet = down_resistance_features,
rankData = rankData_res,
centerScore = TRUE,
knownDirection = TRUE,
B = num_permutations,
seed = seed,
useBPPARAM = NULL
)
# Calculate p values and add to list
pvals_res <- getPvals(permuteResult_res, scoredf_res)
pval_res_tidy <- broom::tidy(pvals_res)
colnames(pval_res_tidy) <- c("names", "Metadata_permuted_p_value")
full_res_result_df <- full_res_result_df %>%
dplyr::left_join(
pval_res_tidy,
by = c("Metadata_unique_sample_name" = "names")
)Warning message:
“'tidy.numeric' is deprecated.
See help("Deprecated")”
</code>
## Visualize Signature Results_____no_output_____
<code>
min_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.05)
max_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.95)_____no_output_____append_dose <- function(string) paste0("Dose: ", string, "nM")
apply_res_signature_gg <- ggplot(full_res_result_df,
aes(y = TotalScore,
x = Metadata_CellLine)) +
geom_boxplot(aes(fill = Metadata_clone_type), outlier.alpha = 0) +
geom_point(
aes(fill = Metadata_clone_type, group = Metadata_clone_type),
position = position_dodge(width=0.75),
size = 0.9,
alpha = 0.7,
shape = 21) +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "WildType"),
values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) +
theme_bw() +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 1,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
xlab("") +
ylab("Generic Resistance Signature Score") +
theme(axis.text.x = element_text(angle=90)) +
facet_grid("Metadata_Dosage~Metadata_batch",
labeller = labeller(Metadata_Dosage = as_labeller(append_dose))) +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE.png")
ggsave(output_fig, dpi = 500, height = 5, width = 5)
apply_res_signature_gg_____no_output_____full_res_result_df$Metadata_Dosage <- factor(
full_res_result_df$Metadata_Dosage, levels = unique(sort(full_res_result_df$Metadata_Dosage))
)
full_res_result_df <- full_res_result_df %>%
dplyr::mutate(Metadata_group = paste0(Metadata_batch, Metadata_CellLine))_____no_output_____ggplot(full_res_result_df, aes(x = Metadata_Dosage, y = TotalScore, color = Metadata_CellLine, group = Metadata_group)) +
geom_point(size = 1) +
geom_smooth(aes(fill = Metadata_clone_type), method = "loess", lwd = 0.5) +
facet_wrap("~Metadata_batch", nrow = 2) +
theme_bw() +
scale_fill_manual(name = "Clone Type",
labels = c("resistant" = "Resistant", "wildtype" = "WildType"),
values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) +
ylab("Generic Resistance Signature Score") +
annotate("rect", ymin = min_val,
ymax = max_val,
xmin = 0,
xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 2,
alpha = 0.2,
color = "red",
linetype = "dashed",
fill = "grey") +
theme(strip.text = element_text(size = 8, color = "black"),
strip.background = element_rect(colour = "black", fill = "#fdfff4"))
output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE_xaxis_dosage.png")
ggsave(output_fig, dpi = 500, height = 5, width = 5)Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“pseudoinverse used at 0.985”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“neighborhood radius 2.015”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“reciprocal condition number 4.2401e-17”Warning message in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :
“There are other near singularities as well. 4.0602”
</code>
| {
"repository": "DavidStirling/profiling-resistance-mechanisms",
"path": "3.feature-differences/1.apply-signatures.ipynb",
"matched_keywords": [
"Bioconductor"
],
"stars": null,
"size": 701404,
"hexsha": "d07ab040c7419cee18e41e5fbd1187b380f64381",
"max_line_length": 189448,
"avg_line_length": 445.6188055909,
"alphanum_fraction": 0.9000233817
} |
# Notebook from nextstrain/seasonal-cov
Path: data-wrangling/.ipynb_checkpoints/make_s1_s2_rdrp_reference-checkpoint.ipynb
<code>
import re
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import IUPAC
from Bio.SeqFeature import SeqFeature, FeatureLocation_____no_output_____#first 6 aas of each domain
#from uniprot: NL63 (Q6Q1S2), 229e(P15423), oc43 (P36334), hku1 (Q0ZME7)
#nl63 s1 domain definition: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2693060/
s1_domains = {'nl63': 'FFTCNS', '229e': 'CQTTNG', 'oc43': 'AVIGDL', 'hku1': 'AVIGDF'}
s2_domains = {'nl63': 'SSDNGI', '229e': 'IIAVQP', 'oc43': 'AITTGY', 'hku1': 'SISASY'}
rdrp_domains_start = {'oc43': 'SKDTNF'}
rdrp_domains_end = {'oc43': 'RSAVMQ'}_____no_output_____def write_gene_reference(gene_seq, gene_id, gene_name, gene_description, cov_type, outfile):
gene_record = SeqRecord(gene_seq, id= gene_id,
name= gene_name,
description= gene_description)
source_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='source',
                                qualifiers={'organism':cov_type, "mol_type":"genomic RNA"})
gene_record.features.append(source_feature)
cds_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='CDS', qualifiers={'translation':gene_seq.translate()})
gene_record.features.append(cds_feature)
SeqIO.write(gene_record, outfile, 'genbank')_____no_output_____def make_s1_s2_reference(cov):
spike_reference = '../'+str(cov)+'/config/'+str(cov)+'_spike_reference.gb'
with open(spike_reference, "r") as handle:
for record in SeqIO.parse(handle, "genbank"):
nt_seq = record.seq
aa_seq = record.seq.translate()
s1_regex = re.compile(f'{s1_domains[cov]}.*(?={s2_domains[cov]})')
s1_aa = s1_regex.search(str(aa_seq)).group()
s1_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(s1_regex, str(aa_seq))][0]
s1_nt_coords = [s1_aa_coords[0]*3, s1_aa_coords[1]*3]
s1_nt_seq = nt_seq[s1_nt_coords[0]: s1_nt_coords[1]]
s2_regex = re.compile(f'{s2_domains[cov]}.*')
s2_aa = s2_regex.search(str(aa_seq)).group()
s2_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(s2_regex, str(aa_seq))][0]
s2_nt_coords = [s2_aa_coords[0]*3, s2_aa_coords[1]*3]
s2_nt_seq = nt_seq[s2_nt_coords[0]: s2_nt_coords[1]]
write_gene_reference(s1_nt_seq, record.id, str(cov)+'_S1', 'spike s1 subdomain',
cov, '../'+str(cov)+'/config/'+str(cov)+'_s1_reference.gb')
write_gene_reference(s2_nt_seq, record.id, str(cov)+'_S2', 'spike s2 subdomain',
cov, '../'+str(cov)+'/config/'+str(cov)+'_s2_reference.gb')_____no_output_____# covs = ['oc43', '229e', 'nl63', 'hku1']
covs = ['229e']
for cov in covs:
make_s1_s2_reference(cov)_____no_output_____def make_rdrp_reference(cov):
replicase_reference = '../'+str(cov)+'/config/'+str(cov)+'_replicase1ab_reference.gb'
with open(replicase_reference, "r") as handle:
for record in SeqIO.parse(handle, "genbank"):
nt_seq = record.seq
aa_seq = record.seq.translate()
rdrp_regex = re.compile(f'{rdrp_domains_start[cov]}.*{rdrp_domains_end[cov]}')
rdrp_aa = rdrp_regex.search(str(aa_seq)).group()
rdrp_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(rdrp_regex, str(aa_seq))][0]
rdrp_nt_coords = [rdrp_aa_coords[0]*3, rdrp_aa_coords[1]*3]
rdrp_nt_seq = nt_seq[rdrp_nt_coords[0]: rdrp_nt_coords[1]]
write_gene_reference(rdrp_nt_seq, record.id, str(cov)+'_rdrp', 'rna-dependent rna polymerase',
cov, '../'+str(cov)+'/config/'+str(cov)+'_rdrp_reference.gb')_____no_output_____
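# Hypothetical usage (not in the original notebook): the RdRp domain anchors
# above are only defined for 'oc43', so we would build an RdRp reference for
# that virus only. This assumes the replicase reference file expected by
# make_rdrp_reference ('../oc43/config/oc43_replicase1ab_reference.gb') exists.
for cov in rdrp_domains_start:
    make_rdrp_reference(cov)_____no_output_____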
</code>
| {
"repository": "nextstrain/seasonal-cov",
"path": "data-wrangling/.ipynb_checkpoints/make_s1_s2_rdrp_reference-checkpoint.ipynb",
"matched_keywords": [
"RNA"
],
"stars": 4,
"size": 5469,
"hexsha": "d07b3afc6a6e0d394dd4a6e6e8d0425c1904dc97",
"max_line_length": 133,
"avg_line_length": 38.2447552448,
"alphanum_fraction": 0.5381239715
} |
# Notebook from vitutorial/exercises
Path: LatentFactorModel/LatentFactorModel.ipynb
<a href="https://colab.research.google.com/github/vitutorial/exercises/blob/master/LatentFactorModel/LatentFactorModel.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>_____no_output_____
<code>
%matplotlib inline
import os
import re
import urllib.request
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import itertools
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")_____no_output_____
</code>
In this notebook you will work with a deep generative language model that maps words from a discrete (bit-vector-valued) latent space. We will use text data (we will work on the character level) in Spanish and pytorch.
The first section concerns data manipulation and data loading classes necessary for our implementation. You do not need to modify anything in this part of the code._____no_output_____Let's first download the SIGMORPHON dataset that we will be using for this notebook: these are inflected Spanish words together with some morphosyntactic descriptors. For this notebook we will ignore the morphosyntactic descriptors._____no_output_____
<code>
url = "https://raw.githubusercontent.com/ryancotterell/sigmorphon2016/master/data/"
train_file = "spanish-task1-train"
val_file = "spanish-task1-dev"
test_file = "spanish-task1-test"
print("Downloading data files...")
if not os.path.isfile(train_file):
urllib.request.urlretrieve(url + train_file, filename=train_file)
if not os.path.isfile(val_file):
urllib.request.urlretrieve(url + val_file, filename=val_file)
if not os.path.isfile(test_file):
urllib.request.urlretrieve(url + test_file, filename=test_file)
print("Download complete.")Downloading data files...
Download complete.
</code>
# Data
In order to work with text data, we need to transform the text into something that our algorithms can work with. The first step of this process is converting words into word ids. We do this by constructing a vocabulary from the data, assigning a new word id to each new word it encounters._____no_output_____
<code>
UNK_TOKEN = "?"
PAD_TOKEN = "_"
SOW_TOKEN = ">"
EOW_TOKEN = "."
def extract_inflected_word(s):
"""
Extracts the inflected words in the SIGMORPHON dataset.
"""
return s.split()[-1]
class Vocabulary:
def __init__(self):
self.idx_to_char = {0: UNK_TOKEN, 1: PAD_TOKEN, 2: SOW_TOKEN, 3: EOW_TOKEN}
self.char_to_idx = {UNK_TOKEN: 0, PAD_TOKEN: 1, SOW_TOKEN: 2, EOW_TOKEN: 3}
self.word_freqs = {}
def __getitem__(self, key):
return self.char_to_idx[key] if key in self.char_to_idx else self.char_to_idx[UNK_TOKEN]
def word(self, idx):
return self.idx_to_char[idx]
def size(self):
return len(self.char_to_idx)
@staticmethod
def from_data(filenames):
"""
Creates a vocabulary from a list of data files. It assumes that the data files have been
tokenized and pre-processed beforehand.
"""
vocab = Vocabulary()
for filename in filenames:
with open(filename) as f:
for line in f:
# Strip whitespace and the newline symbol.
word = extract_inflected_word(line.strip())
# Split the words into characters and assign ids to each
# new character it encounters.
for char in list(word):
if char not in vocab.char_to_idx:
idx = len(vocab.char_to_idx)
vocab.char_to_idx[char] = idx
vocab.idx_to_char[idx] = char
return vocab_____no_output_____# Construct a vocabulary from the training and validation data.
print("Constructing vocabulary...")
vocab = Vocabulary.from_data([train_file, val_file])
print("Constructed a vocabulary of %d types" % vocab.size())Constructing vocabulary...
Constructed a vocabulary of 37 types
# some examples
print('e', vocab['e'])
print('é', vocab['é'])
print('ș', vocab['ș']) # something UNKNOWNe 8
é 24
ș 0
</code>
We also need to load the data files into memory. We create a simple class `TextDataset` that stores the data as a list of words:_____no_output_____
<code>
class TextDataset(Dataset):
"""
A simple class that loads a list of words into memory from a text file,
split by newlines. This does not do any memory optimisation,
so if your dataset is very large, you might want to use an alternative
class.
"""
def __init__(self, text_file, max_len=30):
self.data = []
with open(text_file) as f:
for line in f:
word = extract_inflected_word(line.strip())
if len(list(word)) <= max_len:
self.data.append(word)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx]_____no_output_____# Load the training, validation, and test datasets into memory.
train_dataset = TextDataset(train_file)
val_dataset = TextDataset(val_file)
test_dataset = TextDataset(test_file)
# Print some samples from the data:
print("Sample from training data: \"%s\"" % train_dataset[np.random.choice(len(train_dataset))])
print("Sample from validation data: \"%s\"" % val_dataset[np.random.choice(len(val_dataset))])
print("Sample from test data: \"%s\"" % test_dataset[np.random.choice(len(test_dataset))])Sample from training data: "compiláramos"
Sample from validation data: "debutara"
Sample from test data: "paginabas"
</code>
Now it's time to write a function that converts a word into a list of character ids using the vocabulary we created before. This function is `create_batch` in the code cell below. This function creates a batch from a list of words, and makes sure that each word starts with a start-of-word symbol and ends with an end-of-word symbol. Because not all words are of equal length in a certain batch, words are padded with padding symbols so that they match the length of the largest word in the batch. The function returns an input batch, an output batch, a mask of 1s for words and 0s for padding symbols, and the sequence lengths of each word in the batch. The output batch is shifted by one character, to reflect the predictions that the model is expected to make. For example, for a word
\begin{align}
\text{e s p e s e m o s}
\end{align}
the input sequence is
\begin{align}
\text{SOW e s p e s e m o s}
\end{align}
and the output sequence is
\begin{align}
\text{e s p e s e m o s EOW}
\end{align}
You can see the output is shifted wrt the input, that's because we will be computing a distribution for the next character in context of its prefix, and that's why we need to shift the sequence this way.
Lastly, we create an inverse function `batch_to_words` that recovers the list of words from a padded batch of character ids to use during test time._____no_output_____
<code>
def create_batch(words, vocab, device, word_dropout=0.):
"""
Converts a list of words to a padded batch of word ids. Returns
an input batch, an output batch shifted by one, a sequence mask over
the input batch, and a tensor containing the sequence length of each
batch element.
:param words: a list of words, each a list of token ids
:param vocab: a Vocabulary object for this dataset
:param device:
:param word_dropout: rate at which we omit words from the context (input)
:returns: a batch of padded inputs, a batch of padded outputs, mask, lengths
"""
tok = np.array([[SOW_TOKEN] + list(w) + [EOW_TOKEN] for w in words])
seq_lengths = [len(w)-1 for w in tok]
max_len = max(seq_lengths)
pad_id = vocab[PAD_TOKEN]
pad_id_input = [
[vocab[w[t]] if t < seq_lengths[idx] else pad_id for t in range(max_len)]
for idx, w in enumerate(tok)]
# Replace words of the input with <unk> with p = word_dropout.
if word_dropout > 0.:
unk_id = vocab[UNK_TOKEN]
word_drop = [
[unk_id if (np.random.random() < word_dropout and t < seq_lengths[idx]) else word_ids[t] for t in range(max_len)]
            for idx, word_ids in enumerate(pad_id_input)]
        # Use the dropped-out version as the input batch.
        pad_id_input = word_drop
# The output batch is shifted by 1.
pad_id_output = [
[vocab[w[t+1]] if t < seq_lengths[idx] else pad_id for t in range(max_len)]
for idx, w in enumerate(tok)]
# Convert everything to PyTorch tensors.
batch_input = torch.tensor(pad_id_input)
batch_output = torch.tensor(pad_id_output)
seq_mask = (batch_input != vocab[PAD_TOKEN])
seq_length = torch.tensor(seq_lengths)
# Move all tensors to the given device.
batch_input = batch_input.to(device)
batch_output = batch_output.to(device)
seq_mask = seq_mask.to(device)
seq_length = seq_length.to(device)
return batch_input, batch_output, seq_mask, seq_length
def batch_to_words(tensors, vocab: Vocabulary):
"""
Converts a batch of word ids back to words.
:param tensors: [B, T] word ids
:param vocab: a Vocabulary object for this dataset
:returns: an array of strings (each a word).
"""
words = []
batch_size = tensors.size(0)
for idx in range(batch_size):
word = [vocab.word(t.item()) for t in tensors[idx,:]]
# Filter out the start-of-word and padding tokens.
word = list(filter(lambda t: t != PAD_TOKEN and t != SOW_TOKEN, word))
# Remove the end-of-word token and all tokens following it.
if EOW_TOKEN in word:
word = word[:word.index(EOW_TOKEN)]
words.append("".join(word))
return np.array(words)_____no_output_____
</code>
In PyTorch the RNN functions expect inputs to be sorted from long words to shorter ones. Therefore we create a simple wrapper class for the DataLoader class that sorts words from long to short: _____no_output_____
<code>
class SortingTextDataLoader:
"""
A wrapper for the DataLoader class that sorts a list of words by their
lengths in descending order.
"""
def __init__(self, dataloader):
self.dataloader = dataloader
self.it = iter(dataloader)
def __iter__(self):
return self
def __next__(self):
words = None
for s in self.it:
words = s
break
if words is None:
self.it = iter(self.dataloader)
raise StopIteration
words = np.array(words)
sort_keys = sorted(range(len(words)),
key=lambda idx: len(list(words[idx])),
reverse=True)
sorted_words = words[sort_keys]
return sorted_words_____no_output_____
</code>
# Model
## Deterministic language model
In language modelling, we model a word $x = \langle x_1, \ldots, x_n \rangle$ of length $n = |x|$ as a sequence of categorical draws:
\begin{align}
X_i|x_{<i} & \sim \text{Cat}(f(x_{<i}; \theta))
& i = 1, \ldots, n \\
\end{align}
where we use $x_{<i}$ to denote a (possibly empty) prefix string, and thus the model makes no Markov assumption. We map from the conditioning context, the prefix $x_{<i}$, to the categorical parameters (a $v$-dimensional probability vector, where $v$ denotes the size of the vocabulary, in this case, the size of the character set) using a fixed neural network architecture whose parameters we collectively denote by $\theta$.
This assigns the following likelihood to the word
\begin{align}
P(x|\theta) &= \prod_{i=1}^n P(x_i|x_{<i}, \theta) \\
&= \prod_{i=1}^n \text{Cat}(x_i|f(x_{<i}; \theta))
\end{align}
where the categorical pmf is $\text{Cat}(k|\pi) = \prod_{j=1}^v \pi_j^{[k=j]} = \pi_k$.
Suppose we have a dataset $\mathcal D = \{x^{(1)}, \ldots, x^{(N)}\}$ containing $N$ i.i.d. observations. Then we can use the log-likelihood function
\begin{align}
\mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{N} \log P(x^{(k)}| \theta) \\
&= \sum_{k=1}^{N} \sum_{i=1}^{|x^{(k)}|} \log \text{Cat}(x^{(k)}_i|f(x^{(k)}_{<i}; \theta))
\end{align}
to estimate $\theta$ by maximisation:
\begin{align}
\theta^\star = \arg\max_{\theta \in \Theta} \mathcal L(\theta|\mathcal D) ~ .
\end{align}
We can use stochastic gradient-ascent to find a local optimum of $\mathcal L(\theta|\mathcal D)$, which only requires a gradient estimate:
\begin{align}
\nabla_\theta \mathcal L(\theta|\mathcal D) &= \sum_{k=1}^{|\mathcal D|} \nabla_\theta \log P(x^{(k)}|\theta) \\
&= \sum_{k=1}^{|\mathcal D|} \frac{1}{N} N \nabla_\theta \log P(x^{(k)}| \theta) \\
&= \mathbb E_{\mathcal U(1/N)} \left[ N \nabla_\theta \log P(x^{(K)}| \theta) \right] \\
&\overset{\text{MC}}{\approx} \frac{N}{M} \sum_{m=1}^M \nabla_\theta \log P(x^{(k_m)}|\theta) \\
&\text{where }K_m \sim \mathcal U(1/N)
\end{align}
This is a Monte Carlo (MC) estimate of the gradient computed on $M$ data points selected uniformly at random from $\mathcal D$.
For as long as $f$ remains differentiable wrt its inputs and parameters, we can rely on automatic differentiation to obtain gradient estimates.
An example design for $f$ is:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\
\mathbf h_0 &= \mathbf 0 \\
\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\
f(x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))
\end{align}
where
* $\text{emb}$ is a fixed embedding layer with parameters $\theta_{\text{emb}}$;
* $\text{rnn}$ is a recurrent architecture with parameters $\theta_{\text{rnn}}$, e.g. an LSTM or GRU, and $\mathbf h_0$ is part of the architecture's parameters;
* $\text{dense}_v$ is a dense layer with $v$ outputs (vocabulary size) and parameters $\theta_{\text{out}}$.
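The block below is a minimal, hypothetical PyTorch sketch of this deterministic design (the class name `DeterministicCharLM` and the layer sizes are illustrative and not used elsewhere in this notebook):
<code>
class DeterministicCharLM(nn.Module):
    """Sketch of the design above: emb -> rnn -> dense_v over the vocabulary."""

    def __init__(self, vocab_size, emb_size, hidden_size, pad_idx):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size, padding_idx=pad_idx)
        self.rnn = nn.GRU(emb_size, hidden_size, batch_first=True)
        self.dense_v = nn.Linear(hidden_size, vocab_size)

    def forward(self, x_in):
        # x_in: [B, T] prefix token ids, starting with the start-of-word symbol.
        h, _ = self.rnn(self.emb(x_in))  # [B, T, H]
        return self.dense_v(h)           # [B, T, V] logits parameterising Cat(f(x_{<i}; theta))
</code>
Training would maximise the log-likelihood of the shifted output batch under these categorical distributions (for instance with `F.cross_entropy` on reshaped logits, using `ignore_index` for the padding positions).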
In what follows we show how to extend this model with a latent word embedding, in our case a discrete (bit-vector-valued) code._____no_output_____## Deep generative language model
We want to model a word $x$ as a draw from the marginal of deep generative model $P(z, x|\theta, \alpha) = P(z|\alpha)P(x|z, \theta)$.
### Generative model
The generative story is:
\begin{align}
Z_k & \sim \text{Bernoulli}(\alpha_k) & k=1,\ldots, K \\
X_i | z, x_{<i} &\sim \text{Cat}(f(z, x_{<i}; \theta)) & i=1, \ldots, n
\end{align}
where $z \in \{0,1\}^K$ and we impose a product of independent Bernoulli distributions as the prior. Other choices of prior can induce interesting properties in latent space, for example, the Bernoullis could be correlated; in this notebook, however, we use independent distributions.
**About the prior parameter** The parameter of the $k$th Bernoulli distribution is the probability that the $k$th bit in $z$ is set to $1$, and therefore, if we have reasons to believe some bits are more frequent than others (for example, because we expect some bits to capture verb attributes and others to capture noun attributes, and we know nouns are more frequent than verbs) we may be able to have a good guess at $\alpha_k$ for different $k$, otherwise, we may simply say that bits are about as likely to be on or off a priori, thus setting $\alpha_k = 0.5$ for every $k$. In this lab, we will treat the prior parameter ($\alpha$) as *fixed*.
**Architecture** It is easy to design $f$ by a simple modification of the deterministic design shown before:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \theta_{\text{emb}}) \\
\mathbf h_0 &= \tanh(\text{dense}(z; \theta_{\text{init}})) \\
\mathbf h_i &= \text{rnn}(\mathbf h_{i-1}, \mathbf x_{i-1}; \theta_{\text{rnn}}) \\
f(x_{<i}; \theta) &= \text{softmax}(\text{dense}_v(\mathbf h_{i}; \theta_{\text{out}}))
\end{align}
where we just initialise the recurrent cell using $z$. Note we could also use $z$ in other places, for example, as additional input to every update of the recurrent cell $\mathbf h_i = \text{rnn}(\mathbf h_{i-1}, [\mathbf x_{i-1}, z])$. This is an architecture choice which like many others can only be judged empirically or on the basis of practical convenience.
_____no_output_____### Parameter estimation
The marginal likelihood, necessary for parameter estimation, is now no longer tractable:
\begin{align}
P(x|\theta, \alpha) &= \sum_{z \in \{0,1\}^K} P(z|\alpha)P(x|z, \theta) \\
&= \sum_{z \in \{0,1\}^K} \prod_{k=1}^K \text{Bernoulli}(z_k|\alpha_k)\prod_{i=1}^n \text{Cat}(x_i|f(z,x_{<i}; \theta) )
\end{align}
the intractability is clear as there is an exponential number of assignments to $z$, namely, $2^K$.
We turn to variational inference and derive a lowerbound $\mathcal E(\theta, \lambda|\mathcal D)$ on the log-likelihood function
\begin{align}
\mathcal E(\theta, \lambda|\mathcal D) &= \sum_{s=1}^{|\mathcal D|} \mathcal E_s(\theta, \lambda|x^{(s)})
\end{align}
which for a single datapoint $x$ is
\begin{align}
\mathcal E(\theta, \lambda|x) &= \mathbb{E}_{Q(z|x, \lambda)}\left[\log P(x|z, \theta)\right] - \text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right)\\
\end{align}
where we have introduced an independently parameterised auxiliary distribution $Q(z|x, \lambda)$. The distribution $Q$ which maximises this *evidence lowerbound* (ELBO) is also the distribution that minimises
\begin{align}
\text{KL}(Q(z|x, \lambda)||P(z|x, \theta, \alpha)) = \mathbb E_{Q(z|x, \lambda)}\left[\log \frac{Q(z|x, \lambda)}{P(z|x, \theta, \alpha)}\right]
\end{align}
where $P(z|x, \theta, \alpha) = \frac{P(x, z|\theta, \alpha)}{P(x|\theta, \alpha)}$ is our intractable true posterior. For that reason, we think of $Q(z|x, \lambda)$ as an *approximate posterior*.
The approximate posterior is an independent model of the latent variable given the data, for that reason we also call it an *inference model*.
In this notebook, our inference model will be a product of independent Bernoulli distributions, which covers the full sample space of our latent variable. Modelling correlations between the bits (thus achieving *structured*, rather than mean field, inference) is left as an optional exercise at the end of the notebook. Such a mean field (MF) approximation takes $K$ Bernoulli variational factors whose parameters we predict with a neural network:
\begin{align}
Q(z|x, \lambda) &= \prod_{k=1}^K \text{Bernoulli}(z_k|\beta_k(x; \lambda))
\end{align}
Note we compute a *fixed* number, namely, $K$, of Bernoulli parameters. This can be done with a neural network that outputs $K$ values and employs a sigmoid activation for the outputs.
For this choice, the KL term in the ELBO is tractable:
\begin{align}
\text{KL}\left(Q(z|x, \lambda)||P(z|\alpha)\right) &= \sum_{k=1}^K \text{KL}\left(Q(z_k|x, \lambda)||P(z_k|\alpha_k)\right) \\
&= \sum_{k=1}^K \text{KL}\left(\text{Bernoulli}(\beta_k(x;\lambda))|| \text{Bernoulli}(\alpha_k)\right) \\
&= \sum_{k=1}^K \beta_k(x;\lambda) \log \frac{\beta_k(x;\lambda)}{\alpha_k} + (1-\beta_k(x;\lambda)) \log \frac{1-\beta_k(x;\lambda)}{1-\alpha_k}
\end{align}
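As a quick sanity check of this closed form (a hypothetical snippet, not part of the exercises below), we can compare it against `torch.distributions` for arbitrary values of $\beta$ and $\alpha$:
<code>
from torch.distributions import Bernoulli, kl_divergence

beta = torch.tensor([0.9, 0.2, 0.5])   # example posterior parameters beta_k(x; lambda)
alpha = torch.tensor([0.5, 0.5, 0.5])  # prior parameters alpha_k

# Closed form from the equation above, summed over the K independent factors.
closed_form = (beta * torch.log(beta / alpha)
               + (1. - beta) * torch.log((1. - beta) / (1. - alpha))).sum()

# Reference value from torch.distributions.
reference = kl_divergence(Bernoulli(probs=beta), Bernoulli(probs=alpha)).sum()

print(torch.allclose(closed_form, reference))  # expected: True
</code>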
Here's an example design for our inference model:
\begin{align}
\mathbf x_i &= \text{emb}(x_i; \lambda_{\text{emb}}) \\
\mathbf f_i &= \text{rnn}(\mathbf f_{i-1}, \mathbf x_{i}; \lambda_{\text{fwd}}) \\
\mathbf b_i &= \text{rnn}(\mathbf b_{i+1}, \mathbf x_{i}; \lambda_{\text{bwd}}) \\
\mathbf h &= \text{dense}([\mathbf f_{n}, \mathbf b_1]; \lambda_{\text{hid}}) \\
\beta(x; \lambda) &= \text{sigmoid}(\text{dense}_K(\mathbf h; \lambda_{\text{out}}))
\end{align}
where we use the $\text{sigmoid}$ activation to make sure our probabilities are independently set between $0$ and $1$.
Because we have neural networks compute the Bernoulli variational factors for us, we call this *amortised* mean field inference.
_____no_output_____### Gradient estimation
We have to obtain gradients of the ELBO with respect to $\theta$ (generative model) and $\lambda$ (inference model). Recall we will leave $\alpha$ fixed.
For the **generative model**
\begin{align}
\nabla_\theta \mathcal E(\theta, \lambda|x) &=\nabla_\theta\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \underbrace{\nabla_\theta \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{\color{blue}{0}} \\
&=\sum_{z} Q(z|x, \lambda)\nabla_\theta\log P(x|z,\theta) \\
&= \mathbb E_{Q(z|x, \lambda)}\left[\nabla_\theta\log P(x|z,\theta) \right] \\
&\overset{\text{MC}}{\approx} \frac{1}{S} \sum_{s=1}^S \nabla_\theta \log P(x|z^{(s)}, \theta)
\end{align}
where $z^{(s)} \sim Q(z|x,\lambda)$.
Note there is no difficulty in obtaining gradient estimates precisely because the samples come from the inference model and therefore do not interfere with backpropagation for updates to $\theta$.
For the **inference model** the story is less straightforward, and we have to use the *score function estimator* (a.k.a. REINFORCE):
\begin{align}
\nabla_\lambda \mathcal E(\theta, \lambda|x) &=\nabla_\lambda\sum_{z} Q(z|x, \lambda)\log P(x|z,\theta) - \nabla_\lambda \underbrace{\sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))}_{ \color{blue}{\text{tractable} }} \\
&=\sum_{z} \nabla_\lambda Q(z|x, \lambda)\log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&=\sum_{z} \underbrace{Q(z|x, \lambda) \nabla_\lambda \log Q(z|x, \lambda)}_{\nabla_\lambda Q(z|x, \lambda)} \log P(x|z,\theta) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&= \mathbb E_{Q(z|x, \lambda)}\left[ \log P(x|z,\theta) \nabla_\lambda \log Q(z|x, \lambda) \right] - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k)) \\
&\overset{\text{MC}}{\approx} \left(\frac{1}{S} \sum_{s=1}^S \log P(x|z^{(s)}, \theta) \nabla_\lambda \log Q(z^{(s)}|x, \lambda) \right) - \sum_{k=1}^K \nabla_\lambda \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
where $z^{(s)} \sim Q(z|x,\lambda)$.
_____no_output_____## Implementation
Let's implement the model and the loss (negative ELBO). We work with the notion of a *surrogate loss*, that is, a computation node whose gradients wrt the parameters are equivalent to the gradients we need.
For a given sample $z \sim Q(z|x, \lambda)$, the following is a single-sample surrogate loss:
\begin{align}
\mathcal S(\theta, \lambda|x) = \log P(x|z, \theta) + \color{red}{\text{detach}(\log P(x|z, \theta) )}\log Q(z|x, \lambda) - \sum_{k=1}^K \text{KL}(Q(z_k|x, \lambda) || P(z_k|\alpha_k))
\end{align}
Check the documentation of pytorch's `detach` method.
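For intuition, here is a tiny, hypothetical illustration of what `detach` does: the detached factor is treated as a constant by backpropagation, so no gradient flows into the parameters that produced it.
<code>
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)

# Without detach: gradients flow into both a and b.
(a * b).backward()
print(a.grad, b.grad)  # tensor(3.), tensor(2.)

a.grad, b.grad = None, None

# With detach: a is treated as a constant, so only b receives a gradient.
(a.detach() * b).backward()
print(a.grad, b.grad)  # None, tensor(2.)
</code>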
Show that its gradients wrt $\theta$ and $\lambda$ are exactly what we need:
_____no_output_____\begin{align}
\nabla_\theta \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}
\begin{align}
\nabla_\lambda \mathcal S(\theta, \lambda|x) = \color{red}{?}
\end{align}_____no_output_____Let's now turn to the actual implementation in pytorch of the inference model as well as the generative model.
Here and there we will provide helper code for you._____no_output_____
<code>
def bernoulli_log_probs_from_logits(logits):
"""
Let p be the Bernoulli parameter and q = 1 - p.
This function is a stable computation of p and q from logit = log(p/q).
:param logit: log (p/q)
:return: log_p, log_q
"""
return - F.softplus(-logits), - F.softplus(logits)_____no_output_____
</code>
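The identities $-\text{softplus}(-\ell) = \log \sigma(\ell)$ and $-\text{softplus}(\ell) = \log(1 - \sigma(\ell))$ are what make this helper numerically stable; a quick (hypothetical) check:
<code>
logits = torch.tensor([-5., -1., 0., 1., 5.])
log_p, log_q = bernoulli_log_probs_from_logits(logits)

# For moderate logits this agrees with the direct route through sigmoid.
p = torch.sigmoid(logits)
print(torch.allclose(log_p, torch.log(p)))       # expected: True
print(torch.allclose(log_q, torch.log(1. - p)))  # expected: True

# For extreme logits the direct route underflows while the softplus form stays finite.
extreme = torch.tensor(40.)
print(torch.log(1. - torch.sigmoid(extreme)))       # tensor(-inf)
print(bernoulli_log_probs_from_logits(extreme)[1])  # tensor(-40.)
</code>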
We start with the implementation of a product of Bernoulli distributions where the parameters are *given* at construction time. That is, for some vector $b_1, \ldots, b_K$ we have
\begin{equation}
Z_k \sim \text{Bernoulli}(b_k)
\end{equation}
and thus the joint probability of $z_1, \ldots, z_K$ is given by $\prod_{k=1}^K \text{Bernoulli}(z_k|b_k)$._____no_output_____
<code>
class ProductOfBernoullis:
"""
    This class models a product of independent Bernoulli distributions.
Each product of Bernoulli is defined by a D-dimensional vector of logits
for each independent Bernoulli variable.
"""
def __init__(self, logits):
"""
        :param logits: a tensor of D Bernoulli parameters (logits) for each batch element. [B, D]
"""
pass
def mean(self):
"""For Bernoulli variables this is the probability of each Bernoulli being 1."""
return None
    def std(self):
        """For Bernoulli variables the standard deviation is sqrt(p*(1-p)),
        where p is the probability of the Bernoulli being 1."""
        return torch.sqrt(self.probs * (1.0 - self.probs))
def sample(self):
"""
Returns a sample with the shape of the Bernoulli parameter. # [B, D]
"""
return None
def log_prob(self, x):
"""
Assess the log probability mass of x.
:param x: a tensor of Bernoulli samples (same shape as the Bernoulli parameter) [B, D]
        :returns: tensor of log probability masses [B]
"""
return None
def unstable_kl(self, other: 'Bernoulli'):
"""
The straightforward implementation of the KL between two Bernoullis.
This implementation is unstable, a stable implementation is provided in
ProductOfBernoullis.kl(self, q)
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
return None
def kl(self, other: 'Bernoulli'):
"""
A stable implementation of the KL divergence between two Bernoulli variables.
:returns: a tensor of KL values with the same shape as the parameters of self.
"""
return None_____no_output_____
</code>
Then we should implement the inference model $Q(z | x, \lambda)$, that is, a module that uses a neural network to map from a data point $x$ to the parameters of a product of Bernoullis.
You might want to consult the documentation of
* `torch.nn.Embedding`
* `torch.nn.LSTM`
* `torch.nn.Linear`
* and of our own `ProductOfBernoullis` distribution (see above)._____no_output_____
<code>
class InferenceModel(nn.Module):
def __init__(self, vocab_size, embedder, hidden_size,
latent_size, pad_idx, bidirectional=False):
"""
Implement the layers in the inference model.
:param vocab_size: size of the vocabulary of the language
:param embedder: embedding layer
:param hidden_size: size of recurrent cell
:param latent_size: size K of the latent variable
:param pad_idx: id of the -PAD- token
:param bidirectional: whether we condition on x via a bidirectional or
unidirectional encoder
"""
super().__init__() # pytorch modules should always start with this
pass
# Construct your NN blocks here
# and make sure every block is an attribute of self
# or they won't get initialised properly
# for example, self.my_linear_layer = torch.nn.Linear(...)
def forward(self, x, seq_mask, seq_len) -> ProductOfBernoullis:
"""
Return an inference product of Bernoullis per instance in the mini-batch
:param x: words [B, T] as token ids
:param seq_mask: indicates valid positions vs padding positions [B, T]
:param seq_len: the length of the sequences [B]
:return: a collection of B ProductOfBernoullis approximate posterior,
each a distribution over K-dimensional bit vectors
"""
pass_____no_output_____# tests for inference model
pad_idx = vocab.char_to_idx[PAD_TOKEN]
dummy_inference_model = InferenceModel(
vocab_size=vocab.size(),
embedder=nn.Embedding(vocab.size(), 64, padding_idx=pad_idx),
hidden_size=128, latent_size=16, pad_idx=pad_idx, bidirectional=True
).to(device=device)
dummy_batch_size = 32
dummy_dataloader = SortingTextDataLoader(DataLoader(train_dataset, batch_size=dummy_batch_size))
dummy_words = next(dummy_dataloader)
x_in, _, seq_mask, seq_len = create_batch(dummy_words, vocab, device)
q_z_given_x = dummy_inference_model.forward(x_in, seq_mask, seq_len)_____no_output_____
</code>
Then we should implement the generative latent factor model. The decoder is a sequence of correlated Categorical draws that condition on a latent factor assignment.
We will be parameterising categorical distributions, so you might want to check the documentation of `torch.distributions.categorical.Categorical`.
_____no_output_____
<code>
from torch.distributions import Categorical
class LatentFactorModel(nn.Module):
def __init__(self, vocab_size, emb_size, hidden_size, latent_size,
pad_idx, dropout=0.):
"""
:param vocab_size: size of the vocabulary of the language
:param emb_size: dimensionality of embeddings
:param hidden_size: dimensionality of recurrent cell
:param latent_size: this is D the dimensionality of the latent variable z
:param pad_idx: the id reserved to the -PAD- token
:param dropout: a dropout rate (you can ignore this for now)
"""
super().__init__()
# Construct your NN blocks here,
# remember to assign them to attributes of self
pass
def init_hidden(self, z):
"""
Returns the hidden state of the LSTM initialized with a projection of a given z.
:param z: [B, K]
:returns: [num_layers, B, H] hidden state, [num_layers, B, H] cell state
"""
pass
def step(self, prev_x, z, hidden):
"""
Performs a single LSTM step for a given previous word and hidden state.
Returns the unnormalized log probabilities (logits) over the vocabulary
for this time step.
:param prev_x: [B, 1] id of the previous token
:param z: [B, K] latent variable
:param hidden: hidden ([num_layers, B, H] state, [num_layers, B, H] cell)
:returns: [B, V] logits, ([num_layers, B, H] updated state, [num_layers, B, H] updated cell)
"""
pass
def forward(self, x, z) -> Categorical:
"""
Performs an entire forward pass given a sequence of words x and a z.
This returns a collection of [B, T] categorical distributions, each
with support over V events.
:param x: [B, T] token ids
:param z: [B, K] a latent sample
:returns: Categorical object with shape [B,T,V]
"""
hidden = self.init_hidden(z)
outputs = []
for t in range(x.size(1)):
# [B, 1]
prev_x = x[:, t].unsqueeze(-1)
# logits: [B, V]
logits, hidden = self.step(prev_x, z, hidden)
outputs.append(logits)
outputs = torch.cat(outputs, dim=1)
return Categorical(logits=outputs)
    def loss(self, output_distributions, observations, pz, qz, z, free_nats=0., evaluation=False):
        """
        Computes the terms in the loss (negative ELBO) given the
        output Categorical distributions, the observations, the sampled z,
        the prior distribution p(z), and the approximate posterior distribution q(z|x).
If free_nats is nonzero it will clamp the KL divergence between the posterior
and prior to that value, preventing gradient propagation via the KL if it's
below that value.
If evaluation is set to true, the loss will be summed instead
of averaged over the batch.
Returns the (surrogate) loss, the ELBO, and the KL.
:returns:
surrogate loss (scalar),
ELBO (scalar),
KL (scalar)
"""
pass_____no_output_____
</code>
The code below is used to assess the model and also investigate what it learned. We implemented it for you, so that you can focus on the VAE part. It's useful however to learn from this example: we do interesting things like computing perplexity and sampling novel words!_____no_output_____# Evaluation metrics
During training we'd like to compute some evaluation metrics on the validation data in order to keep track of how our model is doing and to perform early stopping. One simple metric we can compute is the ELBO on all the validation or test data using a single sample from the approximate posterior $Q(z|x, \lambda)$:_____no_output_____
<code>
def eval_elbo(model, inference_model, eval_dataset, vocab, device, batch_size=128):
"""
Computes a single sample estimate of the ELBO on a given dataset.
This returns both the average ELBO and the average KL (for inspection).
"""
dl = DataLoader(eval_dataset, batch_size=batch_size)
sorted_dl = SortingTextDataLoader(dl)
# Make sure the model is in evaluation mode (i.e. disable dropout).
model.eval()
total_ELBO = 0.
total_KL = 0.
num_words = 0
# We don't need to compute gradients for this.
with torch.no_grad():
for words in sorted_dl:
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device)
# Infer the approximate posterior and construct the prior.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5)
# Compute the unnormalized probabilities using a single sample from the
# approximate posterior.
z = qz.sample()
# Compute distributions X_i|z, x_{<i}
px_z = model(x_in, z)
# Compute the reconstruction loss and KL divergence.
loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z,
free_nats=0.,
evaluation=True)
total_ELBO += ELBO
total_KL += KL
num_words += x_in.size(0)
# Return the average reconstruction loss and KL.
avg_ELBO = total_ELBO / num_words
avg_KL = total_KL / num_words
return avg_ELBO, avg_KL_____no_output_____dummy_lm = LatentFactorModel(
vocab.size(), emb_size=64, hidden_size=128,
latent_size=16, pad_idx=pad_idx).to(device=device)
!head -n 128 {val_file} > ./dummy_dataset
dummy_data = TextDataset('./dummy_dataset')
dummy_ELBO, dummy_kl = eval_elbo(dummy_lm, dummy_inference_model,
dummy_data, vocab, device)
print(dummy_ELBO, dummy_kl)
assert dummy_kl.item() > 0tensor(-37.6747, device='cuda:0') tensor(0.5302, device='cuda:0')
</code>
A common metric to evaluate language models is the perplexity per word. The perplexity per word for a dataset is defined as:
\begin{align}
\text{ppl}(\mathcal{D}|\theta, \lambda) = \exp\left(-\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal{D}|} \log P(x^{(k)}|\theta, \lambda)\right)
\end{align}
where $n^{(k)} = |x^{(k)}|$ is the number of tokens in a word and $P(x^{(k)}|\theta, \lambda)$ is the probability that our model assigns to the datapoint $x^{(k)}$. In order to compute $\log P(x|\theta, \lambda)$ for our model we need to evaluate the marginal:
\begin{align}
P(x|\theta, \lambda) = \sum_{z \in \{0, 1\}^K} P(x|z,\theta) P(z|\alpha)
\end{align}
As this summation cannot be computed in a reasonable amount of time (due to exponential complexity), we have two options: we can use the lower bound on the log-likelihood derived earlier, which gives an upper bound on the perplexity, or we can make an importance sampling estimate using our approximate posterior distribution. The importance sampling (IS) estimate can be done as:
\begin{align}
\hat P(x|\theta, \lambda) &\overset{\text{IS}}{\approx} \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x|z^{(s)}, \theta)}{Q(z^{(s)}|x)} & \text{where }z^{(s)} \sim Q(z|x)
\end{align}
where $S$ is the number of samples.
Then our perplexity becomes:
\begin{align}
&\frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log P(x^{(k)}|\theta) \\
&\approx \frac{1}{\sum_{k=1}^{|\mathcal D|} n^{(k)}} \sum_{k=1}^{|\mathcal D|} \log \frac{1}{S} \sum_{s=1}^{S} \frac{P(z^{(s)}|\alpha)P(x^{(k)}|z^{(s)}, \theta)}{Q(z^{(s)}|x^{(k)})} \\
\end{align}
We define the function `eval_perplexity` below that implements this importance sampling estimate:_____no_output_____
<code>
def eval_perplexity(model, inference_model, eval_dataset, vocab, device,
n_samples, batch_size=128):
"""
Estimates the per-word perplexity using importance sampling with the
given number of samples.
"""
dl = DataLoader(eval_dataset, batch_size=batch_size)
sorted_dl = SortingTextDataLoader(dl)
# Make sure the model is in evaluation mode (i.e. disable dropout).
model.eval()
log_px = 0.
num_predictions = 0
num_words = 0
# We don't need to compute gradients for this.
with torch.no_grad():
for words in sorted_dl:
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device)
# Infer the approximate posterior and construct the prior.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5) # TODO different prior
# Create an array to hold all samples for this batch.
batch_size = x_in.size(0)
log_px_samples = torch.zeros(n_samples, batch_size)
# Sample log P(x) n_samples times.
for s in range(n_samples):
# Sample a z^s from the posterior.
z = qz.sample()
# Compute log P(x^k|z^s)
px_z = model(x_in, z)
# [B, T]
cond_log_prob = px_z.log_prob(x_out)
cond_log_prob = torch.where(seq_mask, cond_log_prob, torch.zeros_like(cond_log_prob))
# [B]
cond_log_prob = cond_log_prob.sum(-1)
# Compute log p(z^s) and log q(z^s|x^k)
prior_log_prob = pz.log_prob(z) # B
posterior_log_prob = qz.log_prob(z) # B
# Store the sample for log P(x^k) importance weighted with p(z^s)/q(z^s|x^k).
log_px_sample = cond_log_prob + prior_log_prob - posterior_log_prob
log_px_samples[s] = log_px_sample
# Average over the number of samples and count the number of predictions made this batch.
log_px_batch = torch.logsumexp(log_px_samples, dim=0) - \
torch.log(torch.Tensor([n_samples]))
log_px += log_px_batch.sum()
num_predictions += seq_len.sum()
num_words += seq_len.size(0)
# Compute and return the perplexity per word.
perplexity = torch.exp(-log_px / num_predictions)
NLL = -log_px / num_words
return perplexity, NLL_____no_output_____
</code>
Lastly, we occasionally want a qualitative view of the model's performance during training, by letting it reconstruct a given word from the latent space. This gives us an idea of whether the model is using the latent space to encode some semantics about the data. For this we use a deterministic greedy decoding algorithm that chooses the most probable token at every time step and feeds it into the next time step._____no_output_____
<code>
def greedy_decode(model, z, vocab, max_len=50):
"""
Greedily decodes a word from a given z, by picking the word with
maximum probability at each time step.
"""
# Disable dropout.
model.eval()
# Don't compute gradients.
with torch.no_grad():
batch_size = z.size(0)
# We feed the model the start-of-word symbol at the first time step.
prev_x = torch.ones(batch_size, 1, dtype=torch.long).fill_(vocab[SOW_TOKEN]).to(z.device)
# Initialize the hidden state from z.
hidden = model.init_hidden(z)
predictions = []
for t in range(max_len):
logits, hidden = model.step(prev_x, z, hidden)
# Choose the argmax of the unnormalized probabilities (logits) as the
# prediction for this time step.
prediction = torch.argmax(logits, dim=-1)
predictions.append(prediction)
prev_x = prediction.view(batch_size, 1)
return torch.cat(predictions, dim=1)_____no_output_____
</code>
# Training
Now it's time to train the model. We use early stopping on the validation perplexity for model selection._____no_output_____
<code>
# Define the model hyperparameters.
emb_size = 256
hidden_size = 256
latent_size = 16
bidirectional_encoder = True
free_nats = 0 # 5.
annealing_steps = 0 # 11400
dropout = 0.6
word_dropout = 0 # 0.75
batch_size = 64
learning_rate = 0.001
num_epochs = 20
n_importance_samples = 3 # 50
# Create the training data loader.
dl = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
sorted_dl = SortingTextDataLoader(dl)
# Create the generative model.
model = LatentFactorModel(vocab_size=vocab.size(),
emb_size=emb_size,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
dropout=dropout)
model = model.to(device)
# Create the inference model.
inference_model = InferenceModel(vocab_size=vocab.size(),
embedder=model.embedder,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
bidirectional=bidirectional_encoder)
inference_model = inference_model.to(device)
# Create the optimizer.
optimizer = optim.Adam(itertools.chain(model.parameters(),
inference_model.parameters()),
lr=learning_rate)
# Save the best model (early stopping).
best_model = "./best_model.pt"
best_val_ppl = float("inf")
best_epoch = 0
# Keep track of some statistics to plot later.
train_ELBOs = []
train_KLs = []
val_ELBOs = []
val_KLs = []
val_perplexities = []
val_NLLs = []
step = 0
training_ELBO = 0.
training_KL = 0.
num_batches = 0
for epoch_num in range(1, num_epochs+1):
for words in sorted_dl:
# Make sure the model is in training mode (for dropout).
model.train()
# Transform the words to input, output, seq_len, seq_mask batches.
x_in, x_out, seq_mask, seq_len = create_batch(words, vocab, device,
word_dropout=word_dropout)
# Compute the multiplier for the KL term if we do annealing.
if annealing_steps > 0:
KL_weight = min(1., (1.0 / annealing_steps) * step)
else:
KL_weight = 1.
# Do a forward pass through the model and compute the training loss. We use
# a reparameterized sample from the approximate posterior during training.
qz = inference_model(x_in, seq_mask, seq_len)
pz = ProductOfBernoullis(torch.ones_like(qz.probs) * 0.5)
z = qz.sample()
px_z = model(x_in, z)
loss, ELBO, KL = model.loss(px_z, x_out, pz, qz, z, free_nats=free_nats)
# Backpropagate and update the model weights.
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Update some statistics to track for the training loss.
training_ELBO += ELBO
training_KL += KL
num_batches += 1
# Every 100 steps we evaluate the model and report progress.
if step % 100 == 0:
val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device)
print("(%d) step %d: training ELBO (KL) = %.2f (%.2f) --"
" KL weight = %.2f --"
" validation ELBO (KL) = %.2f (%.2f)" %
(epoch_num, step, training_ELBO/num_batches,
training_KL/num_batches, KL_weight, val_ELBO, val_KL))
# Update some statistics for plotting later.
train_ELBOs.append((step, (training_ELBO/num_batches).item()))
train_KLs.append((step, (training_KL/num_batches).item()))
val_ELBOs.append((step, val_ELBO.item()))
val_KLs.append((step, val_KL.item()))
# Reset the training statistics.
training_ELBO = 0.
training_KL = 0.
num_batches = 0
step += 1
# After an epoch we'll compute validation perplexity and save the model
# for early stopping if it's better than previous models.
print("Finished epoch %d" % (epoch_num))
val_perplexity, val_NLL = eval_perplexity(model, inference_model, val_dataset, vocab, device,
n_importance_samples)
val_ELBO, val_KL = eval_elbo(model, inference_model, val_dataset, vocab, device)
# Keep track of the validation perplexities / NLL.
val_perplexities.append((epoch_num, val_perplexity.item()))
val_NLLs.append((epoch_num, val_NLL.item()))
# If validation perplexity is better, store this model for early stopping.
if val_perplexity < best_val_ppl:
best_val_ppl = val_perplexity
best_epoch = epoch_num
torch.save(model.state_dict(), best_model)
# Print epoch statistics.
print("Evaluation epoch %d:\n"
" - validation perplexity: %.2f\n"
" - validation NLL: %.2f\n"
" - validation ELBO (KL) = %.2f (%.2f)"
% (epoch_num, val_perplexity, val_NLL, val_ELBO, val_KL))
# Also show some qualitative results by reconstructing a word from the
# validation data. Use the mean of the approximate posterior and greedy
# decoding.
random_word = val_dataset[np.random.choice(len(val_dataset))]
x_in, _, seq_mask, seq_len = create_batch([random_word], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
z = qz.mean()
reconstruction = greedy_decode(model, z, vocab)
reconstruction = batch_to_words(reconstruction, vocab)[0]
print("-- Original word: \"%s\"" % random_word)
print("-- Model reconstruction: \"%s\"" % reconstruction)(1) step 0: training ELBO (KL) = -39.02 (0.43) -- KL weight = 1.00 -- validation ELBO (KL) = -38.29 (0.43)
(1) step 100: training ELBO (KL) = -27.68 (1.20) -- KL weight = 1.00 -- validation ELBO (KL) = -23.76 (1.28)
Finished epoch 1
Evaluation epoch 1:
- validation perplexity: 7.88
- validation NLL: 21.97
- validation ELBO (KL) = -22.52 (1.25)
-- Original word: "interpretarían"
-- Model reconstruction: "acontaren"
(2) step 200: training ELBO (KL) = -24.03 (1.33) -- KL weight = 1.00 -- validation ELBO (KL) = -22.47 (1.23)
(2) step 300: training ELBO (KL) = -23.19 (1.33) -- KL weight = 1.00 -- validation ELBO (KL) = -22.19 (1.47)
Finished epoch 2
Evaluation epoch 2:
- validation perplexity: 7.41
- validation NLL: 21.32
- validation ELBO (KL) = -21.99 (1.57)
-- Original word: "subtítulos"
-- Model reconstruction: "acarrarían"
(3) step 400: training ELBO (KL) = -23.07 (1.66) -- KL weight = 1.00 -- validation ELBO (KL) = -22.02 (1.65)
(3) step 500: training ELBO (KL) = -23.00 (1.85) -- KL weight = 1.00 -- validation ELBO (KL) = -22.06 (1.91)
Finished epoch 3
Evaluation epoch 3:
- validation perplexity: 7.34
- validation NLL: 21.22
- validation ELBO (KL) = -22.09 (2.12)
-- Original word: "antojó"
-- Model reconstruction: "acontaran"
(4) step 600: training ELBO (KL) = -22.87 (1.95) -- KL weight = 1.00 -- validation ELBO (KL) = -22.17 (2.22)
(4) step 700: training ELBO (KL) = -23.29 (2.55) -- KL weight = 1.00 -- validation ELBO (KL) = -22.70 (2.85)
Finished epoch 4
Evaluation epoch 4:
- validation perplexity: 7.77
- validation NLL: 21.83
- validation ELBO (KL) = -22.74 (3.02)
-- Original word: "cosquillearé"
-- Model reconstruction: "acontaran"
(5) step 800: training ELBO (KL) = -23.54 (2.97) -- KL weight = 1.00 -- validation ELBO (KL) = -22.73 (3.01)
(5) step 900: training ELBO (KL) = -23.41 (2.98) -- KL weight = 1.00 -- validation ELBO (KL) = -22.54 (2.93)
Finished epoch 5
Evaluation epoch 5:
- validation perplexity: 7.69
- validation NLL: 21.71
- validation ELBO (KL) = -22.73 (3.19)
-- Original word: "chutases"
-- Model reconstruction: "acalaran"
(6) step 1000: training ELBO (KL) = -23.44 (3.05) -- KL weight = 1.00 -- validation ELBO (KL) = -22.70 (3.17)
(6) step 1100: training ELBO (KL) = -23.34 (3.12) -- KL weight = 1.00 -- validation ELBO (KL) = -22.49 (3.04)
Finished epoch 6
Evaluation epoch 6:
- validation perplexity: 7.44
- validation NLL: 21.37
- validation ELBO (KL) = -22.37 (3.00)
-- Original word: "diversificaciones"
-- Model reconstruction: "acarraría"
(7) step 1200: training ELBO (KL) = -23.31 (3.03) -- KL weight = 1.00 -- validation ELBO (KL) = -22.49 (3.12)
(7) step 1300: training ELBO (KL) = -23.24 (3.18) -- KL weight = 1.00 -- validation ELBO (KL) = -22.34 (3.03)
Finished epoch 7
Evaluation epoch 7:
- validation perplexity: 7.37
- validation NLL: 21.27
- validation ELBO (KL) = -22.33 (3.08)
-- Original word: "entrelazado"
-- Model reconstruction: "acontaran"
(8) step 1400: training ELBO (KL) = -23.08 (3.04) -- KL weight = 1.00 -- validation ELBO (KL) = -22.40 (3.16)
(8) step 1500: training ELBO (KL) = -23.23 (3.27) -- KL weight = 1.00 -- validation ELBO (KL) = -22.68 (3.49)
Finished epoch 8
Evaluation epoch 8:
- validation perplexity: 7.65
- validation NLL: 21.66
- validation ELBO (KL) = -22.78 (3.63)
-- Original word: "comulgaríamos"
-- Model reconstruction: "abarraría"
(9) step 1600: training ELBO (KL) = -23.46 (3.54) -- KL weight = 1.00 -- validation ELBO (KL) = -22.72 (3.58)
(9) step 1700: training ELBO (KL) = -23.47 (3.69) -- KL weight = 1.00 -- validation ELBO (KL) = -22.88 (3.83)
Finished epoch 9
Evaluation epoch 9:
- validation perplexity: 7.98
- validation NLL: 22.11
- validation ELBO (KL) = -23.19 (4.19)
-- Original word: "coleccionarás"
-- Model reconstruction: "acontaran"
(10) step 1800: training ELBO (KL) = -23.82 (4.08) -- KL weight = 1.00 -- validation ELBO (KL) = -23.18 (4.15)
(10) step 1900: training ELBO (KL) = -23.79 (4.08) -- KL weight = 1.00 -- validation ELBO (KL) = -23.11 (4.15)
Finished epoch 10
Evaluation epoch 10:
- validation perplexity: 7.87
- validation NLL: 21.96
- validation ELBO (KL) = -23.06 (4.13)
-- Original word: "conmemoraran"
-- Model reconstruction: "acarraría"
(11) step 2000: training ELBO (KL) = -23.79 (4.17) -- KL weight = 1.00 -- validation ELBO (KL) = -23.03 (4.10)
(11) step 2100: training ELBO (KL) = -23.59 (3.99) -- KL weight = 1.00 -- validation ELBO (KL) = -22.82 (3.94)
Finished epoch 11
Evaluation epoch 11:
- validation perplexity: 7.71
- validation NLL: 21.74
- validation ELBO (KL) = -22.98 (4.14)
-- Original word: "esculpieren"
-- Model reconstruction: "acontaran"
(12) step 2200: training ELBO (KL) = -23.73 (4.10) -- KL weight = 1.00 -- validation ELBO (KL) = -22.90 (4.07)
(12) step 2300: training ELBO (KL) = -23.64 (4.15) -- KL weight = 1.00 -- validation ELBO (KL) = -22.97 (4.22)
Finished epoch 12
Evaluation epoch 12:
- validation perplexity: 7.60
- validation NLL: 21.59
- validation ELBO (KL) = -22.87 (4.16)
-- Original word: "cansándose"
-- Model reconstruction: "acontrastaría"
(13) step 2400: training ELBO (KL) = -23.68 (4.25) -- KL weight = 1.00 -- validation ELBO (KL) = -23.02 (4.29)
(13) step 2500: training ELBO (KL) = -23.56 (4.16) -- KL weight = 1.00 -- validation ELBO (KL) = -22.87 (4.16)
Finished epoch 13
Evaluation epoch 13:
- validation perplexity: 7.78
- validation NLL: 21.84
- validation ELBO (KL) = -22.99 (4.33)
-- Original word: "desmoldasen"
-- Model reconstruction: "acontermaría"
(14) step 2600: training ELBO (KL) = -23.61 (4.29) -- KL weight = 1.00 -- validation ELBO (KL) = -23.00 (4.37)
(14) step 2700: training ELBO (KL) = -23.76 (4.42) -- KL weight = 1.00 -- validation ELBO (KL) = -23.24 (4.63)
Finished epoch 14
Evaluation epoch 14:
- validation perplexity: 7.79
- validation NLL: 21.85
- validation ELBO (KL) = -23.09 (4.50)
-- Original word: "homenajearemos"
-- Model reconstruction: "aconterraría"
(15) step 2800: training ELBO (KL) = -23.89 (4.59) -- KL weight = 1.00 -- validation ELBO (KL) = -23.20 (4.63)
(15) step 2900: training ELBO (KL) = -23.97 (4.77) -- KL weight = 1.00 -- validation ELBO (KL) = -23.48 (4.97)
Finished epoch 15
Evaluation epoch 15:
- validation perplexity: 7.99
- validation NLL: 22.12
- validation ELBO (KL) = -23.23 (4.75)
-- Original word: "pisotearan"
-- Model reconstruction: "acontaran"
(16) step 3000: training ELBO (KL) = -23.90 (4.76) -- KL weight = 1.00 -- validation ELBO (KL) = -23.19 (4.70)
(16) step 3100: training ELBO (KL) = -24.00 (4.88) -- KL weight = 1.00 -- validation ELBO (KL) = -23.60 (5.16)
Finished epoch 16
Evaluation epoch 16:
- validation perplexity: 8.32
- validation NLL: 22.56
- validation ELBO (KL) = -23.78 (5.34)
-- Original word: "coexistid"
-- Model reconstruction: "acondiciaren"
(17) step 3200: training ELBO (KL) = -24.46 (5.36) -- KL weight = 1.00 -- validation ELBO (KL) = -24.00 (5.60)
(17) step 3300: training ELBO (KL) = -24.72 (5.64) -- KL weight = 1.00 -- validation ELBO (KL) = -23.92 (5.55)
Finished epoch 17
Evaluation epoch 17:
- validation perplexity: 8.33
- validation NLL: 22.57
- validation ELBO (KL) = -23.87 (5.53)
-- Original word: "ensamblamos"
-- Model reconstruction: "aconderaría"
(18) step 3400: training ELBO (KL) = -24.50 (5.45) -- KL weight = 1.00 -- validation ELBO (KL) = -23.74 (5.41)
(18) step 3500: training ELBO (KL) = -23.51 (4.53) -- KL weight = 1.00 -- validation ELBO (KL) = -23.71 (5.42)
Finished epoch 18
Evaluation epoch 18:
- validation perplexity: 8.29
- validation NLL: 22.52
- validation ELBO (KL) = -23.95 (5.68)
-- Original word: "caro"
-- Model reconstruction: "aconternaría"
(19) step 3600: training ELBO (KL) = -24.57 (5.59) -- KL weight = 1.00 -- validation ELBO (KL) = -23.96 (5.73)
(19) step 3700: training ELBO (KL) = -24.55 (5.68) -- KL weight = 1.00 -- validation ELBO (KL) = -23.59 (5.36)
Finished epoch 19
Evaluation epoch 19:
- validation perplexity: 8.22
- validation NLL: 22.43
- validation ELBO (KL) = -23.70 (5.50)
-- Original word: "captáremos"
-- Model reconstruction: "aconderaría"
(20) step 3800: training ELBO (KL) = -24.24 (5.44) -- KL weight = 1.00 -- validation ELBO (KL) = -23.66 (5.45)
(20) step 3900: training ELBO (KL) = -24.25 (5.42) -- KL weight = 1.00 -- validation ELBO (KL) = -23.49 (5.34)
Finished epoch 20
Evaluation epoch 20:
- validation perplexity: 8.11
- validation NLL: 22.28
- validation ELBO (KL) = -23.60 (5.46)
-- Original word: "endeudado"
-- Model reconstruction: "acondiciaría"
</code>
# Let's plot the training and validation statistics:_____no_output_____
<code>
steps, training_ELBO = list(zip(*train_ELBOs))
_, training_KL = list(zip(*train_KLs))
_, val_ELBO = list(zip(*val_ELBOs))
_, val_KL = list(zip(*val_KLs))
epochs, val_ppl = list(zip(*val_perplexities))
_, val_NLL = list(zip(*val_NLLs))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
# Plot training ELBO and KL
ax1.set_title("Training ELBO")
ax1.plot(steps, training_ELBO, "-o")
ax2.set_title("Training KL")
ax2.plot(steps, training_KL, "-o")
plt.show()
# Plot validation ELBO and KL
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
ax1.set_title("Validation ELBO")
ax1.plot(steps, val_ELBO, "-o", color="orange")
ax2.set_title("Validation KL")
ax2.plot(steps, val_KL, "-o", color="orange")
plt.show()
# Plot validation perplexities.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 5))
ax1.set_title("Validation perplexity")
ax1.plot(epochs, val_ppl, "-o", color="orange")
ax2.set_title("Validation NLL")
ax2.plot(epochs, val_NLL, "-o", color="orange")
plt.show()
print()_____no_output_____
</code>
Let's load the best model according to validation perplexity and compute its perplexity on the test data:_____no_output_____
<code>
# Load the best model from disk.
model = LatentFactorModel(vocab_size=vocab.size(),
emb_size=emb_size,
hidden_size=hidden_size,
latent_size=latent_size,
pad_idx=vocab[PAD_TOKEN],
dropout=dropout)
model.load_state_dict(torch.load(best_model))
model = model.to(device)
# Compute test perplexity and ELBO.
test_perplexity, test_NLL = eval_perplexity(model, inference_model, test_dataset, vocab,
device, n_importance_samples)
test_ELBO, test_KL = eval_elbo(model, inference_model, test_dataset, vocab, device)
print("test ELBO (KL) = %.2f (%.2f) -- test perplexity = %.2f -- test NLL = %.2f" %
(test_ELBO, test_KL, test_perplexity, test_NLL))test ELBO (KL) = -25.34 (5.46) -- test perplexity = 9.56 -- test NLL = 24.05
</code>
# Qualitative analysis
Let's have a look at how our trained model interacts with the learned latent space. First, let's greedily decode some samples from the prior to assess the diversity of the model:_____no_output_____
<code>
# Generate 10 samples from the uniform product-of-Bernoullis prior (p = 0.5).
num_prior_samples = 10
pz = ProductOfBernoullis(torch.ones(num_prior_samples, latent_size) * 0.5)
z = pz.sample()
z = z.to(device)
# Use the greedy decoding algorithm to generate words.
predictions = greedy_decode(model, z, vocab)
predictions = batch_to_words(predictions, vocab)
for num, prediction in enumerate(predictions):
print("%d: %s" % (num+1, prediction))_____no_output_____
</code>
Let's now have a look at how well the model reconstructs words from the test dataset, using the approximate posterior mean and a couple of samples:_____no_output_____
<code>
# Pick a random test word.
test_word = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
# Decode using the mean.
z_mean = qz.mean()
mean_reconstruction = greedy_decode(model, z_mean, vocab)
mean_reconstruction = batch_to_words(mean_reconstruction, vocab)[0]
print("Original: \"%s\"" % test_word)
print("Posterior mean reconstruction: \"%s\"" % mean_reconstruction)
# Decode a couple of samples from the approximate posterior.
for s in range(3):
z = qz.sample()
sample_reconstruction = greedy_decode(model, z, vocab)
sample_reconstruction = batch_to_words(sample_reconstruction, vocab)[0]
print("Posterior sample reconstruction (%d): \"%s\"" % (s+1, sample_reconstruction))_____no_output_____
</code>
We can also qualitatively assess the smoothness of the learned latent space by interpolating between two words in the test set:_____no_output_____
<code>
# Pick a random test word.
test_word_1 = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x).
x_in, _, seq_mask, seq_len = create_batch([test_word_1], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_1 = qz.mean()
# Pick a random second test word.
test_word_2 = test_dataset[np.random.choice(len(test_dataset))]
# Infer q(z|x) again.
x_in, _, seq_mask, seq_len = create_batch([test_word_2], vocab, device)
qz = inference_model(x_in, seq_mask, seq_len)
qz_2 = qz.mean()
# Now interpolate between the two means and generate words between those.
num_words = 5
print("Word 1: \"%s\"" % test_word_1)
for alpha in np.linspace(start=0., stop=1., num=num_words):
z = (1-alpha) * qz_1 + alpha * qz_2
reconstruction = greedy_decode(model, z, vocab)
reconstruction = batch_to_words(reconstruction, vocab)[0]
print("(1-%.2f) * qz1.mean + %.2f qz2.mean: \"%s\"" % (alpha, alpha, reconstruction))
print("Word 2: \"%s\"" % test_word_2)_____no_output_____
</code>
| {
"repository": "vitutorial/exercises",
"path": "LatentFactorModel/LatentFactorModel.ipynb",
"matched_keywords": [
"STAR"
],
"stars": 1,
"size": 82041,
"hexsha": "d080d0956c0f4475db66015747f22df1dfb03649",
"max_line_length": 796,
"avg_line_length": 42.4862765407,
"alphanum_fraction": 0.5444107215
} |
# Notebook from anti-destiny/Kalman-and-Bayesian-Filters-in-Python
Path: 02-Discrete-Bayes.ipynb
[Table of Contents](./table_of_contents.ipynb)_____no_output_____# Discrete Bayes Filter_____no_output_____# 离散贝叶斯滤波_____no_output_____
<code>
%matplotlib inline_____no_output_____#format the book
import book_format
book_format.set_style()_____no_output_____
</code>
The Kalman filter belongs to a family of filters called *Bayesian filters*. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level.
That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader.
I will develop the topic in a different way, for which I owe a great debt to the work of Dieter Fox and Sebastian Thrun. It builds an intuition for how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing this book in favor of the first few lessons of that course, and then coming back for a deeper dive into the topic.
Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking._____no_output_____卡爾曼濾波器屬於一個名為“貝葉斯濾波器”的大家族。許多教材将卡爾曼濾波器作為貝葉斯公式的範例講授,例如展示貝葉斯公式是如何組成卡爾曼濾波器公式的。這些討論往往停留在抽象层面。
這種學習方法需要讀者理解許多複雜的數學知識,而且無助於讀者直觀地理解卡爾曼濾波器。
我會用不一樣的方式展開這個主題。對此,我首先要感謝Dieter Fox和Sebastian Thrun的工作,從他們那裡我獲益良多。具體来說,我會以如何跟踪走廊上的一個目標為例來為你建立對貝葉斯統計方法的直覺上的理解——別人使用機器人作為目標,而我則喜欢用狗。原因是狗的運動比機器人的更難預測,這為濾波任務帶來許多有趣的挑戰。我找到的關於這類例子最早的記錄見於Fox於1999年所作的【1】,隨後他在2003年的【2】中增加了更多細節。Sebastian Thrun在他的優達學城的“面向机器人学的人工智能”课程上引用了這個例子。如果你喜欢看視頻,我强烈建議你在閱讀本書之前先學一學該課程的前面几節,然后再回来繼續深入了解這個問題。
像先前g-h濾波器一章中那樣,讓我們从一个簡單的思想實驗開始,看看如何用概率工具来解释濾波器和跟蹤器。_____no_output_____## Tracking a Dog
Let's begin with a simple problem. We have a dog friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them. So during a hackathon somebody invented a sonar sensor to attach to the dog's collar. It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to the network via wifi and sends an update once a second.
I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read **door**, **hall**, **hall**, and so on. How can I use that information to determine where Simon is?
To keep the problem small enough to plot easily we will assume that there are only 10 positions in the hallway, which we will number 0 to 9, where 1 is to the right of 0. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0.
When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. From my perspective he is equally likely to be in any position. There are 10 positions, so the probability that he is in any given position is 1/10.
Let's represent our belief of his position in a NumPy array. I could use a Python list, but NumPy arrays offer functionality that we will be using soon._____no_output_____## 狗的跟蹤問題
我们先从一个简单的问题开始。因为我们的工作室是宠物友好型,所以同事们会带狗狗到工作场所来。偶尔狗会从办公室跑出来,到走廊去玩。因為我们希望能跟踪狗的运动,所以在一次黑客马拉松上,某个人提出在狗狗的项圈上装一个超声波传感器。传感器能发出声波,并接受回声。根据回声的速度,传感器能够输出狗是否來到了開放的門道前。它能够在狗运动的时候给出信号,报告运动的方向。它还能通过无线网络连接以每秒钟一次的频率上传数据。
我想要跟踪我的狗,西蒙。于是我打开Python,准备編寫代码来实现在建筑内跟踪狗狗的功能。乍看起来这似乎不可能。如果我监听西蒙项圈传来的信号,能得到類似於**门、走廊、走廊**這樣的一连串信号。但我们如何才能使用这些信号确定西蒙的位置呢?
為控制問題的規模以便於繪圖,我們假設走廊中一共僅有10個不同的地點,從0到9標號。1號在0號右側。我們還假設走廊是圓形或矩形的環,其中理由隨後自明。於是當你從9號地點向右移動,你就會回到0點。
我開始監聽傳感器時,我並沒有任何理由相信西蒙現在位於某個具體地點。從這一角度看,他在任意位置的可能性都是均等的。一共有10個位置,所以每個位置的概率都是1/10.
首先,我们将狗狗在各个位置的置信度表示为一个NumPy数组。雖然用Python提供的list也可以,但NumPy数组提供了一些我們需要的功能。_____no_output_____
<code>
import numpy as np
belief = np.array([1/10]*10)
print(belief)[0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]
</code>
In [Bayesian statistics](https://en.wikipedia.org/wiki/Bayesian_probability) this is called a [*prior*](https://en.wikipedia.org/wiki/Prior_probability). It is the probability prior to incorporating measurements or other information. More completely, this is called the *prior probability distribution*. A [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) is a collection of all possible probabilities for an event. Probability distributions always sum to 1 because something had to happen; the distribution lists all possible events and the probability of each.
I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats probability as a belief about a single event. Let's take an example. I know that if I flip a fair coin infinitely many times I will get 50% heads and 50% tails. This is called [*frequentist statistics*](https://en.wikipedia.org/wiki/Frequentist_inference) to distinguish it from Bayesian statistics. Computations are based on the frequency in which events occur.
I flip the coin one more time and let it land. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. In some ways it is meaningless to assign a probability to the current state of the coin. It is either heads or tails, we just don't know which. Bayes treats this as a belief about a single event - the strength of my belief or knowledge that this specific coin flip is heads is 50%. Some object to the term "belief"; belief can imply holding something to be true without evidence. In this book it always is a measure of the strength of our knowledge. We'll learn more about this as we go.
Bayesian statistics takes past information (the prior) into account. We observe that it rains 4 times every 100 days. From this I could state that the chance of rain tomorrow is 1/25. This is not how weather prediction is done. If I know it is raining today and the storm front is stalled, it is likely to rain tomorrow. Weather prediction is Bayesian.
In practice statisticians use a mix of frequentist and Bayesian techniques. Sometimes finding the prior is difficult or impossible, and frequentist techniques rule. In this book we can find the prior. When I talk about the probability of something I am referring to the probability that some specific thing is true given past events. When I do that I'm taking the Bayesian approach.
Now let's create a map of the hallway. We'll place the first two doors close together, and then another door further away. We will use 1 for doors, and 0 for walls:_____no_output_____在[贝叶斯统计学](https://en.wikipedia.org/wiki/Bayesian_probability)中这被称为[先验](https://en.wikipedia.org/wiki/Prior_probability). 它是在考虑测量结果或其它信息之前的概率。更完整的说,这叫做“先验概率”。 所谓[“先验概率”](https://en.wikipedia.org/wiki/Probability_distribution)是某个事件所有可能概率的集合。由于所有可能之中必有其一真实发生,所以概率分布的总和恆等於1。概率分布列出了所有可能的事件及每个事件的对应概率。
我知道你肯定有使用过概率——比如“今天下雨的概率是30%”。這句話與上一段文字有相似的含義。贝叶斯统计是概率论的一场革命,是因为它将概率视作是每个事件的置信度。举例来说,如果我多次抛掷一个理想的硬币,我将得到50%正面,50%反面的统计结果。这叫做[频率统计](https://en.wikipedia.org/wiki/Frequentist_inference)。同贝叶斯统计不同,频率统计的計算基於事件發生的頻率。
假如我再掷一次硬币,我相信它是哪一面落地?频率學派對此沒有什麼建議。它唯一能回答的是,有50%的硬币正面朝上。然而從某些方面看,像這樣为硬币的当前状态赋予概率是无意义的。它要么正要么反,只是我們不確定罷了。贝叶斯学派則將此視為单个事件的信念——它表示我们对该硬币正面朝上的概率是50%的信念或者知识。“信念”一词的含义是,在没有充分的证据的情况下相信某种情况属实。本书中,始终使用置信度来作为对知识的强度的度量。随着阅读的继续,我们将了解更多细节。
贝叶斯统计学考虑過去的信息(先验概率)。通过觀察我们知道过去100天内有4天下雨,据此推出明天下雨的概率是1/25. 当然天气预报不是这么做的。如果我知道今天是雨天,而风暴的边沿移动迟缓,于是我猜测明天继续下雨。这才是天气预报的做法,属于贝叶斯方法。
实践中频率统计和贝叶斯方法常常混合使用。有时候先验难以取得,或者无法取得,就使用频率统计的方法。本书中,先验是提供了的。当我们提到某事的概率时,我们指的是已知过去的系列事件的條件下,某事为真的概率。这时我们使用的是贝叶斯方法。
现在,我们来为走廊建一个地图。我们将两个门靠近放在一起,另一个门放远一些。用1表示门,0表示墙壁。_____no_output_____
<code>
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])_____no_output_____
</code>
I start listening to Simon's transmissions on the network, and the first data I get from the sensor is **door**. For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no reason to believe he is in front of the first, second, or third door. What I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door. _____no_output_____我开始监听西蒙发送到网络的信号,得到的第一个数据是**门**。假設传感器返回的信号永远准确,於是我知道西蒙就在某道门前。但是是哪一道门?我没有理由去相信它现在是在第一道门前。对于第二三道门也是如此。我唯一能做的是为各个门的赋予一个置信度。因为每个门看起来都是等可能的,而有三道门,故我为每道门赋予$1/3$的置信度。_____no_output_____
<code>
import kf_book.book_plots as book_plots
from kf_book.book_plots import figsize, set_figsize
import matplotlib.pyplot as plt
belief = np.array([1/3, 1/3, 0, 0, 0, 0, 0, 0, 1/3, 0])
book_plots.bar_plot(belief)_____no_output_____
</code>
This distribution is called a [*categorical distribution*](https://en.wikipedia.org/wiki/Categorical_distribution), which is a discrete distribution describing the probability of observing $n$ outcomes. It is a [*multimodal distribution*](https://en.wikipedia.org/wiki/Multimodal_distribution) because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that we have narrowed down our knowledge to one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of being at door 0, 33.3% at door 1, and a 33.3% chance of being at door 8.
This is an improvement in two ways. I've rejected a number of hallway positions as impossible, and the strength of my belief in the remaining positions has increased from 10% to 33%. This will always happen. As our knowledge improves the probabilities will get closer to 100%.
A few words about the [*mode*](https://en.wikipedia.org/wiki/Mode_%28statistics%29)
of a distribution. Given a list of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the *mode* is the number that occurs most often. For this set the mode is 2. A distribution can contain more than one mode. The list {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former list is [*unimodal*](https://en.wikipedia.org/wiki/Unimodality), and the latter is *multimodal*.
Another term used for this distribution is a [*histogram*](https://en.wikipedia.org/wiki/Histogram). Histograms graphically depict the distribution of a set of numbers. The bar chart above is a histogram.
I hand coded the `belief` array in the code above. How would we implement this in code? We represent doors with 1, and walls as 0, so we will multiply the hallway variable by the percentage, like so;_____no_output_____此分布称为[**分类分布**](https://en.wikipedia.org/wiki/Categorical_distribution),它是描述了$n$个输出的离散分布。它还是[**多峰分布(multimodal distribution)**](https://en.wikipedia.org/wiki/Multimodal_distribution) ,因为它为狗的多种可能位置给出了置信度。当然这不是说我们认为它可以同时出现在三个不同的位置,我们只是根据知识将范围缩小到三个位置之一。我们的(贝叶斯)置信度认为狗狗有33.3%的概率出现在0号门,有33.3%的方式出现在1号门,还有33.3%的方式出现在8号门。
我们的改进体现在两个方面。一是我们拒绝了狗狗在一些位置出现的可能性,二是我们将余下位置的置信度从10%增长到33%。随着我们知识的增加,这种差别会更加明显。
这里要说一說分布的[**众数(mode)**](https://en.wikipedia.org/wiki/Mode_%28statistics%29)。给定一个数组,比如{1, 2, 2, 2, 3, 3, 4}, **众数**是其中出现次数最多的数。对于该例,众数是2. 一个分布可以有多个众数。例如{1, 2, 2, 2, 3, 3, 4, 4, 4}的众数是2和4,因为它们都出现三次。所以第一个数组是[**单峰分布**](https://en.wikipedia.org/wiki/Unimodality),后一个数组是**多峰分布**。
另一个重要的概念是[**直方图**](https://en.wikipedia.org/wiki/Histogram)。直方图通过图像的形式了一系列数组分布。上面那个图就是直方图的一个例子。
上面的置信度数组`belief`是我手算的。如何用代码来实现这个过程呢?我们用1表示门,用0表示墙。我们用一个比例乘以这个数组,如下所示。_____no_output_____
<code>
belief = hallway * (1/3)
print(belief)[0.333 0.333 0. 0. 0. 0. 0. 0. 0.333 0. ]
</code>
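Before moving on, a quick aside on the *mode* examples above. The short check below is an addition (not part of the original text); it uses Python's standard `collections.Counter` to find the mode(s) of the two example lists:
```python
from collections import Counter

def modes(values):
    """Return every value that occurs most often; more than one means multimodal."""
    counts = Counter(values)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

print(modes([1, 2, 2, 2, 3, 3, 4]))        # [2]    -> unimodal
print(modes([1, 2, 2, 2, 3, 3, 4, 4, 4]))  # [2, 4] -> multimodal
```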
## Extracting Information from Sensor Readings
Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor:
* door
* move right
* door
Can we deduce Simon's location? Of course! Given the hallway's layout there is only one place from which you can get this sequence, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. The only possibility is that he is now in front of the second door. Our belief is:_____no_output_____## 从传感器读数中提取信息
我们先抛开Python来思考这个问题。假如我们从西蒙的传感器读取到如下数据:
* 门
* 右移
* 门
我们可以推导出西蒙的位置吗?当然可以!根据走廊的地图,只有一个位置可以产生测得的序列,即地图的最左端。因此我们非常肯定西蒙在第二道门前。如果这还不够清晰,可以试着假定西蒙从第二道门或第三道门出发,向右走。这样它的传感器会返回“墙”这个信号。这与传感器实际读数不匹配,所以我们知道这两处不是真正的起点。我们也可以在其它可能起点重复这样的推理。唯一的可能是西蒙目前在第二道门前。我们的置信度是:_____no_output_____
<code>
belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])_____no_output_____
</code>
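To check that reasoning mechanically, here is a small sketch (an addition, not part of the original text). It reuses the hallway layout from above and keeps only the starting positions that are consistent with the sequence **door**, **move right**, **door**:
```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])  # same layout as above
n = len(hallway)

# A starting position is consistent if it is a door and the position one
# step to the right (wrapping around the circular hallway) is also a door.
consistent_starts = [i for i in range(n)
                     if hallway[i] == 1 and hallway[(i + 1) % n] == 1]
print(consistent_starts)  # only position 0 survives

# After the move the dog is therefore at position 1, the second door,
# which is exactly the belief assigned in the cell above.
```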
I designed the hallway layout and sensor readings to give us an exact answer quickly. Real problems are not so clear cut. But this should trigger your intuition - the first sensor reading only gave us low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we know more about where he is. You might suspect, correctly, that with a very long hallway and a large number of doors, several sensor readings and position updates would either tell us where Simon is or narrow the possibilities down to a handful. This happens when a set of sensor readings matches only one or a few starting locations.
We could implement this solution now, but instead let's consider a real world complication to the problem._____no_output_____我特意将走廊的地图以及传感器的读数设计成这样以方便快速得到准确的答案。但现实问题往往没有这么清晰。但这个例子可以帮助你建立一种直觉——当收到第一个传感器信号的时候,我们只能以一个较低的置信度(0.333)猜测西蒙的位置,但是当第二个传感器数据到来时,我们就获得了更多关于西蒙位置的信息。你猜得对,就算有一道很长的走廊和许多道门,只要我们有一段足够长的传感器读数和位置更新的信息,我们就能定位西蒙,或者将可能性缩小到有限的几种情况。因为有可能一系列传感器读数只能通过个别起始点获得。
我们现在就可以实现该解法,但在此之前,让我们再考虑考虑这个问题在现实世界中的复杂性。_____no_output_____## Noisy Sensors
Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or misread if he is not facing down the hallway. Thus when I get **door** I cannot use 1/3 as the probability. I have to assign less than 1/3 to each door, and assign a small probability to each blank wall position. Something like
```Python
[.31, .31, .01, .01, .01, .01, .01, .01, .31, .01]
```
At first this may seem insurmountable. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure?
The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise.
Say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to fix that in a moment.
Let's look at that in Python code. Here I use the variable `z` to denote the measurement. `z` or `y` are customary choices in the literature for the measurement. As a programmer I prefer meaningful variable names, but I want you to be able to read the literature and/or other filtering code, so I will start introducing these abbreviated names now._____no_output_____## 传感器噪声
不存在有理想的傳感器。傳感器有可能在西蒙坐在門前撓癢癢的時候無法給出正确定位,也有可能在沒有正面朝向走廊時候給出錯誤讀數。因此當傳感器傳來“門”這一數據时,我不能使用$1/3$作为其概率,而應使用比$1/3$小的数作为門的概率,然后用一个较小的值作为其它位置的概率。一个可能的情况是:
```Python
[.31, .31, .01, .01, .01, .01, .01, .01, .31, .01]
```
乍看之下,问题似乎是无解的。如果传感器含有噪声,那么每一段数据都值得怀疑。在我們無法確定任何事情的情況下,如何下結論呢?
同上面那個問題的回答一樣,我們應該使用概率。我們對於為每個可能的位置賦予一定概率的做法已經習慣了。那麼現在我們需要將傳感器噪聲導致的額外的不確定性考慮進來。
假如我們得到傳感器數據“門”,同時假設根據測試,該類數據正確的概率是錯誤的概率的三倍。在此情況下,概率分佈上對應門的位置應當放大三倍。如果我們這麼做,原來的數據就不再是概率分佈了,但我們後面會介紹修復這個問題的方法。
讓我們看看這種做法用Python怎麼寫。我們這裡用`z`表示測量值。`z`或`y`常常在文獻中用來代表測量值。作為程序員我喜歡更有意義的名字,但我還希望能方便你閱讀有關文獻和查看其它濾波器的代碼,因此我這裡會使用這些簡化的變量名。_____no_output_____
<code>
def update_belief(hall, belief, z, correct_scale):
for i, val in enumerate(hall):
if val == z:
belief[i] *= correct_scale
belief = np.array([0.1] * 10)
reading = 1 # 1 is 'door'
update_belief(hallway, belief, z=reading, correct_scale=3.)
print('belief:', belief)
print('sum =', sum(belief))
plt.figure()
book_plots.bar_plot(belief)belief: [0.3 0.3 0.1 0.1 0.1 0.1 0.1 0.1 0.3 0.1]
sum = 1.6000000000000003
</code>
This is not a probability distribution because it does not sum to 1.0. But the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list. That is easy with NumPy:_____no_output_____該數組的和不為1,因而不構成一個概率分佈。但代碼所作的事情大致還是對的——門對應的置信度(0.3)是其它位置的置信度(0,1)的三倍。我們只需做一個歸一化,就能使數組的和為1. 所謂歸一化,是將數組的各個元素除以自身的總和。這很容易用NumPy實現:_____no_output_____
<code>
belief / sum(belief)_____no_output_____
</code>
FilterPy implements this with the `normalize` function:
```Python
from filterpy.discrete_bayes import normalize
normalize(belief)
```
It is a bit odd to say "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that. The equation for that is
$$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}} {1-prob_{correct}}$$
Also, the `for` loop is cumbersome. As a general rule you will want to avoid using `for` loops in NumPy code. NumPy is implemented in C and Fortran, so if you avoid for loops the result often runs 100x faster than the equivalent loop.
How do we get rid of this `for` loop? NumPy lets you index arrays with boolean arrays. You create a boolean array with logical operators. We can find all the doors in the hallway with:_____no_output_____FilterPy實現了該歸一化函數`normalize`:
```Python
from filterpy.discrete_bayes import normalize
normalize(belief)
```
“正確概率是錯誤概率三倍”這樣的說法很奇怪。我們既然以概率論為工具,那麼更好的做法還是為指定傳感器正確的概率,並據此計算縮放係數。公式如下
$$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}} {1-prob_{correct}}$$
另外,`for`循環也很多餘。通常你需要避免在使用NumPy時寫`for`循環。NumPy是用C和Fortran實現的,如果你能避免經常使用for循環,那麼程序往往能加快100倍。
如何避免`for`循環呢?NumPy可以使用布爾值作為數組的索引。布爾值可以用邏輯運算符得到。我們可以通過如下方式獲得所有門的位置:_____no_output_____
<code>
hallway == 1_____no_output_____
</code>
When you use the boolean array as an index to another array it returns only the elements where the index is `True`. Thus we can replace the `for` loop with
```python
belief[hall==z] *= scale
```
and only the elements which equal `z` will be multiplied by `scale`.
Teaching you NumPy is beyond the scope of this book. I will use idiomatic NumPy constructs and explain them the first time I present them. If you are new to NumPy there are many blog posts and videos on how to use NumPy efficiently and idiomatically.
Here is our improved version:_____no_output_____當你使用布爾類型數組作為其他數組的索引的時候,就會得到對應值為真的位置。根據這個原理我們可以用下面的代碼替換掉前面使用的`for`循環
```python
belief[hall==z] *= scale
```
這樣,只有對應`z`的位置的元素會被縮放`scale`倍。
本書的目的不是NumPy教學。我只會使用常見的NumPy寫法,並且在引入新用法的時候做介紹。如果你是NumPy的新手,網絡上又許多介紹如何高效、規範使用NumPy的文章和視頻。
經過改進的代碼如下:_____no_output_____
<code>
from filterpy.discrete_bayes import normalize
def scaled_update(hall, belief, z, z_prob):
scale = z_prob / (1. - z_prob)
belief[hall==z] *= scale
normalize(belief)
belief = np.array([0.1] * 10)
scaled_update(hallway, belief, z=1, z_prob=.75)
print('sum =', sum(belief))
print('probability of door =', belief[0])
print('probability of wall =', belief[2])
book_plots.bar_plot(belief, ylim=(0, .3))sum = 1.0
probability of door = 0.1875
probability of wall = 0.06249999999999999
</code>
We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions.
This result is called the [*posterior*](https://en.wikipedia.org/wiki/Posterior_probability), which is short for *posterior probability distribution*. All this means is a probability distribution *after* incorporating the measurement information (posterior means 'after' in this context). To review, the *prior* is the probability distribution before including the measurement's information.
Another term is the [*likelihood*](https://en.wikipedia.org/wiki/Likelihood_function). When we computed `belief[hall==z] *= scale` we were computing how *likely* each position was given the measurement. The likelihood is not a probability distribution because it does not sum to one.
The combination of these gives the equation
$$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$
When we talk about the filter's output we typically call the state after performing the prediction the *prior* or *prediction*, and we call the state after the update either the *posterior* or the *estimated state*.
It is very important to learn and internalize these terms as most of the literature uses them extensively.
Does `scaled_update()` perform this computation? It does. Let me recast it into this form:_____no_output_____我們可以通過程序輸出看到數組的和為1.0,同時對應于門的概率是墻壁的概率的三倍。同時,結果顯示對應門的概率小於0.333,這是符合我們直覺的。除此之外,由於我們沒有任何信息能幫助我們對各個門、墻進行內部區分,因此所有墻壁具有一樣的概率,所有門具有一樣的概率,這也是符合我們認識的。
這個結果即所謂的[**後驗**](https://en.wikipedia.org/wiki/Posterior_probability),是**後驗概率分佈**的縮寫。這表示該概率分佈是在考慮測量結果信息**之後**得到的(英文的posterior在此上下文中意思等同於after)。複習一下,**先驗**概率是考慮測量結果信息之前的概率分佈。
另一個術語是[**似然**](https://en.wikipedia.org/wiki/Likelihood_function). 當我們計算`belief[hall==z] *= scale`時,我們計算的是給定測量結果後每個位置的**似然**程度。似然度不是概率分佈,因為其和不必等於1.
結合上述步驟可以得到如下公式
$$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$
當我們討論濾波器的輸出的時候,我們一般將更新前的狀態叫做**先驗**或者**預測**,將更新後的狀態叫做**後驗**或者**估計**。
大多數相關文獻廣泛使用類似術語,因此學習和內化這些術語非常重要。
函數`scaled_update()`包含了此操作了嗎?答案是肯定的。我們可以將其轉化為如下形式:_____no_output_____
<code>
def scaled_update(hall, belief, z, z_prob):
scale = z_prob / (1. - z_prob)
likelihood = np.ones(len(hall))
likelihood[hall==z] *= scale
return normalize(likelihood * belief)_____no_output_____
</code>
This function is not fully general. It contains knowledge about the hallway, and how we match measurements to it. We always strive to write general functions. Here we will remove the computation of the likelihood from the function, and require the caller to compute the likelihood themselves.
Here is a full implementation of the algorithm:
```python
def update(likelihood, prior):
return normalize(likelihood * prior)
```
Computation of the likelihood varies per problem. For example, the sensor might not return just 1 or 0, but a `float` between 0 and 1 indicating the probability of being in front of a door. It might use computer vision and report a blob shape that you then probabilistically match to a door. It might use sonar and return a distance reading. In each case the computation of the likelihood will be different. We will see many examples of this throughout the book, and learn how to perform these calculations.
FilterPy implements `update`. Here is the previous example in a fully general form:_____no_output_____這個函數還不夠通用。它包含有關於走廊問題以及測量方法的知識。而我們總是盡可能寫通用的函數。這裡我們要從函數中移除似然度的計算,要求函數調用者計算似然度。
算法的完整實現如下:
```python
def update(likelihood, prior):
return normalize(likelihood * prior)
```
對於不同問題,似然度計算方法不盡相同。例如,傳感器可能返回的不是0、1信號,而是返回一個出於0和1之間的浮點型小數用於表示目標出於門前的概率。它可能採用計算機視覺的方法去檢測團塊的外形來計算目標物體是門的概率。它也可能通過傳感器獲得距離的讀數。在不同的案例中,計算似然度的方式也不相同。本書中會介紹許多種不同的例子及對應的計算方式。
FilterPy也實現了`update`函數。前面的例子用完全通用的形式寫出來會是這樣:_____no_output_____
<code>
from filterpy.discrete_bayes import update
def lh_hallway(hall, z, z_prob):
""" compute likelihood that a measurement matches
positions in the hallway."""
try:
scale = z_prob / (1. - z_prob)
except ZeroDivisionError:
scale = 1e8
likelihood = np.ones(len(hall))
likelihood[hall==z] *= scale
return likelihood
belief = np.array([0.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
update(likelihood, belief) _____no_output_____
</code>
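The binary door/wall sensor is only one case. As a sketch of how the likelihood changes with the sensor (this example is an addition, not part of the original text, and `lh_soft_sensor` is a hypothetical function), suppose the sensor instead returned a confidence between 0 and 1 that the dog is in front of a door. One plausible likelihood, absent a real calibration model, is:
```python
import numpy as np
from filterpy.discrete_bayes import update

def lh_soft_sensor(hall, z):
    """Hypothetical likelihood for a sensor that reports a confidence z in [0, 1]
    that the dog is in front of a door, instead of a hard 0/1 reading."""
    # Door positions are weighted by the reported confidence,
    # wall positions by the complementary confidence.
    return np.where(hall == 1, z, 1. - z)

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
belief = np.array([0.1] * 10)
likelihood = lh_soft_sensor(hallway, z=0.9)  # sensor is 90% sure it sees a door
print(update(likelihood, belief))
```
A real sensor would get its likelihood from calibration data rather than a hand-picked rule, but the shape of the computation stays the same: a likelihood per position, multiplied into the prior and normalized.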
## Incorporating Movement
Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors?
Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math.
First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?
I hope that after a moment's thought it is clear that we should shift all the values one space to the right. If we previously thought there was a 50% chance of Simon being at position 3, then after he moved one position to the right we should believe that there is a 50% chance he is at position 4. The hallway is circular, so we will use modulo arithmetic to perform the shift._____no_output_____## 考慮運動模型
回想一下之前我們是如何通過同時考慮一系列測量值和運動模型來快速找出準確解的。但是,該解法只存在於可以使用理想傳感器的幻想世界中。我們是否可以通過帶有噪聲的傳感器獲得精確解呢?
不幸的是,答案是否定的。即使傳感器讀數和複雜的走廊地圖完美吻合,我們還是無法百分百確定狗的確切位置——畢竟每個傳感器讀書都有小概率出錯!自然,在典型的環境中,大多數傳感器數據都是正確的,這使得我們的推理正確的概率接近100%,但也永遠達不到100%。這看起來有點複雜,我們且繼續前進,把數學代碼寫出來。
我們先解決一個簡單的問題——假如運動傳感器是準確的,它回報說狗向右移動一步。此時我們應如何更新`belief`數組?
略經思考,你已明白,我們應當將所有數值向右移動一步。假如我們先前認為西蒙處於位置3的概率為50%,那麼現在它處於位置4的概率為50%。走廊是環形的,所以我們使用取模運算來執行此操作。_____no_output_____
<code>
def perfect_predict(belief, move):
""" move the position by `move` spaces, where positive is
to the right, and negative is to the left
"""
n = len(belief)
result = np.zeros(n)
for i in range(n):
result[i] = belief[(i-move) % n]
return result
belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
plt.subplot(121)
book_plots.bar_plot(belief, title='Before prediction', ylim=(0, .4))
belief = perfect_predict(belief, 1)
plt.subplot(122)
book_plots.bar_plot(belief, title='After prediction', ylim=(0, .4))_____no_output_____
</code>
We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning.
The next cell animates this so you can see it in action. Use the slider to move forwards and backwards in time. This simulates Simon walking around and around the hallway. It does not yet incorporate new measurements so the probability distribution does not change shape, only position._____no_output_____可見我們正確地將所有數值都向右移動了一步,最右邊的數組回到了數組的左側。
下一個單元格輸出一個動畫。你可以用滑塊在時間上前移或後移。這就好像西蒙在走廊上四處遊走一般。因為沒有新的測量結果進來,分佈只是發生平移,形狀沒有改變。_____no_output_____
<code>
from ipywidgets import interact, IntSlider
belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
perfect_beliefs = []
for _ in range(20):
# Simon takes one step to the right
belief = perfect_predict(belief, 1)
perfect_beliefs.append(belief)
def simulate(time_step):
book_plots.bar_plot(perfect_beliefs[time_step], ylim=(0, .4))
interact(simulate, time_step=IntSlider(value=0, max=len(perfect_beliefs)-1));_____no_output_____
</code>
## Terminology
Let's pause a moment to review terminology. I introduced this terminology in the last chapter, but let's take a second to help solidify your knowledge.
The *system* is what we are trying to model or filter. Here the system is our dog. The *state* is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the *estimated state* of the system. In practice this often gets called the state, so be careful to understand the context.
One cycle of prediction and updating with a measurement is called the state or system *evolution*, which is short for *time evolution* [7]. Another term is *system propagation*. It refers to how the state of the system changes over time. For filters, time is usually a discrete step, such as 1 second. For our dog tracker the system state is the position of the dog, and the state evolution is the position after a discrete amount of time has passed.
We model the system behavior with the *process model*. Here, our process model is that the dog moves one or more positions at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the *system error* or *process error*.
The prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements.
Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds?
Clearly,
$$ \begin{aligned}
\bar x &= 17 + (15*2) \\
&= 47
\end{aligned}$$
I use bars over variables to indicate that they are priors (predictions). We can write the equation for the process model like this:
$$ \bar x_{k+1} = f_x(\bullet) + x_k$$
$x_k$ is the current position or state. If the dog is at 17 m then $x_k = 17$.
$f_x(\bullet)$ is the state propagation function for x. It describes how much the $x_k$ changes over one time step. For our example it performs the computation $15 \cdot 2$ so we would define it as
$$f_x(v_x, t) = v_x t$$._____no_output_____## 術語
我們暫停複習一下術語。我上一章已經介紹過這些術語,但我們稍微花幾秒鐘鞏固一下知識。
所谓**系統**是我們嘗試建模和濾波的對象。這裡,系統指的是那隻狗。**狀態**表示當前的配置或數值。本章中,狀態是狗的位置。我們一般無法知道真實的狀態,所以我們說濾波器得到的是**狀態估計**。實踐中我們往往也將其稱為狀態,所以你要小心理解上下文。
一個預測步和一個根據測量更新狀態的更新步構成一個循環,這個循環被稱為狀態的**演化**,或系統的演化,它是**時間演化**【7】的縮寫。另一個術語是**系統傳播**。他指的是狀態是如何隨著時間改變的。對於濾波器,時間步是離散的,例如一秒的時間。對於我們的狗的跟蹤問題而言,系統的狀態是狗的位置,狀態的演化是經過一段離散的時間步後狗的位置的改變。
我們用“過程模型”來建模系統的行為。這裡,我們的過程模型是夠經過每個時間步都會移動一段距離。這個模型並不精確地建模狗的運動。模型的誤差稱為“系統誤差”或者“過程誤差”。
每個預測結果都給我們一個新的“先驗”。隨著時間的推進,我們在無測量結果輔助的情況下做出下一時刻的預測。
讓我們看一個例子。當前狗的位置是17m。一個時間步是兩秒,狗的速度是15米每秒。我們預測它兩秒後的位置會在哪裡。
顯而易見,
$$ \begin{aligned}
\bar x &= 17 + (15*2) \\
&= 47
\end{aligned}$$
我通過在符號上加一橫表示先驗(即預測結果)。我們將過程模型用公式表示出來,如下所示:
$$ \bar x_{k+1} = f_x(\bullet) + x_k$$
$x_k$是當前位置或狀態,如果狗在17 m處,那麼$x_k = 17$.
$f_x(\bullet)$是x的狀態傳播函數。它描述了$x_k$經過一個時間步的改變程度。對於這個例子,它執行了此計算$15 \cdot 2$,所以我們將它定義為
$$f_x(v_x, t) = v_x t$$_____no_output_____## Adding Uncertainty to the Prediction
`perfect_predict()` assumes perfect measurements, but all sensors have noise. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? This may sound like an insurmountable problem, but let's model it and see what happens.
Assume that the sensor's movement measurement is 80% likely to be correct, 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if the movement measurement is 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.
Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider the reported movement of 2. If we are 100% certain the dog started from position 3, then there is an 80% chance he is at 5, and a 10% chance for either 4 or 6. Let's try coding that:_____no_output_____## 在預測中考慮不確定性
`perfect_predict()`函數假定測量是完美的,但實際上所有傳感器都有噪聲。如果傳感器顯示狗移動了一位,但實際上移動了兩位,會發生什麼?又或者實際上沒有移動呢?雖然這個問題乍看之下是無法解決的,但我們還是先建模一下問題,看看會發生什麼。
設傳感器測量的位移有80%的幾率是正確的,10%的幾率給出右偏一位的值,10%的概率給出左偏一位的值。即是說,如果傳感器測得的位移是4(向右移4位),那麼狗有80%的概率向右移4位,有10%的概率向右移3位,有10%的概率向右移5位。
對於數組中的每一個結果,我們都需要考慮三種情況的概率。例如,若傳感器報告位移為2,且我們百分百確定狗是從位置3起步的,那麼此時狗有80%的概率位於位置5,各有10%的概率位於4或6.
我們試著用代碼的形式表達這個問題:_____no_output_____
<code>
def predict_move(belief, move, p_under, p_correct, p_over):
n = len(belief)
prior = np.zeros(n)
for i in range(n):
prior[i] = (
belief[(i-move) % n] * p_correct +
belief[(i-move-1) % n] * p_over +
belief[(i-move+1) % n] * p_under)
return prior
belief = [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)_____no_output_____
</code>
It appears to work correctly. Now what happens when our belief is not 100% certain?_____no_output_____看起來代碼工作正常。現在我們看看置信度不是100%的情況會是怎樣?_____no_output_____
<code>
belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)
prior_____no_output_____
</code>
Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. **I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step.**
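As one such worked example (added here; it is not part of the original text), the two numbers called out above can be reproduced by hand:
```python
# belief = [0, 0, .4, .6, 0, ...], move = 2, p_under = 0.1, p_correct = 0.8, p_over = 0.1

# Position 3 is only reached if the dog started at position 2 and undershot,
# moving a single step instead of two.
prior_3 = 0.4 * 0.1

# Position 4 is reached if the dog started at position 2 and moved the full
# two steps, or started at position 3 and undershot by one.
prior_4 = 0.4 * 0.8 + 0.6 * 0.1

print(prior_3, prior_4)  # ~0.04 and ~0.38 (up to rounding), matching predict_move()
```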
If you look at the probabilities after performing the update you might be dismayed. In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the update the probabilities are not only lowered, but they are strewn out across the map.
This is not a coincidence, or the result of a carefully chosen example - it is always true of the prediction. If the sensor is noisy we lose some information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `belief` array. Let's try this with 100 iterations. The plot is animated; use the slider to change the step number._____no_output_____儘管現在情況更加複雜了,但你還是能夠用你的大腦的理解它。0.04是對應0.4置信度的信念被高估的可能性。而0.38是這樣算來的:有80%的概率移動了兩步 (0.4 $\times$ 0.8)和10%概率高估了位移(0.6 $\times$ 0.1)。低估的情況不參與計算,因為這種情況下對應於0.4和0.6的信念都會跳過該點。**我強烈建議你多使幾個例子,直到你深刻理解它們,這是因為後面許多內容都依賴於這一步。**
如果你看過更新後的概率,那你可能會感覺失望。上面的例子中,我們開始時對兩個位置各有0.4和0.6的置信度;在更新後,置信度不僅減小了,它們還在地圖上分散開來。
這不是偶然,也不是特意挑選的例子才能產生的結果——不論如何,預測的結果永遠會像這樣。如果傳感器包含噪聲,我們每次預測就都會丟失信息。假如我們在無限的時間裡無數次預測——結果會是怎樣?如果我們每次預測都丟失信息,我們最終會什麼信息都無法留下,我們的`belief`數組上的概率分佈將會處處均等。我們試試迭代100次。下面是繪製的動畫。你可以用滑塊來逐步瀏覽。
_____no_output_____
<code>
belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
predict_beliefs = []
for i in range(100):
belief = predict_move(belief, 1, .1, .8, .1)
predict_beliefs.append(belief)
print('Final Belief:', belief)
# make interactive plot
def show_prior(step):
book_plots.bar_plot(predict_beliefs[step-1])
plt.title(f'Step {step}')
interact(show_prior, step=IntSlider(value=1, max=len(predict_beliefs)));Final Belief: [0.104 0.103 0.101 0.099 0.097 0.096 0.097 0.099 0.101 0.103]
print('Final Belief:', belief)Final Belief: [0.104 0.103 0.101 0.099 0.097 0.096 0.097 0.099 0.101 0.103]
</code>
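To make "losing information" concrete, the short addition below (not part of the original text) tracks the Shannon entropy of the belief as predictions accumulate, reusing `predict_move()` from above. The entropy starts at 0 for a completely certain belief and climbs toward the entropy of the uniform distribution, $\ln 10 \approx 2.30$:
```python
import numpy as np

def entropy(pdf):
    """Shannon entropy in nats, treating 0 * log(0) as 0."""
    p = np.asarray(pdf)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

belief = np.array([1., 0, 0, 0, 0, 0, 0, 0, 0, 0])
for i in range(1, 201):
    belief = predict_move(belief, 1, .1, .8, .1)  # same noisy prediction as above
    if i in (1, 10, 50, 100, 200):
        print(f'step {i:3d}: entropy = {entropy(belief):.3f}')
print('uniform entropy =', np.log(10))
```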
After 100 iterations we have lost almost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of differing number of updates. For example, after 100 updates a small amount of information is left, after 50 a lot is left, but by 200 iterations essentially all information is lost._____no_output_____儘管我們以100%的置信度相信我們從0點開始,100次迭代後我們仍然幾乎丟失了所有信息。請隨意改變數目,觀察步數的影響。例如,經過100次更新,仍存在有一小部分信息;50次更新後,留下的信息較多;而200次更新後基本上所有數據都丟失了。_____no_output_____And, if you are viewing this online here is an animation of that output.
<img src="animations/02_no_info.gif">
I will not generate these standalone animations through the rest of the book. Please see the preface for instructions to run this book on the web, for free, or install IPython on your computer. This will allow you to run all of the cells and see the animations. It's very important that you practice with this code, not just read passively._____no_output_____另外,如果你通過線上方式閱讀本書,你會看到這裡有輸出的動圖。
<img src="animations/02_no_info.gif">
這之後我就不會再生成單獨的動圖了。請遵循本前言部分的介紹在網頁上,或者在你電腦上配置IPython以免費運行本書內容。這樣你就能運行所有單元格並且看到動圖。為了能練習本書代碼而不僅僅是被動閱讀,這點非常重要。_____no_output_____## Generalizing with Convolution
We made the assumption that the movement error is at most one position. But it is possible for the error to be two, three, or more positions. As programmers we always want to generalize our code so that it works for all cases.
This is easily solved with [*convolution*](https://en.wikipedia.org/wiki/Convolution). Convolution modifies one function with another function. In our case we are modifying a probability distribution with the error function of the sensor. The implementation of `predict_move()` is a convolution, though we did not call it that. Formally, convolution is defined as_____no_output_____## 在卷積的幫助下推廣該方法
我們先前假設位移的誤差最多不差過一位。但實際上可能有兩位、三位、甚至更多。作為程序員,我們總希望能將我們的代碼推廣到適應所有情況。
這可以藉助[**卷積**](https://en.wikipedia.org/wiki/Convolution)工具輕鬆解決。卷積通過一個函數來修改另一個函數。在我們的案例中,我們用傳感器的誤差函數修改概率分佈。雖然我們之前沒這麼稱呼它,但`predict_move()`函數的實現就是一個卷積。卷積的正式定義如下:_____no_output_____$$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$_____no_output_____where $f\ast g$ is the notation for convolving f by g. It does not mean multiply.
Integrals are for continuous functions, but we are using discrete functions. We replace the integral with a summation, and the parenthesis with array brackets.
$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$
Comparison shows that `predict_move()` is computing this equation - it computes the sum of a series of multiplications.
[Khan Academy](https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution) [4] has a good introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear. You slide an array called the *kernel* across another array, multiplying the neighbors of the current cell with the values of the second array. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array `[0.1, 0.8, 0.1]`. All we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named it `pdf`._____no_output_____其中 $f\ast g$ 表示f和g的卷積。它不代表乘法。
積分對應於連續函數,但我們使用的是離散函數。我們將積分替換為求和符號,將圓括號換成數組使用的方括號。
$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$
比較發現`predict_move()`是在實行該計算——即一系列數值的積的和。
[可汗學院](https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution) 【4】很好地介紹了卷積。維基百科提供了描繪卷積的優美動圖【5】。不論如何,卷積的大概思想是清除易懂的。你將一個稱為“核(kernel)”的數組劃過另一個數組,連同當前單元格的鄰接單元格與對應數組上的單元格相乘。在上面的例子中,我們用0.8作為正確估計的概率,0.1作為高估的概率,0.1作為低估的概率。這可以用數組`[0.1, 0.8, 0.1]`構成的核來表示。我們所要做的事情循環遍歷數組的每一元素,與核對應相乘,對結果求和。為強調置信度是一個概率分佈,我用`pdf`作為變量名。_____no_output_____
<code>
def predict_move_convolution(pdf, offset, kernel):
    N = len(pdf)
    kN = len(kernel)
    width = int((kN - 1) / 2)

    prior = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width-k) - offset) % N
            prior[i] += pdf[index] * kernel[k]
    return prior_____no_output_____
</code>
This illustrates the algorithm, but it runs very slowly. SciPy provides a convolution routine `convolve()` in the `ndimage` module (older releases exposed it under `ndimage.filters`). We need to shift the pdf by `offset` before convolving; `np.roll()` does that. The move and predict algorithm can be implemented with one line:
```python
convolve(np.roll(pdf, offset), kernel, mode='wrap')
```
FilterPy implements this with `discrete_bayes`' `predict()` function._____no_output_____雖然該函數演示了算法執行流程,然而它執行得很慢。SciPy庫在`ndimage.filters`包中提供了卷積操作`convolve()`。我們要在作卷積前先將pdf平移`offset`步,這可以通過`np.roll()`函數實現。移動操作和預測操作可以由一行代碼實現:
```python
convolve(np.roll(pdf, offset), kernel, mode='wrap')
```
FilterPy在`discrete_bayes`的`predict()`函數中實現了此操作。_____no_output_____
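As a quick check that the one-liner agrees with the loop implementation, you can compare the two on a small example. This is only a sketch: it assumes `predict_move_convolution()` defined earlier is in scope, and it imports `convolve` from `scipy.ndimage` (older SciPy versions expose it under `scipy.ndimage.filters`).
```python
import numpy as np
from scipy.ndimage import convolve

pdf = np.array([0, 0, .4, .6, 0, 0, 0, 0, 0, 0])
kernel = np.array([.1, .8, .1])
offset = 2

loop_prior = predict_move_convolution(pdf, offset, kernel)
scipy_prior = convolve(np.roll(pdf, offset), kernel, mode='wrap')
print(np.allclose(loop_prior, scipy_prior))  # expected: True
```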
<code>
from filterpy.discrete_bayes import predict
belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict(belief, offset=1, kernel=[.1, .8, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))_____no_output_____
</code>
All of the elements are unchanged except the middle ones. The values in position 4 and 6 should be
$$(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$
Position 5 should be $$(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$$
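A quick way to verify the hand calculation (assuming the `prior` from the cell above is still in scope):
```python
print(prior[4], prior[5], prior[6])  # expected, per the hand calculation: 0.1 0.45 0.1
```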
Let's ensure that it shifts the positions correctly for movements greater than one and for asymmetric kernels._____no_output_____除去中部的幾個數值外,其它數保持不變。位於4和6除的概率應為
$$(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$
位置5處的概率應為$$(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$$
接著,我們來確認一下對於移動量大於1,且非對稱的核,它也能正確地移動位置。
_____no_output_____
<code>
prior = predict(belief, offset=3, kernel=[.05, .05, .6, .2, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))_____no_output_____
</code>
The position was correctly shifted by 3 positions and we give more weight to the likelihood of an overshoot vs an undershoot, so this looks correct.
Make sure you understand what we are doing. We are making a prediction of where the dog is moving, and convolving the probabilities to get the prior.
If we weren't using probabilities we would use this equation that I gave earlier:
$$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$
The prior, our prediction of where the dog will be, is the amount the dog moved plus his current position. The dog was at 10, he moved 5 meters, so he is now at 15 m. It couldn't be simpler. But we are using probabilities to model this, so our equation is:
$$ \bar{ \mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$
We are *convolving* the current probabilistic position estimate with a probabilistic estimate of how much we think the dog moved. It's the same concept, but the math is slightly different. $\mathbf x$ is bold to denote that it is an array of numbers. _____no_output_____预测位置正确移动了三步,且我们为偏大的位移给出了更高的似然权重,所以结果看起来是正确的。
你要保证确实理解我们在做的事情。我们在预测狗的位移,通过对概率分布做卷积来给出先验:
如果我们使用的不是概率分布,那么我们需要使用前面给出的公式
$$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$
先验等于狗的当前位置加上狗的位移(这里先验指的是狗的预测位置)。如果狗在位置10,位移了5米,那么它现在出现在15米的位置。简单到不能再简单了。但现在我们使用概率分布来建模,于是我们的公式变为:
$$ \bar{ \mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$_____no_output_____## Integrating Measurements and Movement Updates
The problem of losing information during a prediction may make it seem as if our system would quickly devolve into having no knowledge. However, each prediction is followed by an update where we incorporate the measurement into the estimate. The update improves our knowledge. The output of the update step is fed into the next prediction. The prediction degrades our certainty. That is passed into another update, where certainty is again increased.
Let's think about this intuitively. Consider a simple case - you are tracking a dog while he sits still. During each prediction you predict he doesn't move. Your filter quickly *converges* on an accurate estimate of his position. Then the microwave in the kitchen turns on, and he goes streaking off. You don't know this, so at the next prediction you predict he is in the same spot. But the measurements tell a different story. As you incorporate the measurements your belief will be smeared along the hallway, leading towards the kitchen. On every epoch (cycle) your belief that he is sitting still will get smaller, and your belief that he is inbound towards the kitchen at a startling rate of speed increases.
That is what intuition tells us. What does the math tell us?
We have already programmed the update and predict steps. All we need to do is feed the result of one into the other, and we will have implemented a dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right one position each epoch. As in a real world application, we will start with no knowledge of his position by assigning equal probability to all positions. _____no_output_____## 在位移更新过程中结合测量值
因為在预测过程中存在信息丢失的问题,所以看起來似乎我们的系统会迅速退化到没有任何信息的状态。然而並非如此,因為每次預測後面都會緊跟著一個更新操作。有了更新操作,我們就可以在作估計時將測量結果納入考量。更新操作能改善信息的質量。更新的結果作為下一次預測的輸入。經過預測,確定性降低了。其結果傳遞給下一次更新,確定性又再一次得到增強。
讓我們從直覺上思考這個問題。考慮一個簡單的例子:你需要跟蹤一條狗,而這條狗永遠坐在那裡不動。每次預測,你給出的結果都是它原地不動。於是你的濾波器迅速“收斂”到其位置的精確估計。這時候廚房的微波爐打開了,狗狗飛奔出去。你不知道這件事,所以下一次預測,你還是預言它原地不動。而這時測量值則傳遞出相悖的信息。如果你結合測量結果去做更新,那你關於位置的信念起始時散佈在走廊上各處,總體向著廚房移動。每一輪迭代(循環),你對狗原地不動的信念俞弱,俞相信狗在以驚人的速度向廚房進發。
這是直覺上我們所能理解的。那麼我們是否能從數學中得到什麼呢?
我們已編寫好更新和預測操作。我們所要做的只是將其中一步的結果傳給下一步,這樣我們就實現了一個狗跟蹤器!!!我們看下它表現如何。我們輸入測量值,假裝狗從位置0開始移動,每次向右移動一步。如果是在真實世界的應用中,起始狀態下我們沒有任何關於其位置的知識,這時我們就為每種可能位置賦予相等的概率。
_____no_output_____
<code>
from filterpy.discrete_bayes import update
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
prior = np.array([.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))_____no_output_____
</code>
After the first update we have assigned a high probability to each door position, and a low probability to each wall position. _____no_output_____第一次更新後,我们为每个门的位置分配了更高的权重,而为墙壁的位置赋予了较低的权重。_____no_output_____
<code>
kernel = (.1, .8, .1)
prior = predict(posterior, 1, kernel)
book_plots.plot_prior_vs_posterior(prior, posterior, True, ylim=(0,.5))_____no_output_____
</code>
The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense._____no_output_____预测操作使得概率分布右移,并且使分布散得更开。现在让我们看看下一个读入传感器的读数会发生什么。_____no_output_____
<code>
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))_____no_output_____
</code>
Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall._____no_output_____注意看位置1处的高峰。它对应的(正確)情況是:以位置0为起点出發,傳感器感應到門,向右移一步,然後傳感器再次感應到門。除此以外的情形則不太可能產生同樣的觀測結果。現在我們增加一個更新操作,這個更新操作中傳感器感應到墻壁。_____no_output_____
<code>
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))_____no_output_____
</code>
This is exciting! We have a very prominent bar at position 2 with a value of around 35%. It is over twice the value of any other bar in the plot, and is about 4% larger than our last plot, where the tallest bar was around 31%. Let's see one more cycle._____no_output_____結果真是令人激動!條形圖在位置2處的數值顯著突出,其值為35%,其高度在其它任意一柱高度的兩倍以上。上一張圖的最高高度約為31%,所以經過本次操作高度提高的量約為4%。我們再觀察一輪。_____no_output_____
<code>
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))_____no_output_____
</code>
I ignored an important issue. Earlier I assumed that we had a motion sensor for the predict step; then, when talking about the dog and the microwave I assumed that you had no knowledge that he suddenly began running. I mentioned that your belief that the dog is running would increase over time, but I did not provide any code for this. In short, how do we detect and/or estimate changes in the process model if we aren't directly measuring it?
For now I want to ignore this problem. In later chapters we will learn the mathematics behind this estimation; for now it is a large enough task just to learn this algorithm. It is profoundly important to solve this problem, but we haven't yet built enough of the mathematical apparatus that is required, and so for the remainder of the chapter we will ignore the problem by assuming we have a sensor that senses movement._____no_output_____我忽略了一個重要問題。起初我們為預測步提供了運動傳感器,可是之後談及狗與微波爐的例子的時候,我卻假定你沒有關於狗突然開始運動的知識。我斷言道即使如此你仍會越來越相信狗處於運動狀態,但我未提供任何證明該斷言的代碼。簡而言之,在不直接測量的條件下,我們如何才能檢測或估計過程模型狀態的改變呢?
我想暫時擱置這一問題。後續章節會介紹估計方法幕後的數學原理。而現在,僅僅是學習算法就已經是一個大任務。雖然解決這個問題很重要,但是我們還缺乏解決該問題所需的數學工具。那麼本章的後續部分會暫時忽略這個問題,仍然假設我們有一個專門用於測量運動的傳感器。_____no_output_____## The Discrete Bayes Algorithm
This chart illustrates the algorithm:_____no_output_____## 離散貝葉斯算法
下圖顯示了算法的流程:_____no_output_____
<code>
book_plots.predict_update_chart()_____no_output_____
</code>
This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter.
The filter equations are:
$$\begin{aligned} \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{Predict Step} \\
\mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{Update Step}\end{aligned}$$
$\mathcal L$ is the usual way to write the likelihood function, so I use that. The $\|\|$ notation denotes taking the norm. We need to normalize the product of the likelihood with the prior to ensure $x$ is a probability distribution that sums to one.
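To make the normalization concrete, here is a minimal sketch of the update equation in NumPy. It is not FilterPy's code, just the equation above written out; FilterPy's `update()` performs essentially this computation.
```python
import numpy as np

def update_sketch(likelihood, prior):
    """Posterior = ||likelihood * prior||: elementwise product, renormalized to sum to one."""
    posterior = likelihood * prior
    return posterior / np.sum(posterior)
```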
We can express this in pseudocode.
**Initialization**
1. Initialize our belief in the state
**Predict**
1. Based on the system behavior, predict state for the next time step
2. Adjust belief to account for the uncertainty in prediction
**Update**
1. Get a measurement and associated belief about its accuracy
2. Compute how likely it is the measurement matches each state
3. Update state belief with this likelihood
When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ.
Algorithms in this form are sometimes called *predictor-correctors*. We make a prediction, then correct it with a measurement.
Let's animate this. First Let's write functions to perform the filtering and to plot the results at any step. I've plotted the position of the doorways in black. Prior are drawn in orange, and the posterior in blue. I draw a thick vertical line to indicate where Simon really is. This is not an output of the filter - we know where Simon is only because we are simulating his movement._____no_output_____如圖所示的濾波器是g-h濾波器的一種特殊形式。這裡我們用誤差的百分比隱式計算g和h參數。我們也可以將貝葉斯濾波器用g-h濾波器的形式該寫出來,但這麼做會使得濾波器所遵循的邏輯變得模糊。
濾波器的公式如下:
$$\begin{aligned} \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{預測操作} \\
\mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{更新操作}\end{aligned}$$
遵循慣例,我是用$\mathcal L$來代表似然函數。$\|\|$符號表示取模。我們需要對似然與先驗的乘積作歸一化來確保$x$是和為1的概率分佈。
我們可以用如下的偽代碼來表達這個過程。
**初始化**
1. 為狀態的置信度賦予初始值。
**預測**
1. 基於系統的行為預測下一時間步的狀態;
2. 根據預測操作的不確定性調整置信度;
**更新**
1. 得到測量值和對測量精度的置信度;
2. 計算測量值與真實狀態相符的似然程度;
3. 根據似然更新狀態置信度;
在卡爾曼濾波的章節,我們會使用完全一致的算法;只在於計算的細節上有所不同。
這類算法有時候被稱為“預測校準器”,因為它們先做預測,再修正預測的值。
讓我們用動畫來展示這個算法。我們先實現濾波函數,然後將每一步結果繪製出來。我用黑色來指示門道的位置。用橙色來繪製先驗,用藍色繪製後驗。縱向粗線用於指示西蒙的實際位置。注意它不是濾波器的輸出——之所以我們能知道西蒙的真實位置是因為我們在用程序模擬這個過程。_____no_output_____
<code>
def discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway):
    posterior = np.array([.1]*10)
    priors, posteriors = [], []
    for i, z in enumerate(measurements):
        prior = predict(posterior, 1, kernel)
        priors.append(prior)

        likelihood = lh_hallway(hallway, z, z_prob)
        posterior = update(likelihood, prior)
        posteriors.append(posterior)
    return priors, posteriors

def plot_posterior(hallway, posteriors, i):
    plt.title('Posterior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(posteriors[i], ylim=(0, 1.0))
    plt.axvline(i % len(hallway), lw=5)

def plot_prior(hallway, priors, i):
    plt.title('Prior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(priors[i], ylim=(0, 1.0), c='#ff8015')
    plt.axvline(i % len(hallway), lw=5)

def animate_discrete_bayes(hallway, priors, posteriors):
    def animate(step):
        step -= 1
        i = step // 2
        if step % 2 == 0:
            plot_prior(hallway, priors, i)
        else:
            plot_posterior(hallway, posteriors, i)
    return animate_____no_output_____
</code>
Let's run the filter and animate it._____no_output_____讓我們運行濾波器,並觀察運行結果。_____no_output_____
<code>
# change these numbers to alter the simulation
kernel = (.1, .8, .1)
z_prob = 1.0
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
# measurements with no noise
zs = [hallway[i % len(hallway)] for i in range(50)]
priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway)
interact(animate_discrete_bayes(hallway, priors, posteriors), step=IntSlider(value=1, max=len(zs)*2));_____no_output_____
</code>
Now we can see the results. You can see how the prior shifts the position and reduces certainty, and the posterior stays in the same position and increases certainty as it incorporates the information from the measurement. I've made the measurement perfect with the line `z_prob = 1.0`; we will explore the effect of imperfect measurements in the next section.
Another thing to note is how accurate our estimate becomes when we are in front of a door, and how it degrades when in the middle of the hallway. This should make intuitive sense. There are only a few doorways, so when the sensor tells us we are in front of a door this boosts our certainty in our position. A long stretch of no doors reduces our certainty._____no_output_____現在,我們可以觀察到結果。你可以看到先驗是如何發生移動的,其不確定性是如何減少的。還注意到雖然先驗與後驗的極大值點重合,但後驗的確定性更高,這是後驗結合了測量信息的緣故。這裡通過令`z_prob = 1.0`使得測量是完全準確的。後續小節我們會探索不完美的測量造成的影響。
最後一個值得注意的事是,當狗處於門前時估計的準確度是如何增加的,以及當狗位於走廊中心時它是如何退化的。你可以從直覺上理解這個問題。門的數量很少,所以一旦傳感器感應到門,我們對位置的確定性就增加。成片的非門區域則會降低確定性。_____no_output_____## The Effect of Bad Sensor Data
You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?
To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways, and run the algorithm on 6 correct measurements:_____no_output_____## 不良傳感器數據的影響
你可能會對上面的結果表示懷疑,畢竟我一直只給函數傳入正確的傳感器數據。然而,既然我們聲稱自己實現了“濾波器”,那麼它應當能過濾不良傳感器數據。那麼它確實能做到這一點嗎?
為使得問題易於編程實現和方便可視化,我要改變走廊的佈局,使門道和走廊均勻交替分佈。以六個正確測量結果為輸入運行算法:_____no_output_____
<code>
hallway = np.array([1, 0, 1, 0, 0]*2)
kernel = (.1, .8, .1)
prior = np.array([.1] * 10)
zs = [1, 0, 1, 0, 0, 1]
z_prob = 0.75
priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway)
interact(animate_discrete_bayes(hallway, priors, posteriors), step=IntSlider(value=12, max=len(zs)*2));_____no_output_____
</code>
We have identified the likely cases of having started at position 0 or 5, because we saw this sequence of doors and walls: 1,0,1,0,0. Now I inject a bad measurement. The next measurement should be 0, but instead we get a 1:_____no_output_____我們看到最可能的起始位置是0或5。這是因為傳感器的讀數序列為:1、0、1、0、0.現在我插入一個錯誤的測量值。原本下一個讀數應當是0,但我將其替換為1._____no_output_____
<code>
measurements = [1, 0, 1, 0, 0, 1, 1]
priors, posteriors = discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway);
plot_posterior(hallway, posteriors, 6)_____no_output_____
</code>
That one bad measurement has significantly eroded our knowledge. Now let's continue with a series of correct measurements._____no_output_____一個不良傳感器數據的插入嚴重污染了我們的知識。接著,我們在正確測量數據上繼續運行。_____no_output_____
<code>
with figsize(y=5.5):
    measurements = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
    for i, m in enumerate(measurements):
        likelihood = lh_hallway(hallway, z=m, z_prob=.75)
        posterior = update(likelihood, prior)
        prior = predict(posterior, 1, kernel)

        plt.subplot(5, 2, i+1)
        book_plots.bar_plot(posterior, ylim=(0, .4), title=f'step {i+1}')
    plt.tight_layout()_____no_output_____
</code>
We quickly filtered out the bad sensor reading and converged on the most likely positions for our dog._____no_output_____我們很快就濾除了不良傳感器讀數,並且概率分佈收斂在狗的最可能位置上。_____no_output_____## Drawbacks and Limitations
Do not be misled by the simplicity of the examples I chose. This is a robust and complete filter, and you may use the code in real world solutions. If you need a multimodal, discrete filter, this filter works.
With that said, this filter is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book.
The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of an array we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.
The second problem is that the filter is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. A 100 meter hallway requires 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. A 100x100 m$^2$ courtyard requires 100,000,000 bins to get 1cm accuracy.
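To put numbers on that growth (simple integer arithmetic, shown only for illustration):
```python
positions = 100 * 100          # 100 m hallway at 1 cm resolution -> 10,000 positions
bins = (100 * 100) ** 2        # 100x100 m courtyard at 1 cm resolution -> 100,000,000 bins
print(positions, bins)
```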
A third problem is that the filter is multimodal. In the last example we ended up with strong beliefs that the dog was in position 4 or 9. This is not always a problem. Particle filters, which we will study later, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, and 30% sure you are on Willow Avenue.
A fourth problem is that it requires a measurement of the change in state. We need a motion sensor to detect how much the dog moves. There are ways to work around this problem, but it would complicate the exposition of this chapter, so, given the aforementioned problems, I will not discuss it further.
With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues._____no_output_____## 缺點和局限
不要受我選的這個示例的簡單性所誤導。這是一個穩健的完整濾波器,可以應用於現實世界的解決方案。如果你需要一個多峰的,離散的濾波器,那麼這個濾波器可以為你所用。
說是這麼說。實際上由於一些限制,這種濾波器也不是經常使用。餘下的章節就主要圍繞如何克服這些限制展開。
第一個問題在於其伸縮性。我們的狗跟蹤問題只使用一個變量$pos$來表示狗的位置。許多有趣的問題都需要在一個大的向量空間中跟蹤多個變量。比如在現實中我們往往需要跟蹤狗的位置$(x,y)$,有時還需跟蹤其速度$(\dot{x},\dot{y})$ 。我們還沒有處理過多維的情況。在高維空間中,我們不再使用一維數組表示狀態,而是使用一個多維的網格來存儲各離散位置的對應概率。每個`update()`和`predict()`步都需要更新網格上的所有位置。那麼一個含有四個變量**每一步**運算都需要$O(n^4)$的運行時間。現實世界中的濾波器往往有超過10個需要跟蹤的變量,這需要極多的計算資源。
第二個問題是這種濾波器是離散的,但我們生活的世界是連續的。這種基於直方圖的方式要求你將濾波器的輸出建模為一些列離散點。要在100米的走廊上達到1cm的定位精度需要10000個點。情況隨著維度的增加指數惡化。要在100平方米的庭院內達到1cm的精確度需要尺寸為一億的直方圖。
第三個問題是,這種濾波器是多峰的。上一個問題中,程序以堅信狗處於位置4或9的狀態結束。這並不總成問題。我們後面將介紹粒子濾波器。粒子濾波器正是因為其具有多峰的性質而被廣泛應用。但你可以想象看你車里的GPS報告說你有40%的概率位於D街,又有30%的概率位於柳樹大道嗎。
第四個問題是它需要狀態改變程度的測量值。我們需要運動傳感器以測量狗的運動量。有許多應對該問題的方法,但為不使本章的闡述過於複雜這裡就不再介紹。總之基於上述所有原因,我們不再做進一步的討論。
話雖如此,如果我手頭有一個可以由這項技術處理的小問題,我就會使用它。易於實現、調試和理解都是它的優點。
_____no_output_____## Tracking and Control
We have been passively tracking an autonomously moving object. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go there. But train tracks and robot motors are imperfect. Wheel slippage and imperfect motors means that the robot is unlikely to travel to exactly the position you command. There is more than one robot, and we need to know where they all are so we do not cause them to crash.
So we add sensors. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we count 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. We can use the code from the previous section to track our robot since magnet counting is very similar to doorway sensing.
But we are not done. We've learned to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? We know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. If I send the command 'move left 1 unit' I expect that in one second from now the robot will be 1 unit to the left of where it is now. This is a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. Wheels and motors are imperfect. The robot might end up 0.9 units away, or maybe 1.2 units.
Now the entire solution is clear. We assumed that the dog kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on assumption of behavior we will feed in the command that we sent to the robot! In other words, when we call `predict()` we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement._____no_output_____## 跟蹤和控制
我們已經實現了對單個自主移動目標的被動跟蹤。但請你考慮一個十分相似的問題。我要實現仓库自動化,希望使用机器人收集客户訂的所有貨。或許一個最簡單的辦法是讓機器人在火車軌道上行駛。我希望我能讓機器人去到我所指定的目的地。但是鐵軌和機器人的發動機都不是完美的。輪胎打滑和發動機不完美決定了機器人不太可能準確移動到你指定的位置。機器人不止一個,我需要知道所有機器人的位置以免它們碰撞。
所以我们增加了传感器。也许我们每隔几英尺就在轨道上安装一块磁铁,并使用霍尔传感器来计算經過的磁铁数。如果我們數到10個磁鐵,那麼機器人應在第10個磁鐵處。當然,有可能會發生漏掉一塊磁鐵沒統計,或者一塊磁鐵被數了兩次的情況,所以我們需要能適應一定程度的誤差。因為磁鐵計數和走廊傳感器相似,所以我們可以使用前面的代碼來跟蹤機器人。
但這還沒有完成。我們學到一句話:永遠不要丟掉任何信息。如果信息存在,就應當用來改善你的估計。有什麼信息是為我們忽略的呢?我們能即時獲得對機器輪的控制信號輸入。比如,不妨設我們每秒傳遞一個信號給機器人——左一步,右一步,站住不動。我一送出命令“左一步”,我就預期一秒後機器人將位於當前位置的左邊一步。我沒有考慮加速度,所以這隻是一個簡化的問題。但我也不打算在這裡教控制論。車輪和發動機都是不完美的。機器人可能只移動0.9步,也可能移動1.2步。
現在整個問題的解清晰了。我們先前假定狗總是保持之前的移動方向。這個假設對於我的狗來說是不可靠的。但機器人的行為就容易預測得多。預期使用假設得到一個不準確的預測,不如將我們送給機器人的命令作為輸入!換句話說,當調用`predict()`函數時,我們將送給機器人的移動命令,同描述移動的似然度的卷積核一起作為函數的輸入。_____no_output_____### Simulating the Train Behavior
We need to simulate an imperfect train. When we command it to move it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value._____no_output_____## 列車的行為模擬
我們要模擬一個不完美的列車。當我們命令它移動時,它偶爾會犯一些小錯,它的傳感器有時會返回錯誤的值。_____no_output_____
<code>
import random  # used by move() and sense(); imported here so the cell is self-contained

class Train(object):
    def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9):
        self.track_len = track_len
        self.pos = 0
        self.kernel = kernel
        self.sensor_accuracy = sensor_accuracy

    def move(self, distance=1):
        """ move in the specified direction
        with some small chance of error"""
        self.pos += distance

        # insert random movement error according to kernel
        r = random.random()
        s = 0
        offset = -(len(self.kernel) - 1) / 2
        for k in self.kernel:
            s += k
            if r <= s:
                break
            offset += 1
        self.pos = int((self.pos + offset) % self.track_len)
        return self.pos

    def sense(self):
        pos = self.pos

        # insert random sensor error
        if random.random() > self.sensor_accuracy:
            if random.random() > 0.5:
                pos += 1
            else:
                pos -= 1
        return pos_____no_output_____
</code>
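Here is a quick, illustrative run of the simulator. The seed is arbitrary and the printed values will depend on the random draws; this is just to show the interface before we wrap it in a filter.
```python
import random

random.seed(42)  # any seed; set only so the run is repeatable
t = Train(track_len=10, kernel=[.1, .8, .1], sensor_accuracy=.9)
for _ in range(3):
    actual = t.move(distance=4)
    print('moved to', actual, 'sensed', t.sense())
```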
With that we are ready to write the filter. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say 10,000, with the magnet pattern repeated every 10 units. A length of 10 makes it easier to plot and inspect._____no_output_____有了這個我們就可以實現濾波器了。我們將其封裝為一個函數,以便我們可以在不同假設條件下運行這段代碼。我假設機器人總是從軌道的起點出發。我們實現的軌道長度為10個單位。你可以想象他是一個每10單位長度就放置一塊磁鐵的10000單位長度的軌道。令長度為10有利於繪圖和分析。_____no_output_____
<code>
from filterpy.discrete_bayes import normalize  # scales a pdf so it sums to one (may already be imported earlier in the chapter)

def train_filter(iterations, kernel, sensor_accuracy,
                 move_distance, do_print=True):
    track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    prior = np.array([.9] + [0.01]*9)
    posterior = prior[:]
    normalize(prior)

    robot = Train(len(track), kernel, sensor_accuracy)
    for i in range(iterations):
        # move the robot and
        robot.move(distance=move_distance)

        # perform prediction
        prior = predict(posterior, move_distance, kernel)

        # and update the filter
        m = robot.sense()
        likelihood = lh_hallway(track, m, sensor_accuracy)
        posterior = update(likelihood, prior)
        index = np.argmax(posterior)

        if do_print:
            print(f'time {i}: pos {robot.pos}, sensed {m}, at position {track[robot.pos]}')
            conf = posterior[index] * 100
            print(f'         estimated position is {index} with confidence {conf:.4f}%:')

    book_plots.bar_plot(posterior)
    if do_print:
        print()
        print('final position is', robot.pos)
        index = np.argmax(posterior)
        print('''Estimated position is {} with '''
              '''confidence {:.4f}%:'''.format(
              index, posterior[index]*100))_____no_output_____
</code>
Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works make sure you read through it carefully to solidify your understanding._____no_output_____請閱讀代碼并且保证你确实理解它。我們先在沒有傳感器誤差和運動誤差的情況下運行代码。如果代码正确,那么它应当能正确无误地检出目标机器人。虽然程序输出有点冗长难读,但若你对更新/预测循环的工作方式完全不确顶,你务必通读这些文字以巩固你的理解。_____no_output_____
<code>
import random
random.seed(3)
np.set_printoptions(precision=2, suppress=True, linewidth=60)
train_filter(4, kernel=[1.], sensor_accuracy=.999,
move_distance=4, do_print=True)time 0: pos 4, sensed 4, at position 4
estimated position is 4 with confidence 99.9900%:
time 1: pos 8, sensed 8, at position 8
estimated position is 8 with confidence 100.0000%:
time 2: pos 2, sensed 2, at position 2
estimated position is 2 with confidence 100.0000%:
time 3: pos 6, sensed 6, at position 6
estimated position is 6 with confidence 100.0000%:
final position is 6
Estimated position is 6 with confidence 100.0000%:
</code>
We can see that the code was able to perfectly track the robot so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors. _____no_output_____我们可以看到程序完美无误地实现了对机器人的跟踪,所以我们相当有信心相信代码在正常工作。现在我们来看一些失败案例和几个错误。_____no_output_____
<code>
random.seed(5)
train_filter(4, kernel=[.1, .8, .1], sensor_accuracy=.9,
move_distance=4, do_print=True)time 0: pos 4, sensed 4, at position 4
estimated position is 4 with confidence 96.0390%:
time 1: pos 8, sensed 9, at position 8
estimated position is 9 with confidence 52.1180%:
time 2: pos 3, sensed 3, at position 3
estimated position is 3 with confidence 88.3993%:
time 3: pos 7, sensed 8, at position 7
estimated position is 8 with confidence 49.3174%:
final position is 7
Estimated position is 8 with confidence 49.3174%:
</code>
There were sensing errors at times 1 and 3. Each one lowers our confidence, and the final estimate ends up one position away from the robot's true location with only about 49% confidence.
Now let's run a very long simulation and see how the filter responds to errors._____no_output_____在时刻1有一个传感器异常,但我们仍然对预测的位置有相当高的置信度。
现在加長模擬時間,看看滤波器是如何应对错误的。_____no_output_____
<code>
with figsize(y=5.5):
    for i in range(4):
        random.seed(3)
        plt.subplot(221+i)
        train_filter(148+i, kernel=[.1, .8, .1],
                     sensor_accuracy=.8,
                     move_distance=4, do_print=False)
        plt.title(f'iteration {148 + i}')_____no_output_____
</code>
We can see that there was a problem on iteration 149 as the confidence degrades. But within a few iterations the filter is able to correct itself and regain confidence in the estimated position._____no_output_____我们可以看到虽然第149次迭代出现问题,导致置信度降低,但是数次迭代后,滤波器能自我校正,使其对预测位置的置信度再次提升。_____no_output_____## Bayes Theorem and the Total Probability Theorem_____no_output_____## 贝叶斯理论和全概率定理_____no_output_____We developed the math in this chapter merely by reasoning about the information we have at each moment. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem) and the [*Total Probability Theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability).
Bayes theorem tells us how to compute the probability of an event given previous information.
We implemented the `update()` function with this probability calculation:
$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization\, factor}}$$
We haven't developed the mathematics to discuss Bayes yet, but this is Bayes' theorem. Every filter in this book is an expression of Bayes' theorem. In the next chapter we will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation:
$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$
where $\| \cdot\|$ expresses normalizing the term.
We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.
Likewise, the `predict()` step computes the total probability of multiple possible events. This is known as the *Total Probability Theorem* in statistics, and we will also cover this in the next chapter after developing some supporting math.
For now I need you to understand that Bayes' theorem is a formula to incorporate new information into existing information._____no_output_____我们仅通过利用每个时刻的现有信息作推理就引出了本章的所有数学公式。这个过程中,我们发现了[贝叶斯理论](https://en.wikipedia.org/wiki/Bayes%27_theorem)和[全概率定理](https://en.wikipedia.org/wiki/Law_of_total_probability).
贝叶斯理论告诉我们如何基于给定历史信息的计算某一事件的概率。
我们实现了`update()`函数以执行如下的概率计算:
$$ \mathtt{後验} = \frac{\mathtt{似然}\times \mathtt{先验}}{\mathtt{归一化系数}}$$
我们还没有用数学推导讨论过贝叶斯,但这就是贝叶斯定理。本书的每个滤波器都是贝叶斯定理的表达形式。下一章我们会用多种方式推导数学公式,这个过程中,如下等式所表示的简单思想会从各方面掩藏起来。
$$ 更新後的知识 = \big\|新知识的似然度\times 先验知识 \big\|$$
其中 $\| \cdot\|$ 表示归一化。
我们通过关于狗跟踪问题的简单推理得到这一式子。然而,我们将会看到这一式子适用于一些列滤波问题。在接下来的每一章中我们都会用到它。
类似的,`predict()`步骤计算多个可能事件的总概率。这在统计学中被称为“全概率定理”,在下一章中,我们会在一系列数学推导後讲授此定理。
当下我想要你理解的是,贝叶斯公式所做的是将新信息合并到现有信息中去。
_____no_output_____## Summary
The code is very short, but the result is impressive! We have implemented a form of a Bayesian filter. We have learned how to start with no information and derive information from noisy sensors. Even though the sensors in this chapter are very noisy (most sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the predict step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.
This book is mostly about the Kalman filter. The math it uses is different, but the logic is exactly the same as used in this chapter. It uses Bayesian reasoning to form estimates from a combination of measurements and process models.
**If you can understand this chapter you will be able to understand and implement Kalman filters.** I cannot stress this enough. If anything is murky, go back and reread this chapter and play with the code. The rest of this book will build on the algorithms that we use here. If you don't understand why this filter works you will have little success with the rest of the material. However, if you grasp the fundamental insight - multiplying probabilities when we measure, and shifting probabilities when we update leads to a converging solution - then after learning a bit of math you are ready to implement a Kalman filter._____no_output_____## 总结
虽然代码很短,但它的运行结果让人映像深刻!我们实现了贝叶斯滤波器的一种具体形式。我们学会了如何从没有任何信息的状态开始,从带有噪声的传感器中推理出信息。虽然本章所用的传感器大多包含许多噪声(举例来说,大多数的传感器的准确率在80%以上),我们还是能快速收敛到狗的最可能位置。我们学到,预测操作总是减少我们的知识,但一旦引入额外的测量值我们就能改善我们的知识,加快收敛到最可能結果的速度。即使引入的测量值中包含噪声也是如此。
本書的主旨是卡爾曼濾波。卡爾曼濾波使用的數學工具有些不同,但其邏輯同本章所用的是一樣的。它使用貝葉斯推理結合測量與過程模型構造估計。
**如果你能理解本章的內容,你就能理解和實現卡爾曼濾波。**我怎么强调这一点都不为过。 如果有任何不清楚的地方,可以重新阅读本章,跑一跑代码。 本书的其余部分将建立在我们在这里使用的算法之上。 如果你不理解此濾波器的工作原理,你也无法順利學習後續內容。 但是,如果你掌握了基本知识——測量時測量分佈乘上概率分佈,更新時平移概率分佈,這樣我們就能收斂於解——那么在学习一些数学原理后,就可以准备实施卡尔曼滤波器了。_____no_output_____## References
* [1] D. Fox, W. Burgard, and S. Thrun. "Monte carlo localization: Efficient position estimation for mobile robots." In *Journal of Artifical Intelligence Research*, 1999.
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html
* [2] Dieter Fox, et. al. "Bayesian Filters for Location Estimation". In *IEEE Pervasive Computing*, September 2003.
http://swarmlab.unimaas.nl/wp-content/uploads/2012/07/fox2003bayesian.pdf
* [3] Sebastian Thrun. "Artificial Intelligence for Robotics".
https://www.udacity.com/course/cs373
* [4] Khan Acadamy. "Introduction to the Convolution"
https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution
* [5] Wikipedia. "Convolution"
http://en.wikipedia.org/wiki/Convolution
* [6] Wikipedia. "Law of total probability"
http://en.wikipedia.org/wiki/Law_of_total_probability
* [7] Wikipedia. "Time Evolution"
https://en.wikipedia.org/wiki/Time_evolution
* [8] We need to rethink how we teach statistics from the ground up
http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up_____no_output_____## 參考資料
* [1] D. Fox, W. Burgard, and S. Thrun. "Monte carlo localization: Efficient position estimation for mobile robots." In *Journal of Artifical Intelligence Research*, 1999.
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html
* [2] Dieter Fox, et. al. "Bayesian Filters for Location Estimation". In *IEEE Pervasive Computing*, September 2003.
http://swarmlab.unimaas.nl/wp-content/uploads/2012/07/fox2003bayesian.pdf
* [3] Sebastian Thrun. "Artificial Intelligence for Robotics".
https://www.udacity.com/course/cs373
* [4] Khan Acadamy. "Introduction to the Convolution"
https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution
* [5] Wikipedia. "Convolution"
http://en.wikipedia.org/wiki/Convolution
* [6] Wikipedia. "Law of total probability"
http://en.wikipedia.org/wiki/Law_of_total_probability
* [7] Wikipedia. "Time Evolution"
https://en.wikipedia.org/wiki/Time_evolution
* [8] We need to rethink how we teach statistics from the ground up
http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up_____no_output_____
| {
"repository": "anti-destiny/Kalman-and-Bayesian-Filters-in-Python",
"path": "02-Discrete-Bayes.ipynb",
"matched_keywords": [
"evolution"
],
"stars": null,
"size": 487180,
"hexsha": "d081d31e81b68e8b63a02c7a554a1a3b16110e55",
"max_line_length": 44944,
"avg_line_length": 131.7414818821,
"alphanum_fraction": 0.8648507738
} |